Culture

How dolphins avoid "the bends"

video: The dolphin has a custom-made spirometer, a device to measure lung function, placed over its blowhole. The dolphin is then asked, via a hand signal from the trainer, to hold its breath, and turns around to hold its breath underwater for a brief period. Here, you can see the suction cup ECG electrodes that measure the heart rate. The dolphin then turns again and breathes into the spirometer so that the volume of each breath and the oxygen and carbon dioxide content of the exhaled air can be measured.

Image: 
Dolphin Quest - Oahu

Dolphins actively slow down their hearts before diving, and can even adjust their heart rate depending on how long they plan to dive for, a new study suggests. Published in Frontiers in Physiology, the findings provide new insights into how marine mammals conserve oxygen and adjust to pressure while diving.

The authors worked with three male bottlenose dolphins (Tursiops truncatus), specially trained to hold their breath for different lengths of time upon instruction. "We trained the dolphins for a long breath-hold, a short one, and one where they could do whatever they want", explains Dr Andreas Fahlman of Fundación Oceanogràfic, Valencia, Spain. "When asked to hold their breath, their heart rates lowered before or immediately as they began the breath-hold. We also observed that the dolphins reduced their heart rates faster and further when preparing for the long breath-hold, compared to the other holds".

The results reveal that dolphins, and possibly other marine mammals, may consciously alter their heart rate to suit the length of their planned dive. "Dolphins have the capacity to vary their reduction in heart rate as much as you and I are able to reduce how fast we breathe", suggests Fahlman. "This allows them to conserve oxygen during their dives, and may also be key to avoiding diving-related problems such as decompression sickness, known as 'the bends'".

Understanding how marine mammals are able to dive safely for long periods of time is crucial to mitigating the health impacts of man-made sound disturbance on these animals. "Man-made sounds, such as underwater blasts during oil exploration, are linked to problems such as 'the bends' in these animals", continues Fahlman. "If this ability to regulate heart rate is important to avoid decompression sickness, and sudden exposure to an unusual sound causes this mechanism to fail, we should avoid sudden loud disturbances and instead slowly increase the noise level over time to cause minimal stress. In other words, our research may provide very simple mitigation methods to allow humans and animals to safely share the ocean".

The practical challenges of measuring a dolphin's physiological functions, such as heart rate and breathing, have previously prevented scientists from fully understanding changes in their physiology during diving. "We worked with a small sample size of three trained male dolphins housed in professional care", Fahlman explains. "We used custom-made equipment to measure the lung function of the animals, and attached electrocardiogram (ECG) sensors to measure their heart rates".

"The close relationship between the trainers and animals is hugely important when training dolphins to participate in scientific studies", explains Andy Jabas, Dolphin Care Specialist at Siegfried & Roy's Secret Garden and Dolphin Habitat at the Mirage, Las Vegas, United States, home of the dolphins studied here. "This bond of trust enabled us to have a safe environment for the dolphins to become familiar with the specialized equipment and to learn to perform the breath-holds in a fun and stimulating training environment. The dolphins all participated willingly in the study and were able to leave at any time".

Credit: 
Frontiers

Less sedentary time reduces heart failure risk for older women

DALLAS, Nov. 24, 2020 -- Even with regular physical activity, older women (ages 50-79) who spend more waking hours in sedentary behaviors, such as sitting or lying down, have an increased risk of heart failure serious enough to require hospitalization, according to new research published today in Circulation: Heart Failure, an American Heart Association journal.

"For heart failure prevention, we need to promote taking frequent breaks from prolonged sitting or lying down, in addition to trying to achieve guideline levels of physical activity, such as those recommended by the American Heart Association," said Michael J. LaMonte, Ph.D., M.P.H., lead author of the study and research associate professor of epidemiology in the School of Public Health and Health Professions at the University at Buffalo in Buffalo, New York. "Very few studies have been published on sedentary time and heart failure risk, and even fewer have focused on older women in whom both sedentary behavior and heart failure are quite common."

To determine whether increased sedentary time raised the risk of heart failure in older women, researchers examined the records of almost 81,000 postmenopausal women (average age of 63 years) from the Women's Health Initiative Observational Study. Participating women self-reported the amount of time spent daily, while awake, either sitting, lying down or being physically active.

Researchers divided participants by the total daily sedentary time (sitting and lying down combined): 6.5 hours or less; 6.6-9.5 hours; and more than 9.5 hours. Total number of daily hours spent sitting for each participant was also itemized: 4.5 hours or less; 4.6-8.5 hours; and more than 8.5 hours. None of the participants had been diagnosed with heart failure when the study began, and all were able to walk the distance of at least one block without any assistance.

During an average of 9 years of follow-up, 1,402 women were hospitalized due to heart failure. Compared with women who reported spending less than 6.5 hours per day sitting or lying down, the risk of heart failure hospitalization was:

15% higher in women reporting 6.6-9.5 hours daily spent sitting or lying down; and

42% higher in women reporting more than 9.5 hours daily spent sitting or lying down.

Compared with women who reported sitting less than 4.5 hours a day, the risk of heart failure hospitalization was:

14% higher in women who sat between 4.6 and 8.5 hours each day; and

54% higher in women who sat more than 8.5 hours a day.
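
For reference, the cut-points and relative risks quoted above can be collected into a small lookup. The sketch below is purely illustrative: the hour bands and percentages are the ones reported in this article, while the function names are ours and nothing here reproduces the study's actual statistical model.

```python
def sedentary_rr(hours_per_day: float) -> float:
    """Reported relative risk of heart failure hospitalization by total daily
    sedentary time (sitting plus lying down). Figures quoted from the article."""
    if hours_per_day <= 6.5:
        return 1.00   # reference group
    if hours_per_day <= 9.5:
        return 1.15   # 15% higher risk
    return 1.42       # 42% higher risk


def sitting_rr(hours_per_day: float) -> float:
    """Reported relative risk by daily sitting time alone."""
    if hours_per_day <= 4.5:
        return 1.00   # reference group
    if hours_per_day <= 8.5:
        return 1.14   # 14% higher risk
    return 1.54       # 54% higher risk


print(sedentary_rr(10.0))  # 1.42
print(sitting_rr(5.0))     # 1.14
```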

"These findings are consistent with other studies confirming that people with more daily sedentary time are more likely to develop chronic health conditions such as diabetes, high blood pressure, heart attack, stroke and premature death from heart disease and other causes," LaMonte said.

The association between sedentary time and heart failure hospitalization risk remained after accounting for known heart failure risk factors such as high blood pressure, diabetes, obesity and previous heart attack. An important finding in this study was that the association between more sedentary time and a higher risk of heart failure hospitalization was found even in the subgroup of women who were the most physically active and meeting recommended activity levels.

"Our message is simple: sit less and move more. Historically, we have emphasized promoting a physically active lifestyle for heart health - and we should continue to do so! However, our study clearly shows that we also need to increase efforts to reduce daily sedentary time and encourage adults to frequently interrupt their sedentary time. This does not necessarily require an extended bout of physical activity; it might simply be standing up for 5 minutes or standing and moving one's feet in place. We do not have sufficient evidence on the best approach to recommend for interrupting sedentary time. However, accumulating data suggest that habitual activities such as steps taken during household and other activities of daily living are an important aspect of cardiovascular disease prevention and healthy aging," said LaMonte.

Although the analysis only included postmenopausal women, the findings are similar to those reported in a multiethnic study of men in California and are likely to be generalizable to men. Because the study was observational, it cannot establish a cause-and-effect relationship between sedentary time and heart failure risk. The study was limited by assessing hours of sedentary time via questionnaire rather than a wearable device to track movement. Researchers do note that they were able to find a significantly elevated risk of heart failure even after statistically controlling for confounding factors, which raises their confidence in the accuracy of the direction and magnitude of association measured.

Credit: 
American Heart Association

AI detects COVID-19 on chest x-rays with accuracy and speed

image: Generated heatmaps appropriately highlighted abnormalities in the lung fields of images accurately labeled as COVID-19 positive (A-C), in contrast to images accurately labeled as negative for COVID-19 (D). The intensity of colors on the heatmap corresponds to features of the image that are important for the prediction of COVID-19 positivity.

Image: 
Northwestern University

Algorithm was trained, tested on the largest COVID-era dataset (17,002 X-ray images)

Algorithm analyzed X-ray images of lungs about 10 times faster, 1-6% more accurately than individual specialized radiologists

Algorithm is now publicly available for other researchers to continue to train it with new data

System could also potentially flag patients for isolation and testing who are not otherwise under investigation for COVID-19

'It would take seconds to screen a patient and determine if they need to be isolated,' researcher says

EVANSTON, Ill. -- Northwestern University researchers have developed a new artificial intelligence (A.I.) platform that detects COVID-19 by analyzing X-ray images of the lungs.

Called DeepCOVID-XR, the machine-learning algorithm outperformed a team of specialized thoracic radiologists -- spotting COVID-19 in X-rays about 10 times faster and 1-6% more accurately.

The researchers believe physicians could use the A.I. system to rapidly screen patients who are admitted into hospitals for reasons other than COVID-19. Faster, earlier detection of the highly contagious virus could potentially protect health care workers and other patients by triggering the positive patient to isolate sooner.

The study's authors also believe the algorithm could potentially flag patients for isolation and testing who are not otherwise under investigation for COVID-19.

The study will be published on Nov. 24 in the journal Radiology.

"We are not aiming to replace actual testing," said Northwestern's Aggelos Katsaggelos, an A.I. expert and senior author of the study. "X-rays are routine, safe and inexpensive. It would take seconds for our system to screen a patient and determine if that patient needs to be isolated."

"It could take hours or days to receive results from a COVID-19 test," said Dr. Ramsey Wehbe, a cardiologist and postdoctoral fellow in A.I. at the Northwestern Medicine Bluhm Cardiovascular Institute. "A.I. doesn't confirm whether or not someone has the virus. But if we can flag a patient with this algorithm, we could speed up triage before the test results come back."

Katsaggelos is the Joseph Cummings Professor of Electrical and Computer Engineering in Northwestern's McCormick School of Engineering. He also has courtesy appointments in computer science and radiology. Wehbe is a postdoctoral fellow at Bluhm Cardiovascular Institute at Northwestern Memorial Hospital.

A trained eye

For many patients with COVID-19, chest X-rays display similar patterns. Instead of clear, healthy lungs, their lungs appear patchy and hazy.

"Many patients with COVID-19 have characteristic findings on their chest images," Wehbe said. "These include 'bilateral consolidations.' The lungs are filled with fluid and inflamed, particularly along the lower lobes and periphery."

The problem is that pneumonia, heart failure and other illnesses in the lungs can look similar on X-rays. It takes a trained eye to tell the difference between COVID-19 and something less contagious.

Katsaggelos' laboratory specializes in using A.I. for medical imaging. He and Wehbe had already been working together on cardiology imaging projects and wondered if they could develop a new system to help fight the pandemic.

"When the pandemic started to ramp up in Chicago, we asked each other if there was anything we could do," Wehbe said. "We were working on medical imaging projects using cardiac echo and nuclear imaging. We felt like we could pivot and apply our joint expertise to help in the fight against COVID-19."

A.I. vs. human

To develop, train and test the new algorithm, the researchers used 17,002 chest X-ray images -- the largest published clinical dataset of chest X-rays from the COVID-19 era used to train an A.I. system. Of those images, 5,445 came from COVID-19-positive patients from sites across the Northwestern Memorial Healthcare System.

The team then tested DeepCOVID-XR against five experienced cardiothoracic fellowship-trained radiologists on 300 random test images from Lake Forest Hospital. Each radiologist took approximately two-and-a-half to three-and-a-half hours to examine this set of images, whereas the A.I. system took about 18 minutes.

The radiologists' accuracy ranged from 76-81%. DeepCOVID-XR performed slightly better at 82% accuracy.
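
The "about 10 times faster and 1-6% more accurately" claim follows directly from the figures above. A quick back-of-envelope check in Python (our own arithmetic, not the authors' analysis):

```python
# Speed: 2.5-3.5 hours for the radiologists vs. about 18 minutes for the A.I. on 300 images.
ai_minutes = 18
radiologist_minutes = (2.5 * 60, 3.5 * 60)
speedup = tuple(m / ai_minutes for m in radiologist_minutes)
print(f"speedup: {speedup[0]:.1f}x to {speedup[1]:.1f}x")   # ~8.3x to ~11.7x, i.e. "about 10 times"

# Accuracy: 82% for DeepCOVID-XR vs. 76-81% for the individual radiologists.
ai_accuracy = 0.82
radiologist_accuracy = (0.76, 0.81)
gain = tuple(ai_accuracy - a for a in radiologist_accuracy)
print(f"accuracy gain: {gain[1]:.2f} to {gain[0]:.2f}")      # 0.01 to 0.06, i.e. "1-6%"
```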

"These are experts who are sub-specialty trained in reading chest imaging," Wehbe said. "Whereas the majority of chest X-rays are read by general radiologists or initially interpreted by non-radiologists, such as the treating clinician. A lot of times decisions are made based off that initial interpretation."

"Radiologists are expensive and not always available," Katsaggelos said. "X-rays are inexpensive and already a common element of routine care. This could potentially save money and time -- especially because timing is so critical when working with COVID-19."

Limits to diagnosis

Of course, not all COVID-19 patients show signs of illness, including on their chest X-rays. Especially early in the virus' progression, patients likely will not yet have manifestations on their lungs.

"In those cases, the A.I. system will not flag the patient as positive," Wehbe said. "But neither would a radiologist. Clearly there is a limit to radiologic diagnosis of COVID-19, which is why we wouldn't use this to replace testing."

The Northwestern researchers have made the algorithm publicly available with hopes that others can continue to train it with new data. Right now, DeepCOVID-XR is still in the research phase, but could potentially be used in the clinical setting in the future.

Credit: 
Northwestern University

Can drinking cocoa make you smarter?

Increased consumption of flavanols - a group of molecules which occur naturally in fruit and vegetables - can increase your mental agility, according to new research at the University of Birmingham.

A team in the University's School of Sport, Exercise and Rehabilitation Sciences found that people given a cocoa drink containing high levels of flavanols were able to complete certain cognitive tasks more efficiently than when drinking a non-flavanol enriched-drink.

The study participants also underwent non-invasive brain imaging to measure blood oxygenation levels in the brain. Working with experts at the University of Illinois, the researchers showed that participants who had consumed the flavanol-rich drink produced a faster and greater increase in blood oxygenation levels in response to artificially elevated levels of CO2 (hypercapnia).

Flavanols, a sub-group of plant flavonoids, are present in cocoa, grapes, apples, tea, berries and other foods. They are known to have a beneficial effect on cardiovascular health, but their effects on brain health are not well understood. This study, published in Scientific Reports, is the first time the cognitive effects of flavanols in young, healthy subjects and the link with brain blood oxygenation have been investigated.

Lead author, Dr Catarina Rendeiro, of the University of Birmingham's School of Sport, Exercise and Rehabilitation Sciences, explains: "We used cocoa in our experiment, but flavanols are extremely common in a wide range of fruit and vegetables. By better understanding the cognitive benefits of eating these food groups, as well as the wider cardiovascular benefits, we can offer improved guidance to people about how to make the most of their dietary choices."

In the study, 18 healthy male participants aged between 18 and 40 underwent a standard procedure to challenge the brain's blood circulation that involves breathing 5% carbon dioxide - about 100 times the normal concentration in air, producing an effect called hypercapnia. Non-invasive near-infrared spectroscopy, a technique that uses light to capture changes in blood oxygenation levels, was used to track the increases in brain oxygenation in the frontal cortex in response to this carbon dioxide challenge.

Each participant underwent the test before and after drinking a cocoa drink on two occasions and on one of those occasions, the drink was enriched with flavanols. Following the carbon dioxide test, the participants were asked to complete a number of progressively complex cognitive tests.

The researchers found that the participants who had taken the flavanol-enriched drink had the highest levels of blood oxygenation in response to hypercapnia, reaching levels up to three times higher than participants drinking the non-flavanol-enriched drink. They also achieved these elevated levels 1 minute faster than participants who drank the non-enriched cocoa.

In the cognitive tests, the researchers found significant differences in the speed and accuracy with which volunteers completed the higher complexity tasks, with volunteers who had taken the flavanol-enriched drink performing the tasks 11 per cent faster on average.

"Our results showed a clear benefit for the participants taking the flavanol-enriched drink - but only when the task became sufficiently complicated," explains Dr Rendeiro. "We can link this with our results on improved blood oxygenation - if you're being challenged more, your brain needs improved blood oxygen levels to manage that challenge. It also further suggests that flavanols might be particularly beneficial during cognitively demanding tasks".

The researchers also noted a further outcome. Within the study cohort, there was a small group who did not benefit at all from the flavanol-enriched drink in terms of blood oxygenation levels, and who also did not derive any cognitive benefit. This group was shown to have high levels of brain oxygenation responses to start with, which were not increased further by drinking the enriched cocoa. "This may indicate that some individuals, who perhaps are already very fit, have little room for further improvement," explains Dr Rendeiro.

"The small group of participants who did not react to the flavanol gives us additional evidence to confirm the link between increased brain blood oxygenation and cognitive ability," adds Dr Rendeiro.

Credit: 
University of Birmingham

Pitt scientists provide insights into the quality of life of bariatric surgery patients

image: Wendy King, Ph.D.

Image: 
UPMC

PITTSBURGH, Nov. 24, 2020 - In today's issue of the Annals of Surgery, epidemiologists from the University of Pittsburgh published two separate analyses that could help guide clinicians and policymakers in counseling bariatric surgery patients to improve their quality of life for many years to come.

A study led by Gretchen White, Ph.D., assistant professor of medicine and clinical and translational science at Pitt's Institute for Clinical Research and Education, identified several patient characteristics pre- and post-surgery - such as insufficient social support and unrealistic weight loss expectations - that can predict not being satisfied long-term with Roux-en-Y gastric bypass surgery.

In a second paper, White's colleague and collaborator Wendy King, Ph.D., associate professor of epidemiology at Pitt's Graduate School of Public Health, found that higher physical activity levels after bariatric surgery lessen depressive symptoms and improve mental and physical quality of life, irrespective of weight loss.

Every year, tens of thousands of Americans who struggle with obesity undergo gastric bypass surgery to manage their body weight and comorbidities, such as diabetes. Yet, the Pitt scientists found, while most patients are at least somewhat satisfied with their surgery long-term, satisfaction decreased from 85% to 77% three to seven years post-surgery. Most patients also continue to lead sedentary lives, which contributes to weight regain and negatively affects their mental well-being.

Knowing which patients are more likely to be dissatisfied with their surgery can help doctors guide a conversation about expectations and maximize beneficial effects of the procedure, White said. Similarly, providing quantitative data that show being more physically active has positive effects on a person's well-being might help shift a patient's perspective on exercise.

"Our data support why it's important to counsel patients regarding their physical activity behaviors," said King.

"Although patients in general are not meeting physical activity recommendations post-surgery, we found a dose-response relationship - the more active patients were, the better improvement they had in depressive symptoms and health-related quality of life. Every bit matters."

Both studies analyzed data collected from 1,700 adults who underwent Roux-en-Y gastric bypass surgery between March 2006 and April 2009 and were followed for up to seven years.

In a pre-operation assessment, younger age, lower body mass index (BMI), higher percent weight loss needed to reach "dream weight," worse physical and mental health status, and less social support all independently predicted higher risk of not being satisfied with surgery. In addition, less weight loss, worsening physical and mental health status, less social support and greater depressive symptomology after the surgery were associated with not being satisfied.

"Knowing these characteristics can be useful for clinicians when talking to patients about how realistic their post-surgery expectations are, particularly when having conversations about achieving their dream weight," said White. "Modifying expectations early may lead to better satisfaction long-term."

In a separate study, King found that improvements in mental and physical health-related quality of life differed by physical activity level. By analyzing objective measures collected from wearable activity monitors -- step count, amount of time spent sedentary and amount of time spent doing moderate-to-vigorous activity -- she found that higher levels of physical activity related to improvements independent of weight loss.

In her recent work, also published in the Annals of Surgery, King showed that higher activity level predicted better weight loss and less weight regain - but that study didn't look at quality-of-life measures.

Even after the surgery, an average bariatric surgery patient leads a significantly more sedentary lifestyle than recommended by physicians.

King says this may explain why the magnitude of associations between physical activity level and improvement in health-related quality of life and depressive symptoms in their cohort was small. Still, the findings provide support for expanding measures that increase physical activity in bariatric surgery patients to influence mental and physical health outcomes.

"Most insurance providers include coverage for dietary counseling but don't reimburse expenses for hiring a health coach or getting a gym membership," said King. "There needs to be more systemic support to help patients increase their activity level and maintain an active lifestyle post-surgery."

Credit: 
University of Pittsburgh

New research shows Vype ePen 3 highly preferred by vapers

Users expressed a stronger preference for Vype ePen 3 versus the previous generation Vype ePen 2

Study supports BAT’s commitment to conduct and publish scientific research which helps build A Better Tomorrow by reducing the health impact of its business

Innovation drives significant product improvements to encourage current smokers to switch to scientifically-substantiated, reduced-risk alternatives

MONDAY, 30th NOVEMBER 2020: New research shows that Vype ePen 3, BAT’s flagship vapour product, can provide smokers with levels of nicotine similar to those from standard cigarettes. If used exclusively, Vype ePen 3 could help smokers avoid many of the risks associated with smoking.

The study, which has been published in Scientific Reports, compared both Vype ePen 3 and Vype ePen 2 to combustible cigarettes. It assessed a variety of e-liquids and nicotine strengths for each product, and how nicotine concentration and delivery device combined to affect user preference.

Results from the study show that Vype ePen 3 was superior to Vype ePen 2 for nicotine delivery and ranked significantly higher for user satisfaction, with the newer device scoring nearly double for likability compared to its predecessor.

The study showed that the maximum concentration of nicotine in the blood after using the Vype ePen 3 (18mg/mL protonated nicotine) was on average twice that achieved using Vype ePen 2 (18mg/mL unprotonated nicotine).

BAT is committed to building A Better Tomorrow by reducing the health impact of its business through providing a range of enjoyable and lower risk products. The company continues to be clear that combustible cigarettes pose serious health risks, and the only way to avoid these risks is not to start or to quit. BAT encourages those who would otherwise continue to smoke, to switch completely to scientifically-substantiated, reduced-risk alternatives.

This study is a part of a comprehensive programme of scientific research designed to assess and substantiate the reduced-risk potential of non-combustible products.

Dr David O’Reilly, BAT’s Director of Scientific Research said:
“The results of this study are very impressive and show how through continued innovation and research we have improved our understanding of what consumers want, but importantly how to deliver greater satisfaction with our products.

“We know that for many smokers, nicotine levels are an important factor in choosing a vaping product, particularly when initially switching. They want it to work for them, delivering the nicotine they want in a device they like. We think we have achieved this with Vype ePen 3 and hope that through continued product innovation we can encourage and enable those who would otherwise continue to smoke to switch to a reduced-risk alternative which has been scientifically substantiated.”

Study Design

This was a randomised study during which each participant was given 8 different options. The subjects were all current smokers, who were dual users of cigarettes and vapour products. Over a period of 9 days, each morning, following an overnight wash-out period (12hrs), participants used one of the available options. The level of nicotine in the blood (plasma) was assessed before, during and after their use. These data were used to evaluate changes in nicotine concentration in the user. Participants were also asked to rate their liking of each product after use.
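
As described, this is a within-subject crossover: each participant uses each of the eight product/strength combinations on a separate morning after a 12-hour washout. The sketch below shows one simple way such a schedule could be randomised; the arm labels and the use of an independent permutation per participant are our own illustrative assumptions, not the trial's published randomisation procedure.

```python
import random

# Hypothetical labels for the 8 study arms (placeholders, not the trial's actual products/strengths).
ARMS = [f"option_{i}" for i in range(1, 9)]


def crossover_schedule(participant_ids, seed=0):
    """Assign each participant an independent random ordering of the 8 arms,
    one arm per morning session following an overnight (12 h) washout."""
    rng = random.Random(seed)
    schedule = {}
    for pid in participant_ids:
        order = ARMS[:]           # copy so each participant gets their own permutation
        rng.shuffle(order)
        schedule[pid] = order     # mornings 1..8 within the 9-day study window
    return schedule


for pid, order in crossover_schedule(["P01", "P02"]).items():
    print(pid, order)
```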

BAT’s Vype ePen 3 was designed to provide consumers with the same high-quality product they expect while delivering an improved user experience. Early generations of e-cigarettes often delivered lower nicotine yields than combustible cigarettes, which many associate with lower acceptance or continued use by users.

BAT has continued to innovate the Vype ePen to develop its nicotine delivery capabilities and enhance user satisfaction. This study was designed to validate the acceptance and preference of consumers who frequently use cigarettes and vapour products.

One of the improved design features of the Vype ePen 3 is its increased power, which, together with its ability to deliver a nicotine salt, known as protonated nicotine, provides a higher aerosol mass, resulting in a greater absorption of nicotine. The ability to increase the aerosol mass also reduces the potential for irritation of the pharynx, which has been associated with higher levels of unprotonated nicotine.

Credit: 
R&D at British American Tobacco

COVID-19 second wave in Myanmar causes dramatic increases in poverty

Yangon, Myanmar: In September 2020, 59 percent of 1000 households surveyed in urban Yangon and 66 percent of 1000 households surveyed in the rural Dry Zone earned less than $1.90/day (a common measure of extreme poverty), according to a new study from researchers at the International Food Policy Research Institute (IFPRI). The study provides new insight into the economic impacts of the COVID-19 pandemic and strict lockdowns, as well as the additional efforts needed to protect Myanmar's vulnerable people and fragile economic recovery.

"Only 16% of our respondents were poor in January this year before the COVID crisis hit, but now 62% are poor. What is really worrying is that during the second COVID-19 wave one-third of our households said they earned zero income in the last month. That level of poverty poses huge risks for food insecurity and malnutrition" said IFPRI Senior Research Fellow and lead author of the study, Derek Headey.

"Though necessary to control the virus, lockdown periods have resulted in disastrous impacts on poverty and need to be accompanied by larger and better targeted cash transfers if Myanmar is to successfully contain the economic destruction of COVID-19's second wave."

The study, "Poverty, food insecurity, and social protection during COVID-19 in Myanmar: Combined evidence from a household telephone survey and micro-simulations," analyzed data from over 2000 women in Yangon and the Dry Zone between June and September. The findings were recently highlighted in a recent IFPRI policy seminar, "Assessments of the impact of COVID-19 on Myanmar's food security and welfare". In that seminar, officials from several government ministries acknowledged the challenge of expanding social protection in response to COVID-19, as well as the need to strengthen monitoring and evaluation of COVID-19 economic relief programs.

"The government of Myanmar has rapidly expanded social protection in 2020 and they should be commended for that. However, the scale of the second wave simply means they must do more. Over half of poor households in our survey received cash transfers by September, so unfortunately a lot of poor people are still falling through the cracks", said Headey.

The study also used a forward-looking simulation analysis to show that extreme poverty at the national level triples in lockdowns but can be reduced by cash transfers larger than those the government is currently employing. The Myanmar government has been giving 20,000 kyat transfers to most households, which can reduce national poverty by 18 percent. However, increasing those transfers to 40,000 kyat would cut the lockdown poverty rate by half.
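
To give a flavour of what such a micro-simulation involves, here is a deliberately simplified sketch. Only the $1.90/day poverty line and the 20,000-kyat transfer amount come from the article; the income distribution, lockdown shock, household size and exchange rate are made-up assumptions, so the printed rates will not match IFPRI's estimates.

```python
import numpy as np

POVERTY_LINE = 1.90     # USD per person per day (figure used in the article)
KYAT_PER_USD = 1300     # assumed 2020 exchange rate (illustrative)
HOUSEHOLD_SIZE = 4      # assumed average household size (illustrative)


def poverty_rate(income_per_capita_usd_day):
    """Share of households below the extreme-poverty line."""
    return float(np.mean(income_per_capita_usd_day < POVERTY_LINE))


def simulate(n=10_000, income_shock=0.5, transfer_kyat=20_000, seed=1):
    """Toy micro-simulation: draw per-capita incomes, apply a lockdown income shock,
    then add a monthly household cash transfer converted to USD per person per day."""
    rng = np.random.default_rng(seed)
    baseline = rng.lognormal(mean=1.5, sigma=0.8, size=n)            # USD/person/day, made up
    lockdown = baseline * (1 - income_shock)                         # income loss during lockdown
    transfer = transfer_kyat / KYAT_PER_USD / 30 / HOUSEHOLD_SIZE    # USD/person/day
    return poverty_rate(baseline), poverty_rate(lockdown), poverty_rate(lockdown + transfer)


pre, locked, with_transfer = simulate()
print(f"pre-COVID {pre:.0%} -> lockdown {locked:.0%} -> lockdown + transfer {with_transfer:.0%}")
```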

"I think there is a growing consensus both within government and among development partners that greater social protection efforts are needed", said Headey. "We know the disease is not going away soon, so we must do better to protect households from the economic ravages of this disease."

Credit: 
International Food Policy Research Institute

Historical bias overlooks genes related to COVID-19

Based on genome-wide experiments, the human body has 2,064 genes relevant to COVID-19. So why are researchers only studying 611 of them?

A historical bias -- which has long dictated which human genes are studied -- is now affecting how biomedical researchers study COVID-19, according to new Northwestern University research.

Although biomedical researchers know that many overlooked human genes play a role in COVID-19, they currently do not study them. Instead, researchers that study COVID-19 continue to focus on human genes that have already been heavily investigated independent of coronaviruses.

"For understandable reasons, researchers tend to build upon existing knowledge and research tools. They appear to select genes to study based on the ease of experimentation rather than their ultimate relevance to a disease," said Northwestern's Thomas Stoeger, who co-led the research. "This means that research into COVID-19 concentrates only on a small subset of the human genes involved in the response to the virus. Consequently, many aspects of the response of human cells toward COVID-19 remain not understood."

"There are many genes related to COVID-19, but we don't know what they are doing in the context of COVID-19," added Northwestern's Luís Amaral, who co-led the study with Stoeger. "We didn't study these genes before the pandemic, and COVID-19 does not seem to be an incentive to investigate them."

The research will be published on Nov. 24 in the journal eLife.

Stoeger is a data science scholar at the Northwestern Institute on Complex Systems (NICO) and the Center for Genetic Medicine. Through a "Pathway to Independence" award from the National Institute of Aging, Stoeger is starting a research laboratory dedicated to uncovering unstudied genes with important contributions to aging and age-related diseases. Amaral is the Erastus O. Haven Professor of Chemical and Biological Engineering in Northwestern's McCormick School of Engineering. Stoeger and Amaral are both members of Successful Clinical Response in Pneumonia Therapy (SCRIPT) Systems Biology Center.

Pinpointing a historical bias

This study builds on Stoeger and Amaral's 2018 research, which was the first to explain why some human genes are more popular to study than others. In that work, they found that 30% of all genes have never been studied and less than 20% of genes are the subject of more than 90% of published papers.

Despite the increasing availability of new techniques to study and characterize genes, researchers continue to study a small group of genes that scientists have studied since the 1980s. Historically, these genes have been easier to investigate experimentally. If an animal model has a similar gene to humans, for example, researchers are more likely to study that gene. The Northwestern team also discovered that postdoctoral fellows and Ph.D. students who focus on poorly characterized genes have a 50% reduced chance of becoming an independent researcher.

Although the Human Genome Project -- the identification and mapping of all human genes, completed in 2003 -- aimed to expand the scope of scientific study beyond this small subset of genes, it has yet to fulfill this aim.

"The bias to study the exact same human genes is very high," Amaral said. "The entire system is fighting the very purpose of the agencies and scientific knowledge, which is to broaden the set of things we study and understand. We need to make a concerted effort to incentivize the study of other genes important to human health."

Bias continues into COVID-era

For the new study, Stoeger and Amaral turned to LitCOVID, a collection of research publications related to COVID-19, curated by the National Library of Medicine. LitCOVID tags genes mentioned in the titles, abstracts or results sections of individual publications.

Northwestern researchers analyzed 10,395 published papers and pre-prints from the collection. Then, they integrated them into a custom database along with more than 100 different biological and bibliometric databases in an effort to survey and measure all aspects of biomedical research. Finally, they compared genes mentioned in the COVID-19 papers to COVID-19-related genes as identified by four genome-wide studies.
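
Conceptually, the comparison boils down to a set intersection between genes implicated by genome-wide experiments and genes mentioned in the COVID-19 literature. Below is a minimal sketch with placeholder gene symbols; the real study works with the 2,064 relevant and 611 studied genes cited above.

```python
# Sketch of the comparison described above: which genes implicated by genome-wide
# COVID-19 experiments actually appear in the COVID-19 literature. The gene sets
# below are placeholders, not the study's data.
covid_relevant_genes = {"ACE2", "TMPRSS2", "GENE_A", "GENE_B", "GENE_C"}   # from genome-wide screens
genes_in_litcovid = {"ACE2", "TMPRSS2", "GENE_A"}                          # tagged in LitCOVID papers

studied = covid_relevant_genes & genes_in_litcovid
overlooked = covid_relevant_genes - genes_in_litcovid

print(f"{len(studied)} of {len(covid_relevant_genes)} relevant genes appear in the literature")
print("overlooked:", sorted(overlooked))
# The study reports the real-world version of this gap: roughly 611 of 2,064 relevant genes.
```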

Stoeger and Amaral also tracked the occurrence of genes appearing in COVID-19 literature over time. Surprisingly, they observed that studies of COVID-19 genes are becoming not more but less expansive since the onset of the pandemic.

The team hopes its study inspires other researchers to be aware of past biases and to explore unstudied genes.

"Our findings have a direct implication on the long-term planning of scientific policymakers," Stoeger said. "We can point researchers toward human genes that are important for the cellular response against viruses but risk being ignored due to historically acquired biases, which are culturally reinforced."

Credit: 
Northwestern University

Brain waves guide us in spotlighting surprises

image: Earl K. Miller is co-senior author of the new study.

Image: 
The Picower Institute for Learning and Memory

If you open your office door one morning and there is a new package waiting on your desk, that's what you will notice most in the otherwise unchanged room. A new study by MIT and Boston University neuroscientists finds that the dynamic interplay of different brain wave frequencies, rather than dedicated circuitry, appears to govern the brain's knack for highlighting what's surprising and downplaying what's predictable.

By measuring thousands of neurons along the surface, or cortex, of the brain in animals as they reacted to predictable and surprising images, the researchers observed that low frequency alpha and beta brain waves, or rhythms, originating in the brain's frontal cognitive regions tamped down neural activity associated with predictable stimuli. That paved the way for neurons in sensory regions in the back of the brain to push forward information associated with unexpected stimuli via higher-frequency gamma waves. The backflow of alpha/beta carrying inhibitory predictions typically channeled through deeper layers of the cortex, while the forward flow of excitatory gamma carrying novel stimuli propagated across superficial layers.

"These interactions between beta and gamma are happening all over the cortex," said Earl Miller, Picower Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT and co-senior author of the study in Proceedings of the National Academy of Sciences. "And it's not generic - it targets the processing of specific stimuli."

In those regards, this new study extends much of Miller's recent work. Previously his lab at The Picower Institute for Learning and Memory has shown that in the prefrontal cortex, working memory depends on bursts of beta rhythms from deep layers regulating gamma frequency activity in more superficial layers. Those findings built, in part, on research published in 2012 by postdoc André Bastos, who is lead author of the new paper. Now the new study and another published by Miller's lab earlier this year suggest that this push and pull between the frequency bands is a common regulatory system of information flow in the cortex. Moreover, the new results show experimentally that it has a key role in predictive coding (as Bastos began to theorize in 2012), not just the related function of working memory.

Predictive coding is a key cognitive function that appears to become disrupted in autism spectrum disorders, noted Miller and Bastos. Some people with autism struggle to regard familiar stimuli as such, treating everything as new and equally salient. That can interfere with learning to recognize predictable situations and therefore the ability to make generalizations about experience.

"Because you are not able to tamp down and actively regulate predicted information, the brain is in a constant state of surging information forward which can be overwhelming," Bastos said. In fact, for anyone, he added, being in a completely new place where predictions of the environment have not yet had time to form can produce a feeling of sensory overload.

Setting and violating expectations

In the study, the team gave animals a simplified predictive coding experience. They were presented with an image as a cue, and then after a brief pause three images returned to the screen including the original. The animals simply had to direct their gaze to the previously cued image to complete the task. Sometimes the cue would be the same for many trials on end (thereby becoming predictable and familiar). Sometimes the cue would suddenly change, violating the predicted expectation. As animals played the game, the scientists were reading out neural activity and overall rhythms produced by that activity in five areas across the cortex, from visual areas in the back of the head, to the parietal cortex in the middle, to cognitive cortices, including the prefrontal cortex, in the front.

The team wasn't looking to analyze working memory, or how the animals held the cue image in memory. Instead they were measuring differences made when the cue image was predictable vs. when it was not. Their measurements showed that unpredicted stimuli generated more neural activity than predictable ones. They also revealed that the activity associated with unpredicted stimuli was strongest in the gamma frequency band (and the very low frequency theta band), while activity associated with predicted stimuli was strongest in alpha/beta frequencies.
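
For readers unfamiliar with band-limited power, the kind of comparison described above can be illustrated with a short spectral-analysis sketch. The signal below is synthetic, and the band edges (8-30 Hz for alpha/beta, 40-100 Hz for gamma) are conventional approximations rather than the exact ranges used in the paper.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 1000                                 # sampling rate, Hz (synthetic example)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic trace: a 20 Hz (beta-range) component, a 60 Hz (gamma-range) component, plus noise.
trace = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 60 * t) + rng.normal(0, 1, t.size)

freqs, psd = welch(trace, fs=fs, nperseg=2048)   # power spectral density via Welch's method


def band_power(low_hz, high_hz):
    """Integrate the PSD over a frequency band."""
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return trapezoid(psd[band], freqs[band])


alpha_beta = band_power(8, 30)    # conventional alpha/beta range (approximate)
gamma = band_power(40, 100)       # conventional gamma range (approximate)
print(f"alpha/beta power: {alpha_beta:.3f}   gamma power: {gamma:.3f}")
```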

These changes in power in each frequency weren't across the board - it was greatest specifically among neurons that responded most to the presented stimulus. That means that the regulatory changes of brain waves were acting most strongly on the neural circuitry processing the cue images the animals were seeing. For this reason, the team refers to their conceptual model of predictive coding as "predictive routing."

"Our paper shows that predictive coding can work without specialized circuits for detecting mismatches between predictions and reality," Miller said.

Bastos further explained, "The key element of this new model is that prediction can be accomplished by selectively inhibiting routes of information flow that carry predictable information."

Co-senior author Nancy Kopell, William Fairfield Warren Distinguished Professor of Mathematics at Boston University, added, "To be able to support that idea required the elaborate experiments described in the paper, involving measurements from multiple parts of the brain."

Subsequent analysis of the data also showed other key trends. Among them was that the coherence of activity between cortical regions was stronger in the alpha/beta band when the cue image was predictable and stronger in gamma when it was not. Moreover, the direction of these different bands (how they propagated back and forth across the cortex) showed that alpha/beta fed back from higher (cognitive) regions to lower (sensory) regions, while gamma fed forward from lower regions to higher ones.

Paying attention to exceptions

The scientists also saw that alpha/beta mostly peaked in deeper layers of the visual cortices, while gamma often was strongest in superficial layers. But there were exceptions along the way. The parietal cortex region 7A bucked the trend of peaking in gamma for unexpected stimuli, instead peaking in the higher end of the beta frequency band. One possibility, Kopell says, is that 7A was involved in a working memory buffer, which is believed to use beta oscillations. Another explanation could be that 7A's beta activity is related not to prediction as much as attention. Animals performing the task did need to pay at least some degree of attention to the cue, whether it was predictable or not.

Designing experiments that can fully separate attention from prediction could be an important future direction, Bastos said. Another important future goal will be to create computational models that simulate the interactions between layers and frequencies to inhibit predicted information.

"The laminar detail from the current data set will be very useful in producing such a model," Kopell said.

Credit: 
Picower Institute at MIT

Potential treatment against antibiotic-resistant bacteria causing gonorrhea and meningitis

image: INRS Professors Frédéric Veyrier, specialist in bacteriology, and Annie Castonguay, specialist in bioorganometallic chemistry, with Ph.D. student Eve Bernet, first author of the study.

Image: 
INRS

A team from the Institut national de la recherche scientifique (INRS) has demonstrated the effectiveness of an inexpensive molecule to fight antibiotic-resistant strains of the bacteria responsible for gonorrhea and meningococcal meningitis. These two infections affect millions of people worldwide. The results of this research, led by Professor Frédéric Veyrier and Professor Annie Castonguay, have just been published online in the Antimicrobial Agents and Chemotherapy journal.

Antibiotic Resistance

In recent years, rising rates of antibiotic resistance have been a concern for the World Health Organization (WHO), which marked World Antimicrobial Awareness Week from November 18 to 24, 2020. This concern is particularly acute in the case of Neisseria gonorrhoeae, some strains of which have developed resistance to all effective antibiotics. This bacterium is responsible for gonorrhea, an infection whose incidence has almost tripled in the last decade in Canada. Resistant strains of Neisseria meningitidis, which cause bacterial meningitis, have also emerged. In the current pandemic context, scientists are particularly concerned about a rise in antibiotic resistance driven by the increased use of antibiotics.

Unlike other bacteria, Neisseria that cause meningitis and gonorrhea evolve very rapidly because of certain intrinsic properties. For example, they have a great capacity to acquire genes from other bacteria. They also have a suboptimal DNA repair system that leads to mutations; antibiotic resistance can therefore easily emerge. The fact that these diseases affect many people around the world also gives them many opportunities to evolve, explaining why it is urgent to develop new ways to fight these bacteria.

A Specific Molecule

The research team has demonstrated the efficacy of a simple molecule in bacterial cultures and in a model of infection. Well known by chemists, this molecule is accessible, inexpensive, and could greatly help in the fight against these two types of pathogenic Neisseria. The advantage of this molecule is its specificity: "We noticed that the molecule only affects pathogenic Neisseria. It does not affect other types of Neisseria that are found in the upper respiratory system and can be beneficial," underlines Professor Frédéric Veyrier, also the scientific manager of the Platform for Characterization of Biological and Synthetic Nanovehicles.

During its experiments, the research team tested whether there was any possible resistance to the molecule: "We were able to isolate strains of bacteria that were less sensitive to the treatment, but this resistance was a double-edged sword because these mutants completely lost their virulence" says the microbiologist.

For the moment, the team does not know exactly why the molecule reacts specifically with the two types of Neisseria, but they suspect a connection with the membrane of these pathogens. This specificity opens the door to more fundamental research to determine what makes one bacterium virulent compared to others.

The next step will be to modify the structure of the molecule to make it more efficient, while maintaining its specificity. In parallel, the team wishes to identify an industrial partner to evaluate the possibility of developing a potential treatment.

Credit: 
Institut national de la recherche scientifique - INRS

1 in 3 who are aware of deepfakes say they have inadvertently shared them on social media

A Nanyang Technological University, Singapore (NTU Singapore) study has found that some Singaporeans have reported that, despite being aware of the existence of 'deepfakes' in general, they believe they have circulated deepfake content on social media which they later found out was a hoax.

Deepfakes, a portmanteau of 'deep learning' and 'fake', are ultrarealistic fake videos made with artificial intelligence (AI) software to depict people doing things they have never done - not just slowing them down or changing the pitch of their voice, but also making them appear to say things that they have never said at all.

In a survey of 1,231 Singaporeans led by NTU Singapore's Assistant Professor Saifuddin Ahmed, 54 per cent of the respondents said they were aware of deepfakes, of whom one in three reported sharing content on social media that they subsequently learnt was a deepfake.

The study also found that more than one in five of those who are aware of deepfakes said that they regularly encounter deepfakes online.

The survey findings, reported in the journal Telematics and Informatics in October, come in the wake of rising numbers of deepfake videos identified online. Sensity, a deepfake detection technology firm, estimates that the number of deepfake videos identified online doubled to 49,081 over the six months to June 2020.

Deepfakes that have gone viral include one with former President Barack Obama using an expletive to describe President Donald Trump in 2018, and another last year of Facebook founder Mark Zuckerberg claiming to control the future, thanks to stolen data.

Assistant Professor Saifuddin of NTU's Wee Kim Wee School of Communication and Information said: "Fake news refers to false information published under the guise of being authentic news to mislead people, and deepfakes are a new, far more insidious form of fake news. In some countries, we are already witnessing how such deepfakes can be used to create non-consensual porn, incite fear and violence, and influence civic mistrust. As the AI technology behind the creation of deepfakes evolves, it will be even more challenging to discern fact from fiction."

"While tech companies like Facebook, Twitter and Google have started to label what they have identified as manipulated online content like deepfakes, more efforts will be required to educate the citizenry in effectively negating such content."

Americans more likely than Singaporeans to share deepfakes

The study benchmarked the findings on Singaporeans' understanding of deepfakes against a similar demographic and number of respondents in the United States.

Respondents in the US were more aware of deepfakes (61% in US vs. 54% in SG). They said they were also more concerned by and frequently exposed to deepfakes. More people reported sharing content that they later learnt was a deepfake in the US than in Singapore (39% in US vs. 33% in SG).
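
Assuming the sharing figures apply to the deepfake-aware subgroup in each country (as the article states for Singapore), the implied share of all respondents who unknowingly passed on a deepfake works out as follows:

```python
# Figures quoted above; the arithmetic assumes the sharing rates apply to the
# deepfake-aware subgroup in both countries.
for country, aware, shared_among_aware in [("SG", 0.54, 0.33), ("US", 0.61, 0.39)]:
    print(f"{country}: ~{aware * shared_among_aware:.0%} of all respondents shared a deepfake")
# SG: ~18%, US: ~24%
```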

Asst Prof Saifuddin said: "These differences are not surprising, given the more widespread relevance and public discussion surrounding deepfakes in the US. More recently, a rise in the number of deepfakes, including those of President Donald Trump, has raised anxieties regarding the destructive potential of this form of disinformation.

"On the other hand, Singapore has not witnessed direct impacts of deepfakes, and the government has introduced the Protection from Online Falsehoods and Manipulation Act (POFMA) to limit the threat posed by disinformation, including deepfakes."

But legislation alone is not enough, he added, citing a 2018 survey by global independent market research agency Ipsos which found that while four in five Singaporeans say that they can confidently spot fake news, more than 90 per cent mistakenly identified at least one in five fake headlines as being real.

"The government's legislation to inhibit the pervasive threat of disinformation has also been helpful, but we need to continue improving digital media literacy going forward, especially for those who are less capable of discerning facts from disinformation," said Asst Prof Saifuddin, whose research interests include social media and public opinion.

The NTU study on deepfake awareness was funded by the University and Singapore's Ministry of Education, and the findings are part of a longer-term study that examines citizens' trust in AI technology.

Credit: 
Nanyang Technological University

World's first: Drug guides stem cells to desired location, improving their ability to heal

image: Neural stem cells maturing into astrocytes (yellow).

Image: 
Sanford Burnham Prebys Medical Discovery Institute

Scientists at Sanford Burnham Prebys Medical Discovery Institute have created a drug that can lure stem cells to damaged tissue and improve treatment efficacy--a scientific first and a major advance for the field of regenerative medicine. The discovery, published in the Proceedings of the National Academy of Sciences (PNAS), could improve current stem cell therapies designed to treat neurological disorders such as spinal cord injury, stroke, amyotrophic lateral sclerosis (ALS) and other neurodegenerative disorders; and expand their use to new conditions, such as heart disease or arthritis.

Toxic cells (green) disappeared when mice with a neurodegenerative condition received both therapeutic stem cells (red) and the drug SDV1a, which corresponded with longer lives and delayed symptom onset. These results suggest that SDV1a can be used to improve the efficacy of stem cell treatments.

"The ability to instruct a stem cell where to go in the body or to a particular region of a given organ is the Holy Grail for regenerative medicine," says Evan Y. Snyder, M.D. Ph.D., professor and director of the Center for Stem Cells & Regenerative Medicine at Sanford Burnham Prebys and senior author of the study. "Now, for the first time ever, we can direct a stem cell to a desired location and focus its therapeutic impact."

Nearly 15 years ago, Snyder and his team discovered that stem cells are drawn to inflammation--a biological "fire alarm" that signals damage has occurred. However, using inflammation as a therapeutic lure isn't feasible because an inflammatory environment can be harmful to the body. Thus, scientists have been on the hunt for tools to help stem cells migrate--or "home"--to desired places in the body. This tool would be helpful for disorders in which initial inflammatory signals fade over time--such as chronic spinal cord injury or stroke--and conditions where the role of inflammation is not clearly understood, such as heart disease.

"Thanks to decades of investment in stem cell science, we are making tremendous progress in our understanding of how these cells work and how they can be harnessed to help reverse injury or disease," says Maria T. Millan, M.D., president and CEO of the California Institute for Regenerative Medicine (CIRM), which partially funded the research. "Dr. Snyder's group has identified a drug that could boost the ability of neural stem cells to home to sites of injury and initiate repair. This candidate could help speed the development of stem cell treatments for conditions such as spinal cord injury and Alzheimer's disease."

A drug with only the "good bits"

In the study, the scientists modified CXCL12--an inflammatory molecule that Snyder's team previously discovered could guide healing stem cells to sites in need of repair--to create a drug called SDV1a. The new drug works by enhancing stem cell binding and minimizing inflammatory signaling--and can be injected anywhere to lure stem cells to a specific location without causing inflammation.

"Since inflammation can be dangerous, we modified CXCL12 by stripping away the risky bit and maximizing the good bit," says Snyder. "Now we have a drug that draws stem cells to a region of pathology, but without creating or worsening unwanted inflammation."

To demonstrate that the new drug is able to improve the efficacy of a stem cell treatment, the researchers implanted SDV1a and human neural stem cells into the brains of mice with a neurodegenerative disease called Sandhoff disease. This experiment showed that SDV1a helped the human neural stem cells migrate and perform healing functions, which included extending lifespan, delaying symptom onset, and preserving motor function for much longer than the mice that didn't receive the drug. Importantly, inflammation was not activated, and the stem cells were able to suppress any pre-existing inflammation.

Next steps

The researchers have already begun testing SDV1a's ability to improve stem cell therapy in a mouse model of ALS, also known as Lou Gehrig's disease, which is caused by a progressive loss of motor neurons in the brain. Previous studies conducted by Snyder's team indicated that broadening the spread of neural stem cells helps more motor neurons survive--so the scientists are hopeful that the strategic placement of SDV1a will expand the terrain covered by neuroprotective stem cells and help slow the onset and progression of the disease.

"We are optimistic that this drug's mechanism of action may potentially benefit a variety of neurodegenerative disorders, as well as non-neurological conditions such as heart disease, arthritis and even brain cancer," says Snyder. "Interestingly, because CXCL12 and its receptor [JK1] are implicated in the cytokine storm that characterizes severe COVID-19, some of our insights into how to selectively inhibit inflammation without suppressing other normal processes may be useful in that arena as well."

Credit: 
Sanford Burnham Prebys

A more skin-like electronic skin that can feel

image: Recognition when the ion-electronic skin is pushed: the temperature change and the direction of the force at the point of contact are accurately recognized.

Image: 
POSTECH

What if we didn't have skin? We would have no sense of touch and no ability to detect cold or pain, leaving us unable to respond to any situation. The skin is not just a protective shell for our organs: it is a signaling system for survival that reports external stimuli and temperature, much like a meteorological observatory reporting the weather. Tactile receptors, tightly packed throughout the skin, sense temperature or mechanical stimuli - such as touching or pinching - and convert them into electrical signals sent to the brain.

The challenge for electronic skin - being developed for artificial skin and humanlike robots such as humanoids - is to make it sense temperature and movement as closely as possible to the way human skin does. Electronic skins developed so far can detect movement or temperature separately, but none can recognize both simultaneously the way human skin can.

A joint research team consisting of Professor Unyong Jeong and Dr. Insang You of the Department of Materials Science and Engineering at POSTECH, together with Professor Zhenan Bao of Stanford University, has developed a multimodal ion-electronic skin that measures temperature and mechanical stimulation at the same time. The findings, published in the November 20th edition of Science, stand out for achieving a very simple structure by exploiting the special properties of ion conductors.

There are various tactile receptors in human skin that detect hot or cold temperatures as well as tactile sensations such as pinching, twisting or pushing, allowing humans to distinguish between mechanical stimuli and temperature. Conventional electronic skins, by contrast, have suffered from large errors in temperature measurement whenever mechanical stimuli were applied.

Human skin is freely stretchable yet unbreakable because it is full of electrolytes, so the joint research team built their sensor from electrolytes as well. They also took advantage of the fact that an ion-conductor material containing electrolytes exhibits different measurable properties depending on the measurement frequency. Based on this finding, they created a multifunctional artificial receptor that measures tactile sensation and temperature at the same time.

The research team identified variables in the ion conductor that respond only to temperature and variables that respond only to mechanical stimuli: the charge relaxation time and the normalized capacitance, respectively. Both can be obtained by measuring at just two frequencies. The charge relaxation time - the time it takes for the polarization of the ions to disappear - tracks temperature without responding to movement, while the normalized capacitance tracks movement without responding to temperature.
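To make the two-frequency readout concrete, here is a minimal sketch of how such a decoupling could work in principle. It is not the POSTECH implementation: the calibration tables, function names and numerical values are invented for illustration, and the only idea carried over from the article is that one variable (the charge relaxation time) tracks temperature while the other (the normalized capacitance) tracks mechanical strain.

```python
# Illustrative sketch (not the authors' code): decoupling temperature and strain
# from an ion-conductor sensor read out at two frequencies.
# All calibration values below are hypothetical, for illustration only.
import numpy as np

TAU_VS_TEMP = {          # charge relaxation time (us) -> temperature (deg C)
    "tau_us": [12.0, 9.0, 7.0, 5.5, 4.5],
    "temp_c": [15.0, 20.0, 25.0, 30.0, 35.0],
}
CNORM_VS_STRAIN = {      # normalized capacitance (unitless) -> strain (%)
    "c_norm": [1.00, 1.05, 1.12, 1.20, 1.30],
    "strain": [0.0, 5.0, 10.0, 20.0, 30.0],
}

def decouple(c_low, r_low, c_high, c_ref_high):
    """Estimate temperature and strain from two-frequency measurements.

    c_low, r_low : capacitance (F) and resistance (ohm) at the low frequency
    c_high       : capacitance (F) at the high frequency
    c_ref_high   : high-frequency capacitance of the unstrained sensor (F)
    """
    tau_us = r_low * c_low * 1e6     # charge relaxation time, in microseconds
    c_norm = c_high / c_ref_high     # capacitance normalized to the rest state

    # np.interp needs increasing x values, so the tau table is reversed.
    temp_c = np.interp(tau_us,
                       TAU_VS_TEMP["tau_us"][::-1],
                       TAU_VS_TEMP["temp_c"][::-1])
    strain = np.interp(c_norm,
                       CNORM_VS_STRAIN["c_norm"],
                       CNORM_VS_STRAIN["strain"])
    return temp_c, strain

if __name__ == "__main__":
    t, s = decouple(c_low=2.0e-9, r_low=3.0e3, c_high=1.1e-9, c_ref_high=1.0e-9)
    print(f"estimated temperature: {t:.1f} C, estimated strain: {s:.1f} %")
```

In a real device the calibration curves would come from characterizing the sensor itself, but the structure of the computation - one temperature-only variable and one strain-only variable extracted from two measurement frequencies - mirrors the decoupling described above.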

This artificial receptor, with its simple electrode-electrolyte-electrode structure, has great commercialization potential: it accurately measures the temperature of an object in contact with it as well as the direction and strain profile of external stimuli such as squeezing, pinching, spreading and twisting.

The multimodal ion-electronic skin, which can be freely stretched or deformed while still detecting temperature, is expected to find applications in wearable temperature sensors and in the skins of humanlike robots such as humanoids.

"When an index finger touches an electronic skin, the electronic skin detects contact as a temperature change, and when a finger pushes the skin, the back part of the contact area stretches and recognizes it as movement," explained Dr. Insang You of POSTECH who is the first author of the paper. "I suspect that this mechanism is one of the ways that the actual human skin recognizes different stimuli like temperature and movement."

"This study is the first step in opening the door for multimodal electronic skin research using electrolytes," remarked Professor Unyong Jeong of POSTECH and the corresponding author. "The ultimate goal of this research is to create artificial ion-electronic skin that simulates human tactile receptors and neurotransmitters, which will help restore the sense of touch in patients who have lost their tactile sensation due to illness or accidents."

Credit: 
Pohang University of Science & Technology (POSTECH)

Guiding the way to improved solar cell performance

image: Bilayer solar cell based on the organic semiconductor copper(I) thiocyanate (CuSCN) provides a new platform for exciton diffusion studies.

Image: 
© 2020 KAUST

Understanding how particles travel through a device is vital for improving the efficiency of solar cells. Researchers from KAUST, working with an international team of scientists, have now developed a set of design guidelines for enhancing the performance of molecular materials.

When a packet of light, or photon, is absorbed by a semiconductor, it generates a bound pair of particles known as an exciton. An electron is one part of this pair; the other is its positively charged counterpart, called a hole. Because excitons are electrically neutral, they cannot be set in motion by applying an electric field. Instead, they "hop" through the material by random motion, or diffusion. Dissociating the excitons into free charges is necessary to create a current but is highly improbable in a single organic semiconductor.

"So typically, we need to blend two semiconductors, a so-called electron donor and an electron acceptor, to efficiently generate free charges," explains Yuliar Firdaus. "The donor and acceptor materials penetrate into one another; maximizing the exciton diffusion length-- the distance the exciton can travel before recombining and being lost-- is crucial for optimizing the organic solar cell's performance.

Many previous organic solar cells were made by blending a polymer with molecules known as fullerenes. More recently, replacing the fullerene with other organic materials, such as nonfullerene small molecules, has produced impressive improvements in device efficiency.

Firdaus and colleagues combined photocurrent measurements with ultrafast spectroscopy to determine the exciton diffusion length in a wide variety of nonfullerene molecules. They observed very long diffusion lengths, in the range of 20 to 47 nanometers--a clear improvement on the 5 to 10 nanometer range characteristic of fullerenes.
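For a rough sense of what such numbers imply, the exciton diffusion length is commonly estimated from the exciton diffusion coefficient D and the exciton lifetime tau via the textbook relation L_D = sqrt(D * tau). The sketch below applies this relation with invented parameter values chosen only to land near the ranges quoted above; it is not the measurement or analysis used in the KAUST study.

```python
# Illustrative sketch: estimating an exciton diffusion length from a diffusion
# coefficient D (cm^2/s) and a lifetime tau (s) via L_D = sqrt(D * tau).
# The parameter values below are hypothetical, chosen only for illustration.
import math

def diffusion_length_nm(d_cm2_per_s: float, tau_s: float) -> float:
    """Return L_D in nanometers."""
    l_cm = math.sqrt(d_cm2_per_s * tau_s)   # L_D = sqrt(D * tau), in cm
    return l_cm * 1e7                       # 1 cm = 1e7 nm

# Hypothetical fullerene-like parameters: D ~ 1e-3 cm^2/s, tau ~ 1 ns
print(f"fullerene-like:    {diffusion_length_nm(1e-3, 1e-9):.0f} nm")   # ~10 nm
# Hypothetical nonfullerene-like parameters: D ~ 2e-2 cm^2/s, tau ~ 1 ns
print(f"nonfullerene-like: {diffusion_length_nm(2e-2, 1e-9):.0f} nm")   # ~45 nm
```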

To better understand this improvement, the team compared data describing the crystallographic structure of the molecules with quantum chemical calculations. In this way they could identify key relationships between the chemical structure of the molecule and the diffusion length. With these connections established, the scientists developed a set of rules to aid in the synthesis of improved materials and, ultimately, help the design of organic photovoltaic devices with improved conversion efficiency.

"Next, we plan to investigate how film processing processes might affect the exciton transfer rate of the existing small-molecule acceptors," says Firdaus. "We are also interested in translating the molecular design rules to synthesize new acceptor materials with better performance."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Head in the game

image: Experimental set-up.

Image: 
University of Tsukuba

Tsukuba, Japan - Researchers from the Faculty of Health and Sport Sciences at the University of Tsukuba studied the way blind players and sighted non-athletes tracked an incoming noise-making ball. They found that blind players employed a larger downward head rotation when trapping the rolling ball, compared with blindfolded sighted volunteers. This work may help explain the methods visually impaired people utilize to complete daily tasks, as well as assist in the creation of new smart-assistant devices.

Blind soccer is a sport that can be enjoyed by anyone, regardless of visual ability. Except for the goalkeepers, players are blindfolded during the game and follow the location of the ball using the sounds it emits. To better understand how visually impaired players receive and control the ball, scientists at the University of Tsukuba recruited both experienced blind soccer players and sighted non-athlete volunteers. A system of ten cameras tracked the three-dimensional positions of reflective markers attached to the body of each participant, whose task was to trap an incoming rolling ball with his right foot while blindfolded.

The seasoned blind footballers showed a larger downward head rotation angle, as well as better overall performance, compared with the sighted non-athletes. However, no significant differences were found in the horizontal head or trunk rotation. This indicates that blind footballers can more closely match the motion of their head with the movement of the approaching ball.
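As an illustration of the kind of quantity being compared, the sketch below estimates a downward head rotation (pitch) angle from two hypothetical head markers in a single motion-capture frame. The marker placement, coordinate convention and values are invented; this is not the analysis pipeline used in the Tsukuba study.

```python
# Illustrative sketch (not the study's pipeline): downward head rotation (pitch)
# from two hypothetical head markers recorded by a motion-capture system.
import numpy as np

def head_pitch_deg(front_marker: np.ndarray, back_marker: np.ndarray) -> float:
    """Downward pitch of the head, in degrees.

    Markers are 3D positions (x forward, y left, z up), e.g. one on the forehead
    and one on the back of the head. 0 deg = level; positive = looking down.
    """
    v = front_marker - back_marker                     # vector pointing roughly "forward"
    horizontal = np.hypot(v[0], v[1])                  # forward-vector length in the floor plane
    return np.degrees(np.arctan2(-v[2], horizontal))   # downward tilt from the negative z component

# Example frame: front marker 12 cm forward of and 5 cm below the back marker
front = np.array([0.62, 0.00, 1.55])
back = np.array([0.50, 0.00, 1.60])
print(f"downward head rotation: {head_pitch_deg(front, back):.1f} deg")  # ~22.6 deg
```

Averaging such an angle over the approach of the ball, and comparing groups, is one way a "larger downward head rotation" could be quantified from marker data.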

"Our study suggests that blind footballers are better at keeping the ball in a consistent egocentric direction relative to the head throughout the trapping process," Senior author Professor Masahiro Kokubu says.

It is known that blind individuals can have superior hearing compared with sighted individuals, especially for sound localization. "Our results are consistent with previous findings that practice improves the ability to track sounds even in blind individuals who already do better than sighted people on this task," explains Professor Kokubu. These results also suggest that the strategy of blind footballers to accurately localize a ball is similar to the way high-level baseball batters rotate their heads. The results of this project may lead to improved smart devices that take advantage of these same techniques to assist visually impaired individuals.

Credit: 
University of Tsukuba