Brain

Immigration restrictions for EU citizens could damage UK research and healthcare

An analysis of senior European scientists and doctors working in the UK underlines the risk that post-Brexit immigration restrictions could considerably damage the UK's science output and international research reputation, as well as reduce healthcare quality. The research, published in the Journal of the Royal Society of Medicine, was carried out by a team from King's College London and Queen's University Belfast.

The researchers examined how many European individuals had been elected as fellows of the Royal Society and the Academy of Medical Sciences. These independent bodies recognise people who have excelled in their respective fields and have therefore contributed substantially to high-level UK science and research. The researchers also examined the UK Medical Register to identify European doctors working in senior positions as hospital consultants and GPs in the UK.

Their analysis confirmed a significant increase in the number of UK-based fellows of the Royal Society and the Academy of Medical Sciences from European countries since the Maastricht treaty was signed in 1992. They also established that since 2004 doctors from European countries have been the largest cohort of foreign-qualified practitioners in the UK, with Eastern European doctors predominating over the last decade.

Lead researcher and King's College London academic Mursheda Begum said: "Our results indicate a very positive and statistically significant contribution of European scientists, academics and medical practitioners to the UK research base and the provision of clinical care. Many immigrants have built strong careers that have been rewarded by prestigious fellowships because they have impacted positively on UK research."

Co-author Professor Richard Sullivan added: "With the UK now officially in the process of leaving the EU, there are concerns about how this will affect NHS services and patient care, health research and international cooperation. It is distinctly possible that uncertainty about the ability of European citizens to work in the NHS may lead to a staffing crisis, as they seek work elsewhere. Medical specialties with a heavy reliance on EU nationals, such as general surgery and ophthalmology, and nursing are likely to be seriously affected."

"Our study clearly demonstrates the positive contribution by scientists and medical doctors from European countries to high-level research and clinical care in the UK. It is vital that national diversity in high-quality human capital is maintained in a post-Brexit UK research and healthcare environment."

Credit: 
SAGE

Best foods to eat for a good night's sleep

Sleep has become widely recognized as playing a vital role in our overall health and wellness – alongside diet, stress management and exercise.

Light makes Rice University catalyst more effective

image: Scientists with Rice's Laboratory for Nanophotonics have shown how a light-driven plasmonic effect allows catalysts of copper and ruthenium to more efficiently break apart ammonia molecules, which each contain one nitrogen and three hydrogen atoms. When the catalyst is exposed to light (right), resonant plasmonic effects produce high-energy "hot carrier" electrons that become localized at ruthenium reaction sites and speed up desorption of nitrogen compared with reactions conducted in the dark with heat (left).

Image: 
LANP/Rice University

HOUSTON -- (Oct. 4, 2018) -- Rice University nanoscientists have demonstrated a new catalyst that can convert ammonia into hydrogen fuel at ambient pressure using only light energy, mainly due to a plasmonic effect that makes the catalyst more efficient.

A study from Rice's Laboratory for Nanophotonics (LANP) in this week's issue of Science describes the new catalytic nanoparticles, which are made mostly of copper with trace amounts of ruthenium metal. Tests showed the catalyst benefited from a light-induced electronic process that significantly lowered the "activation barrier," or minimum energy needed, for the ruthenium to break apart ammonia molecules.

The research comes as governments and industry are investing billions of dollars to develop infrastructure and markets for carbon-free liquid ammonia fuel that will not contribute to greenhouse warming. But the researchers say the plasmonic effect could have implications beyond the "ammonia economy."

"A generalized approach for reducing catalytic activation barriers has implications for many sectors of the economy because catalysts are used in the manufacture of most commercially produced chemicals," said LANP Director Naomi Halas, a chemist and engineer who's spent more than 25 years pioneering the use of light-activated nanomaterials. "If other catalytic metals can be substituted for ruthenium in our synthesis, these plasmonic benefits could be applied to other chemical conversions, making them both more sustainable and less expensive."

Catalysts are materials that speed up chemical reactions without being consumed themselves. An everyday example is the catalytic converter that reduces harmful emissions from a vehicle's exhaust. Chemical producers spend billions of dollars on catalysts each year, but most industrial catalysts work best at high temperature and high pressure. The decomposition of ammonia is a good example. Each molecule of ammonia contains one nitrogen and three hydrogen atoms. Ruthenium catalysts are widely used to break apart ammonia (2NH3 → N2 + 3H2), producing hydrogen gas (H2), a fuel whose only combustion byproduct is water, and nitrogen gas (N2), which makes up about 78 percent of Earth's atmosphere.

The process begins with the ammonia sticking, or adsorbing, to the ruthenium, and proceeds through a series of steps as the bonds in ammonia are broken one by one. The hydrogen and nitrogen atoms left behind grab a partner and then leave, or desorb, from the ruthenium surface. This final step turns out to be the most critical, because nitrogen has a strong affinity for the ruthenium and likes to stick around, which blocks the surface from attracting other ammonia molecules. To drive the nitrogen away, more energy must be added to the system.
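
The practical weight of lowering an activation barrier follows from the Arrhenius relation, k = A*exp(-Ea/kBT). The short sketch below uses purely illustrative barrier heights (not values measured in the Rice study) to show how strongly the reaction rate responds to a modest reduction in that barrier.

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_rate(prefactor, barrier_ev, temperature_k):
    """Relative reaction rate from the Arrhenius equation k = A * exp(-Ea / (kB*T))."""
    return prefactor * math.exp(-barrier_ev / (KB * temperature_k))

# Illustrative (not measured) numbers: compare a 1.2 eV desorption barrier
# with the same barrier lowered by 0.3 eV, both at room temperature (300 K).
k_dark = arrhenius_rate(1.0, 1.2, 300.0)
k_light = arrhenius_rate(1.0, 0.9, 300.0)
print(f"Rate enhancement from a 0.3 eV lower barrier: {k_light / k_dark:.1e}x")
```

Because the barrier sits in an exponent, shaving even a few tenths of an electron volt off it translates into orders of magnitude in rate at room temperature, which is why a light-driven lowering of the barrier matters.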

Graduate student Linan Zhou, the lead author of the Science study, said the efficiency of LANP's copper-ruthenium catalyst derives from a light-induced electronic process that produces localized energy at ruthenium reaction sites, which aids desorption.

The process, known as "hot carrier-driven photocatalysis," has its origins in the sea of electrons that constantly swirl through the copper nanoparticles. Some wavelengths of incoming light resonate with the sea of electrons and set up rhythmic oscillations called localized surface plasmon resonances. LANP has pioneered a growing list of technologies that make use of plasmonic resonances for applications as diverse as color-changing glass, molecular sensing, cancer diagnosis and treatment, and solar energy collection.

In 2011, LANP's Peter Nordlander, one of the world's leading theoretical experts on nanoparticle plasmonics, Halas and colleagues showed that plasmons could be used to boost the amount of short-lived, high-energy electrons called "hot carriers" that are created when light strikes metal. In 2016, a LANP team that included Dayne Swearer, who's also a co-author of this week's study, showed that plasmonic nanoparticles could be married with catalysts in an "antenna-reactor" design where the plasmonic nanoparticle acted as an antenna to capture light energy and transfer it to a nearby catalytic reactor via a near-field optical effect.

"That was the first generation," Zhou said of the antenna-reactor. "And the main catalytic effect came from the near-field induced by the antenna when it absorbs light. This near-field drives oscillations in the adjacent reactor, which then generate hot carriers. But if we can have hot carriers that can directly reach the reactor and drive the reaction, it would be much more efficient."

Zhou, a chemist, spent months refining the synthesis of the copper-ruthenium nanoparticles, which are much smaller than a red blood cell. Each nanoparticle contains tens of thousands of copper atoms but just a few thousand ruthenium atoms, which take the place of some copper atoms on the particle's surface.

"Basically, there are ruthenium atoms scattered in a sea of copper atoms, and it's the copper atoms that are absorbing the light, and their electrons are shaking back and forth collectively," Swearer said. "Once a few of those electrons gain enough energy through a quantum process called nonradiative plasmon decay, they can localize themselves within the ruthenium sites and enhance catalytic reactions.

"Room temperature is about 300 Kelvin and plasmon resonances can raise the energy of these hot electrons up to 10,000 Kelvin, so when they localize on the ruthenium, that energy can be used to break the bonds in molecules, assist in adsorption and more importantly in desorption," Swearer said.

Just as a metal picnic table heats up on a sunny afternoon, the white laser light -- a stand-in for sunlight in Zhou's experiments -- also caused the copper-ruthenium catalyst to heat. Because there is no way to directly measure how many hot carriers were created in the particles, Zhou used a heat-sensing camera and spent months taking painstaking measurements to tease apart the thermal-induced catalytic effects from those induced by hot carriers.

"About 20 percent of the light energy was captured for ammonia decomposition," Zhou said. "This is good, and we think we can refine to improve this and make more efficient catalysts."

Zhou and Halas said the team is already working on follow-up experiments to see if other catalytic metals can be substituted for ruthenium, and the initial results are promising.

"Now that we have insight about the specific role of hot carriers in plasmon-mediated photochemistry, it sets the stage for designing energy-efficient plasmonic photocatalysts for specific applications," Halas said.

Additional co-authors include Chao Zhang, Hossein Robatjazi, Hangqi Zhao, Luke Henderson and Liangliang Dong, all of Rice; Phillip Christopher of the University of California, Santa Barbara; and Emily Carter of Princeton University.

Halas is Rice's Stanley C. Moore Professor of Electrical and Computer Engineering and professor of chemistry, bioengineering, physics and astronomy, and materials science and nanoengineering. Nordlander is the Wiess Chair and Professor of Physics and Astronomy, and professor of electrical and computer engineering, and materials science and nanoengineering.

Credit: 
Rice University

First-born children more likely to learn about sex from parents

Birth order may play a significant role in how children learn about sex, especially for boys, according to a new study published in the journal Sex Education.

Researchers found that first-born children were more likely to report parental involvement in sex education than later-born children, a pattern which was especially pronounced in men.

Led by the London School of Hygiene & Tropical Medicine (LSHTM), the study is the first to look at the relationship between birth order and two key sexual health outcomes: parental involvement in sex education and early sexual experience.

The study's authors say that a better understanding of the relationship between birth order and parental involvement in learning about sex could help to improve the design and delivery of sex education programmes.

Researchers used data from the third National Survey of Sexual Attitudes and Lifestyles (Natsal-3), part of the largest programme of scientific studies of sexual health and lifestyles in Britain. Conducted by LSHTM, UCL and NatCen Social Research, the studies have been carried out every 10 years since 1990, and have involved interviews with more than 45,000 people to date.

Taking a sub-sample of Natsal-3 participants - 5,000 individuals aged 17 to 29 who were either first-born, middle-born or last-born - the team analysed responses to questions about the involvement of parents or siblings in sex education and early sexual experiences.

First-born children were more likely to report parental involvement in sex education compared with later-born children. From the study sample, 48% of first-born women and 37% of first-born men reported discussing sex with a parent at age 14, as opposed to 40% of middle-born women and 29% of middle-born men.

Middle-born and last-born men were less likely than first-born men to report having found it easy to discuss sex with their parents growing up. Later-born men were also less likely to report learning about sex from their mothers, while last-born women were less likely than first-born women to report a parent as a main source of sex education.

Although there were differences by birth order in parental involvement in sex education, there did not appear to be an association between birth order and early sexual experiences, with the exception that middle-born men had increased odds of being under 16 when they first had sex.

Dr Lotte Elton, who led the research as part of her MSc project at LSHTM, said: "Although there has been much research into how the order in which children are born into a family may impact psychological or social outcomes, studies on the relationship between birth order and sexual health outcomes are limited.

"In addition to seeing differences in sex education according to birth order, we also found clear differences between the sexes; across all birth order categories, men consistently reported lower parental involvement in sex education than women. Our findings suggest that the previously-reported difficulties men face in talking about sex with parents may be exacerbated if they are middle- or last-born."

When researchers looked at sibling involvement in learning about sex, they found that all later-born children were more likely to report learning about sex from siblings compared with first-born children.

Although there is little research on how sex education from siblings might affect sexual health, associations between sibling behavior and sexual health outcomes have been documented - for example, having a sexually active brother or sister has been linked with more liberal sexual attitudes, and sisters of pregnant and childbearing adolescents have been found to be younger at first sexual intercourse.

With later-born children reporting that siblings played a greater role in their learning about sex, the authors say that brothers and sisters could be utilized in sex education programmes.

Given that later-born children are learning from their siblings about sex, dedicated sex education programmes could better equip adolescents to teach their younger siblings about sex, particularly where parental involvement in sex education is low.

Wendy Macdowall, senior investigator from LSHTM, said: "Our findings support previous work demonstrating gender disparities in family involvement in sex education, highlighting the need for further work in this area - particularly around how birth order might affect the involvement of parents in children's sex education.

"The results are particularly significant given plans to make sex education compulsory in schools in England and Wales, since relying on parents to provide sex education for children may disadvantage later-borns."

The authors acknowledge the limitations of the study. Of particular importance is that middle-born children had different socio-demographic characteristics, including social class and ethnicity, compared to first- and last-born children, which means that even with statistical adjustment, the results for middle-born children may reflect socio-demographic differences rather than birth order.

Furthermore, although adjustment was made for sibling number, other sibling factors that were not adjusted for, such as gender and age difference, may have been relevant.

Natsal-3 is one of the largest and most comprehensive studies of sexual behaviour and lifestyles in the world, and is a major source of data informing sexual and reproductive health policy in Britain. Natsal was funded by the Medical Research Council and the Wellcome Trust, with additional funding from UK Research and Innovation and the Department of Health and Social Care.

Credit: 
Taylor & Francis Group

Clear the air

image: University of Utah electrical and computer engineering doctoral graduate Kyeong T. Min assembles air pollution sensors for a study of how effective home heating and air conditioning systems are at cleaning indoor air while also saving energy.

Image: 
University of Utah

Sept. 26, 2018 -- Air conditioning and heating systems are not only great for keeping a home cool or warm, but they also help clean the air of harmful pollutants.

While home thermostats control HVAC (heating, ventilation, and air conditioning) systems based on temperature, engineers from the University of Utah have studied the effects of controlling them based on a home's indoor air quality. They have discovered that programming your air conditioner and furnace to turn on and off based on the indoor air quality as well as the temperature doesn't waste a lot of additional energy but keeps the air much cleaner.

Their findings, published in a paper titled "Smart Home Air Filtering System: A Randomized Controlled Trial for Performance Evaluation," will be presented Sept. 26 at this year's IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies in Washington, D.C. The lead authors of the paper are University of Utah electrical and computer engineering professor Neal Patwari and U electrical and computer engineering doctoral graduate Kyeong T. Min.

The researchers, led by Patwari, purchased a series of off-the-shelf portable air pollution sensors and connected them wirelessly to Raspberry Pis, small and inexpensive computers for hobbyists. With specialized software developed by the engineers, the computers were programmed to automatically turn on the air conditioning system whenever the particulate matter in the air reached a certain point and turn off the system when the particulate matter dipped below a certain measurement.
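
The control logic described above is essentially a thresholded fan switch. Below is a minimal, hypothetical sketch of that idea in Python on a Raspberry Pi; the sensor read, the fan-switching call and the particulate thresholds are all placeholders, not the study's actual software or settings.

```python
import random
import time

# Placeholders standing in for real hardware I/O; in the study, off-the-shelf
# particulate sensors were wired to Raspberry Pis and the HVAC fan was switched
# through the thermostat. These names and thresholds are illustrative only.
def read_pm_concentration():
    """Stand-in for a sensor read; returns particulate matter in ug/m^3."""
    return random.uniform(0, 25)

def set_hvac_fan(on):
    """Stand-in for switching the HVAC fan (e.g., via a relay on the thermostat wiring)."""
    print("HVAC fan", "ON" if on else "OFF")

FAN_ON_THRESHOLD = 12.0   # ug/m^3, turn the fan on above this level (illustrative)
FAN_OFF_THRESHOLD = 8.0   # ug/m^3, turn it off once the air clears

fan_running = False
for _ in range(10):                     # one reading per minute in a real deployment
    pm = read_pm_concentration()
    if not fan_running and pm >= FAN_ON_THRESHOLD:
        set_hvac_fan(True)
        fan_running = True
    elif fan_running and pm <= FAN_OFF_THRESHOLD:
        set_hvac_fan(False)
        fan_running = False
    time.sleep(1)                       # shortened for the sketch
```

The two-threshold (hysteresis) form keeps the fan from rapidly cycling when readings hover near a single cutoff.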

For the study, 12 sensors were deployed in four homes in 2017. In each house, two of the sensors were inside rooms, and one was placed outside under a covered porch. Starting at midnight each night, each home's HVAC system would randomly operate under one of three conditions: "Normal," in which the system turned on and off based on temperature only; "Always On," in which the system operated continuously all day; and "SmartAir," in which the system turned the HVAC fan on and off based on the pollution measurement in the house as well as the thermostat's temperature setting.

Based on five months of data, the study revealed that the "SmartAir" setting, in which the fan turned on and off based on temperature and air quality, cleaned the air almost as well as running the HVAC fan all day but used 58 percent less energy. Meanwhile, when the heating and cooling system operated normally without regard to air quality, the air was 31 percent dirtier than with the "SmartAir" setting.

"For someone with asthma, an exacerbation can be triggered by poor air in the home, particularly for children," Patwari says. "This kind of monitoring system could allow them to live more comfortably and with fewer asthma symptoms and fewer trips to the emergency room."

Because of ordinary activities in the home such as cooking, vacuuming and running the clothes dryer, air quality inside a home can at certain times of the day be much worse than outside. Constant exposure to indoor air pollutants can lead to short-term health effects such as irritation of the eyes, nose, and throat, as well as headaches, dizziness, and fatigue, according to the United States Environmental Protection Agency. Long-term exposure could also lead to respiratory diseases, heart disease and cancer and could be fatal for some. Yet there are no known home or commercial HVAC systems that are controlled by air quality sensors.

Credit: 
University of Utah

To improve auto coatings, new tests do more than scratch the surface

image: Schematic of the coating layers in a typical automobile composite body. Mar and scratch damages from a variety of object impacts are shown.

Image: 
Eastman Chemical Co./ K. Irvine, NIST

Know that sickening feeling when you exit the grocery store and find your car has been banged up by a runaway shopping cart? It may one day be just a bad memory if auto body manufacturers make use of a new suite of tests developed by the National Institute of Standards and Technology (NIST) and three industry partners. Data from these tests could eventually help your vehicle's exterior better defend itself against dings, dents, scratches and things that go bump on the highway.

In a new paper in the journal Progress in Organic Coatings, researchers at four organizations--NIST and industry partners Eastman Chemical Co., the Hyundai America Technical Center and Anton Paar USA--describe three versions of a fast, reliable laboratory method for simulating scratching processes on automobile clearcoats (the uppermost, or surface, layer of an exterior polymer composite coating). The tests are designed to give manufacturers a better understanding of the mechanisms behind those processes so that future coating materials can be made more scratch resistant and resilient.

Stronger, more robust coatings are important to meet both consumer and industrial demands. For example, statistics show that people are keeping their cars longer and want them to stay attractive (those owning cars for more than two years rose 41 percent from 2006 to 2015); nearly 600,000 drivers work for ride-sharing services in the United States that require them to maintain vehicle appearance; improved paint durability is consistently among the top three performance requirements for original equipment manufacturers; and 60 percent of all consumer complaints about autos are attributed to paint scratches and chip imperfections.

Currently, automobile coating manufacturers use two simple test methods to evaluate clearcoat scratch resistance and predict field performance: the crockmeter and the Amtech-Kistler car wash. The former is a device that uses a robotic "finger" moving back and forth with varying degrees of force to mimic damage from human contact and abrasive surfaces. The latter is a rotating wheel of brushes that simulate the impact of car washes on clearcoats.

"Unfortunately, both methods only assess clearcoat performance based on appearance, a qualitative measure where the results vary from test to test, and they don't provide the quantitative data that scientifically helps us understand what happens to auto finishes in real life," said NIST physicist Li Piin Sung, one of the authors of the new paper. "We demonstrated a test method that characterizes scratch mechanisms at the molecular level because that's where the chemistry and physics happens ... and where coatings can be engineered to be more resilient."

For their test method, the researchers first tapped a diamond-tipped stylus across the surface of a polymer composite sample to map its morphology, then used the stylus to create a scratch and finally, retapped and remapped the surface. Three different scales of scratch tests--nano, micro and macro--were conducted using different size tips and different ranges of force.

The quantitative differences between the pre-scratch and post-scratch profiles, along with microscopic analyses of the scratches, provided valuable data on vulnerability to deformation (How deep does the scratch go?), fracture resistance (How much force does it take to crack the composite?) and resilience (How much does the material recover from the physical insult?).
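
As a rough illustration of how pre- and post-scratch stylus profiles translate into numbers like these, the sketch below compares two synthetic height profiles. The arrays, cutoffs and assumed depth under load are invented for the example and are not NIST data.

```python
import numpy as np

# Hypothetical pre- and post-scratch surface height profiles (micrometers)
# along a line crossing the scratch; real data would come from the stylus scan.
x = np.linspace(0, 100, 501)                                  # lateral position, um
pre_scan = 0.05 * np.sin(x / 10)                              # slight surface waviness
post_scan = pre_scan - 2.0 * np.exp(-((x - 50) ** 2) / 20)    # residual groove

residual = pre_scan - post_scan                    # positive where the surface sits lower
residual_depth = residual.max()                    # deepest point of the remaining scratch
scratch_width = (residual > 0.1).sum() * (x[1] - x[0])   # width above a 0.1 um cutoff

# If the penetration depth under load is also known, elastic recovery follows:
depth_under_load = 3.5                             # um, assumed for illustration
recovery = 100 * (depth_under_load - residual_depth) / depth_under_load

print(f"residual depth: {residual_depth:.2f} um, width: {scratch_width:.1f} um, "
      f"recovery: {recovery:.0f}%")
```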

NIST ran the nano-scratch test with a tip radius of 1 micrometer (a micrometer is a millionth of a meter, or about one-fifth the diameter of a strand of spider silk) and a force range between 0 and 30 micronewtons (a micronewton is a millionth of a newton, or about 20 millionths of a pound of force). Anton Paar did the micro-scratch test with a 50-micrometer tip and a force range between 25 micronewtons and 5 newtons (equivalent to 5 millionths of a pound to 1.25 pounds of force), while Eastman Chemical performed the macro-scratch test with a 200-micrometer tip and a force range between 0.5 and 30 newtons (equivalent to one-tenth of a pound to 7.5 pounds of force).

When scratches in the clearcoat are a few micrometers in depth and width, and occur without fracture, they are referred to as mars. These shallow, difficult-to-see deformations, Sung said, are most often the result of car washing. She explained that the nano-scratch test performed at NIST provided the best data on the mechanisms of marring and light scratches while the micro- and macro-scratch tests conducted by NIST's partners were better at yielding detailed information about the larger, deeper and more visible deformations known as fracture scratches--the injuries caused by keys, tree branches, shopping carts and other solid objects.

"Data from the nano-scratch test also proved best for determining how well the coating responded to physical insult based on its crosslink density, the measure of how tightly the polymer components are bound together," Sung said. "With this molecular-level understanding, clearcoat formulas can be improved so that they yield materials dense enough to be scratch resistant and resilient but not so hard that they cannot be worked with easily."

The researchers concluded that to get the truest evaluation of clearcoat performance, the nano-, micro- and macro-scratch tests should be conducted in conjunction with the current industry standard methods.

"That way, one gets the complete picture of an auto body coating, both qualitatively and quantitatively characterized, so that the tougher coatings created in the lab will work just as well on the road," Sung said.

Credit: 
National Institute of Standards and Technology (NIST)

Study documents poor mental and physical health in rural borderland community members

image: Photo shows trailer homes in eastern Coachella Valley.

Image: 
Katheryn Rodriguez, UC Riverside.

RIVERSIDE, Calif. - The borderlands between the United States and Mexico are home to numerous Mexican and Central American rural communities, with many members living in poverty and frustrated by limited access to basic resources.

A study on inequalities and health among foreign-born Latinos in rural borderland communities, led by a researcher at the University of California, Riverside, has found that this population is vulnerable to high stress that negatively impacts its mental and physical health.

"While the research focused on Latino immigrants in Southern California, our findings tell us a lot about structural level factors and daily life events and chronic strain that create stress for minorities and immigrants in rural communities," said Ann Cheney, an assistant professor in the Center for Healthy Communities in the School of Medicine, who led the research. "Factors outside Latino immigrants' control negatively affect their health. Some of these factors are historically based, such as the subjugation of Mexicans in the eastern Coachella Valley, whereas others are because of current immigration and policing practices and unfair living and working conditions."

The findings, published in the journal Social Science & Medicine, have implications for immigrants along the U.S.-Mexico border as well as immigrant populations in rural America. They also have implications for nonprofit organizations and public healthcare systems.

"While the structural factors which are historically rooted may differ per racial or ethnic group, the lack of healthcare access and inequalities present in their living and working environments are likely similar across immigrant communities in rural America," Cheney said. "These communities are often characterized by substandard housing, poor infrastructure, unsanitary conditions, and unsafe public drinking water. In the U.S., rural health is an often-overlooked health disparity."

During 2015 and 2016, Cheney and her team conducted individual and group interviews with farmworkers, farmworker advocates, community leaders, and healthcare service providers on the health concerns and factors contributing to poor health among immigrant farm-working communities in inland Southern California's eastern Coachella Valley.

Their investigation showed that historical factors such as oppression of farm labor in the valley and current practices such as racial profiling make foreign-born Latinos vulnerable to inequalities that contribute to low social status, employment and housing instability, and limited access to healthcare services. These inequalities affect self-worth, dignity, and wellbeing, creating stress and resulting in poor mental and physical health.

"Over time, these daily and chronic strains affect control over life and self-worth, contributing to poor mental and physical health conditions," Cheney said.

The research paper calls for local community action, healthcare policy change, and further in-depth research on structural inequalities in health among foreign-born Latinos.

"Nonprofits, healthcare systems, policy makers and researchers need to take action and address structural-level inequality in health," Cheney said. "The health of this Latino population is incredibly important -- they are the backbone of the American food system. The eastern Coachella Valley, one of the richest agricultural areas of the world, alone contributes more than $600 million in agricultural production to our economy."

The paper notes that the undocumented status of many foreign-born Latinos limits their access to public services such as healthcare and stable employment. It also makes them vulnerable to discrimination and exploitation, ultimately affecting their health and wellbeing by increasing blood pressure, cholesterol levels, hypertension, and adipose tissue.

"Not being able to achieve life goals that are collectively defined as important can be a profoundly stressful experience and lead to psychological distress, psychosomatic symptoms, and compromised immune system," Cheney said. "We need to create long-term solutions that identify the structures that have historically produced and continue to produce inequalities in health. We tend to overemphasize empirical, measurable data which allows us to create immediate solutions, such as increased transportation, but that doesn't change the general lack of resources within these communities or social status or inequality in working and living contexts that contribute to poor health."

Cheney and her team used a participatory research approach to engage the community. The approach helped the team develop rapport and long-term partnerships with leaders and farmworkers in the eastern Coachella Valley. The researchers conducted informal interviews with 18 stakeholders, including community leaders, service providers, and farmworkers and advocates. They spoke to a community review board with nine stakeholders; and three focus groups with a total of 25 farmworkers and advocates. The majority of the participants were women.

Participants self-identified as Mexican, were local, and were current or former farm workers or had grown up in a farm-working family. Forty percent had not seen a provider in the past year. Forty-four percent did not have health insurance.

Eastern Coachella Valley is home to predominantly Mexican-origin farm-working families living in poverty. Once migrants reach the valley, they face chronic sources of strain linked to aggressive immigration and local law enforcement, unfair housing practices, and severely limited access to essential resources, including healthcare services.

"Our study gives voice to the voiceless in an effort to influence public understandings of the immigrant experience and provides evidence that can inform public health and immigration policy change," Cheney said. "We show how inequalities across multiple social systems position foreign-born Latinos in low social status in the U.S. rendering them vulnerable to persistent and chronic strain that affect health and wellbeing."

Credit: 
University of California - Riverside

AI improves doctors' ability to correctly interpret tests and diagnose lung disease

Paris, France: Artificial intelligence (AI) can be an invaluable aid to help lung doctors interpret respiratory symptoms accurately and make a correct diagnosis, according to new research presented today (Wednesday) at the European Respiratory Society International Congress [1].

Dr Marko Topalovic (PhD), a postdoctoral researcher at the Laboratory for Respiratory Diseases, Catholic University of Leuven (KU Leuven), Belgium, told the meeting that after training an AI computer algorithm using good quality data, it proved to be more consistent and accurate in interpreting respiratory test results and suggesting diagnoses than lung specialists.

"Pulmonary function tests provide an extensive series of numerical outputs and their patterns can be hard for the human eye to perceive and recognise; however, it is easy for computers to manage large quantities of data like these and so we thought AI could be useful for pulmonologists. We explored if this was true with 120 pulmonologists from 16 hospitals. We found that diagnosis by AI was more accurate in twice as many cases as diagnosis by pulmonologists. These results show how AI can serve as a second opinion for pulmonologists when they are assessing and diagnosing their patients," he said.

Pulmonary function tests (PFT) include: spirometry, which involves the patient breathing through a mouthpiece to measure the amount of air inhaled and exhaled; a body box or plethysmography test, which enables doctors to assess lung volume by measuring the pressure in a booth in which the patient is sitting and breathing through a mouthpiece; and a diffusion capacity test, which tests how well a patient's lungs are able to transfer oxygen and carbon dioxide to and from the bloodstream by testing the efficiency of the alveoli (small air sacs in the lungs). Results from these tests give doctors important information about the functioning of the lungs, but do not tell them what is wrong with the patient. This requires interpretation of the results in order to reach a diagnosis.
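
For a sense of what "interpretation" means at its simplest, the sketch below applies one widely taught rule of thumb to spirometry numbers (a fixed FEV1/FVC cutoff of 0.70 for airflow obstruction). This is far cruder than either the guideline-based expert panel or the AI described in the study, and the thresholds here are general conventions rather than the study's criteria.

```python
def simple_spirometry_pattern(fvc_percent_predicted, fev1_fvc_ratio):
    """Crude, rule-of-thumb spirometry reading (illustrative only).

    Uses the conventional fixed FEV1/FVC cutoff of 0.70 for airflow obstruction;
    guideline interpretation also considers predicted values, lower limits of
    normal, bronchodilator response and further tests such as lung volumes.
    """
    if fev1_fvc_ratio < 0.70:
        return "obstructive pattern (e.g., asthma or COPD should be considered)"
    if fvc_percent_predicted < 80:
        return "possible restrictive pattern (confirm with lung volume measurement)"
    return "within normal limits"

# Example: a markedly reduced ratio suggests obstruction.
print(simple_spirometry_pattern(fvc_percent_predicted=88, fev1_fvc_ratio=0.58))
```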

In this study, the researchers used historical data from 1430 patients from 33 Belgian hospitals. The data were assessed by an expert panel of pulmonologists and interpretations were measured against gold standard guidelines from the European Respiratory Society and the American Thoracic Society. The expert panel considered patients' medical histories, results of all PFTs and any additional tests, before agreeing on the correct interpretation and diagnosis for each patient.

"When training the AI algorithm, the use of good quality data is of utmost importance," explained Dr Topalovic. "An expert panel examined all the results from the pulmonary function tests, and the other tests and medical information as well. They used these to reach agreement on final diagnoses that the experts were confident were correct. These were then used to develop an algorithm to train the AI, before validating it by incorporating it into real clinical practice at the University Hospital Leuven. The challenging part was making sure the algorithm recognised patterns of up to nine different diseases."

Then, 120 pulmonologists from 16 European hospitals (from Belgium, France, The Netherlands, Germany and Luxembourg) made 6000 interpretations of PFT data from 50 randomly selected patients. The AI also examined the same data. The results from both were measured against the gold standard guidelines in the same way as during development of the algorithm.

The researchers found that the interpretation of the PFTs by the pulmonologists matched the guidelines in 74% of cases (with a range of 56-88%), but the AI-based software interpretations perfectly matched the guidelines (100%). The doctors were able to correctly diagnose the primary disease in 45% of cases (with a range of 24-62%), while the AI gave a correct diagnosis in 82% of cases.

Dr Topalovic said: "We found that the interpretation of pulmonary function tests and the diagnosis of respiratory disease by pulmonologists is not an easy task. It takes more information and further tests to reach a satisfactory level of accuracy. On the other hand, the AI-based software has superior performance and therefore can provide a powerful decision support tool to improve current clinical practice. Feedback from doctors is very positive, particularly as it helps them to identify difficult patterns of rare diseases."

Two large Belgian hospitals are already using the AI-based software to improve interpretations and diagnoses. "We firmly believe that we can empower doctors to make their interpretations and diagnoses easier, faster and better. AI will not replace doctors, that is certain, because doctors are able to see a broader perspective than that presented by pulmonary function tests alone. This enables them to make decisions based on a combination of many different factors. However, it is evident that AI will augment our abilities to accomplish more and decrease chances for errors and redundant work. The AI-based software has superior performance and therefore may provide a powerful decision support tool to improve current clinical practice.

"Nowadays, we trust computers to fly our planes, to drive our cars and to survey our security. We can also have confidence in computers to label medical conditions based on specific data. The beauty is that, independent of location or medical coverage, AI can provide the highest standards of PFT interpretation and patients can have the best and affordable diagnostic experience. Whether it will be widely used in future clinical applications is just a matter of time, but will be driven by the acceptance of the medical community," said Dr Topalovic.

He said the next step would be to get more hospitals to use this technology and investigate transferring the AI technology to primary care, where the data would be captured by general practitioners (GPs) to help them make correct diagnoses and referrals.

Professor Mina Gaga is President of the European Respiratory Society, and Medical Director and Head of the Respiratory Department of Athens Chest Hospital, Greece, and was not involved in the study. She said: "This work shows the exciting possibilities that artificial intelligence offers to doctors to help them provide a better, quicker service to their patients. Over the past 20 to 30 years, the evolution in technology has led to better diagnosis and treatments: a revolution in imaging techniques, in molecular testing and in targeted treatments has made medicine easier and more effective. AI is the new addition! I think it will be invaluable in helping doctors and patients and will be an important aid to their decision-making."

Credit: 
European Respiratory Society

Grad students may be future professors, but that doesn't mean they can teach

A new Portland State University study found that graduate students want to adopt interactive teaching methods but often don't get the training or support they need from their institutions to do so.

Research has shown that students may benefit more from the use of student-centered teaching strategies such as small-group discussions, in-class "clicker" questions, demonstrations, and question-answer sessions than they would from a professor giving a traditional lecture. But despite national calls to adopt what's known as evidence-based teaching practices, many science faculty have been slow to do so.

The study, led by PSU biology Ph.D. student Emma Goodwin, focused on graduate students, who, she says, play a critical role in responding to the push for increased adoption of evidence-based teaching practices -- not only because they serve as teaching assistants in undergraduate classrooms, but also because they will be future faculty members if they choose to pursue academia as a career.

The authors -- Goodwin, her advisor, assistant biology professor Erin Shortlidge, PSU post-baccalaureate researcher Miles Fletcher and undergraduate researchers Jane Cao and Justin Flaiban -- conducted an in-depth interview study with 32 biology graduate students from 25 institutions nationwide. They believe it's the first qualitative study on biology graduate students' perceptions and experiences with evidence-based teaching.

Their work, published online Aug. 24 in the journal CBE - Life Sciences Education, supports a recent study by Shortlidge published in PLOS One, which found that Ph.D. students who spend time developing their teaching skills can be just as successful in their research as those who do not, if not more so.

Goodwin's study found that most graduate students in the sample were aware of and valued evidence-based teaching, but only 37.5 percent had implemented the methods in classes or labs. Barriers included a lack of instructional training opportunities, little control over the curriculum and a perception that their time should prioritize research over teaching.

"It's great to know about something, but actually practicing a behavior is crucial to using it in the future," Goodwin said. "If they don't have the experiences where they're actually practicing these teaching methods as graduate students, then do we know for sure that they'll use them as faculty, even if they want to?"

Those who found some success in implementing evidence-based teaching practices often had to seek out training opportunities on their own. The authors say the study illustrates that graduate students do not necessarily lack interest in using evidence-based teaching, but rather lack the support to learn about and practice effective teaching methods for the sciences.

Shortlidge said that a good first step would be to give graduate students techniques they can use within the confines of the kinds of courses they teach.

"I can't overemphasize enough how important it is to consider this alongside building up research skills," Shortlidge said. "A graduate student who wants to stay in academia may end up working in a teaching-intensive position where research is not a significant percentage of their expectations. Giving them the opportunities to build up that skill set is important so they can walk into their positions fully equipped to do a good job."

Credit: 
Portland State University

Hate speech-detecting AIs are fools for 'love'

video: Here's how Google Perspective's toxicity rating reacts to typos and a little love thrown into an otherwise hateful sentence.

Image: 
Aalto University Secure Systems research group

State-of-the-art detectors that screen out online hate speech can be easily duped by humans, a new study shows.

Hateful text and comments are an ever-increasing problem in online environments, yet addressing the rampant issue relies on being able to identify toxic content. A new study by the Aalto University Secure Systems research group (https://ssg.aalto.fi) has discovered weaknesses in many machine learning detectors currently used to recognize and keep hate speech at bay.

Many popular social media and online platforms use hate speech detectors that a team of researchers led by Professor N. Asokan have now shown to be brittle and easy to deceive. Bad grammar and awkward spelling--intentional or not--might make toxic social media comments harder for AI detectors to spot.

The team put seven state-of-the-art hate speech detectors to the test. All of them failed.

Modern natural language processing techniques (NLP) can classify text based on individual characters, words or sentences. When faced with textual data that differs from that used in their training, they begin to fumble.

'We inserted typos, changed word boundaries or added neutral words to the original hate speech. Removing spaces between words was the most powerful attack, and a combination of these methods was effective even against Google's comment-ranking system Perspective,' says Tommi Gröndahl, doctoral student at Aalto University.

Google Perspective ranks the 'toxicity' of comments using text analysis methods. In 2017, researchers from the University of Washington showed that Google Perspective can be fooled by introducing simple typos. Gröndahl and his colleagues have now found that Perspective has since become resilient to simple typos yet can still be fooled by other modifications such as removing spaces or adding innocuous words like 'love'.

A sentence like 'I hate you' slipped through the sieve and became non-hateful when modified into 'Ihateyou love'.
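
The evasions described above are plain string manipulations. A minimal sketch of them, assuming nothing about any particular detector's API, looks like this:

```python
import random

def evade(text, neutral_word="love"):
    """Apply the perturbations described in the study: delete the spaces between
    words, introduce a typo by swapping two adjacent characters, and append an
    innocuous word. Illustrative only; no detector is called here."""
    no_spaces = text.replace(" ", "")
    i = random.randrange(len(no_spaces) - 1)
    with_typo = no_spaces[:i] + no_spaces[i + 1] + no_spaces[i] + no_spaces[i + 2:]
    return f"{with_typo} {neutral_word}"

print(evade("I hate you"))   # e.g. "Ihaetyou love"
```

Because character- and word-level models key on surface patterns seen in their training data, even such trivial edits can move a sentence out of the region a detector has learned to flag.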

The researchers note that in different contexts the same utterance can be regarded either as hateful or merely offensive. Hate speech is subjective and context-specific, which renders text analysis techniques insufficient as stand-alone solutions.

The researchers recommend that more attention be paid to the quality of data sets used to train machine learning models--rather than refining the model design. The results indicate that character-based detection could be a viable way to improve current applications.

Credit: 
Aalto University

First-of-kind study reveals public & physician attitudes toward recording clinical visits

image: Dartmouth Institute Assistant Professor Paul Barr, Ph.D., led a first-of-kind study that assesses the attitudes of doctors and the public toward recording clinical visits. The research team also surveyed 49 of the largest health systems in the US to determine whether they currently have in place policies on the sharing of recordings for doctors and patients.

Image: 
The Dartmouth Institute

With over three-quarters of Americans now owning a smartphone, healthcare researchers have speculated that the number of patients recording visits with their doctor was increasing. However, a new study by researchers from The Dartmouth Institute for Health Policy and Clinical Practice is the first to measure the prevalence of recording of clinical visits in the United States. The first-of-its-kind study, recently published in the Journal of Medical Internet Research, also assesses the attitudes of doctors and the public toward recording, and surveys 49 of the largest health systems in the U.S. to determine whether they currently have in place policies on the sharing of recordings for doctors and patients.

"We know that up to 80% of healthcare information is forgotten by patients after their clinic visit," says Dartmouth Institute Assistant Professor Paul Barr, PhD, the study's lead investigator. "There's also been significant research that shows access to recordings can improve patient satisfaction and increase understanding of medical information. But, this is the first study, to my knowledge, that surveys doctor and patient attitudes to try to really understand how they feel about recordings and where things might be headed."

The research team used online surveys to assess clinician and patient attitudes about recording. To ensure a diverse representation of specialists, researchers included clinicians from the following eight specialties: emergency medicine, general/family medicine, internal medicine, general surgery, obstetrics and gynecology, orthopedic surgery, physical rehabilitation, and psychiatry. To assess patient attitude toward recording, the research team surveyed over 500 adults from 48 states -- in a sample that was representative of U.S. demographics.

Among their findings on clinician attitudes:

28% reported recording a clinical visit for a patient's personal use.

Among those who had not, 50% were willing to do so, while 50% were not.

Analysis found that only clinical specialty (as opposed to factors such as gender or length of time in practice) was associated with recording a visit in the past.

Clinicians in oncology and physical rehabilitation were more likely to have recorded a visit, while clinicians in general/family practice were least likely to have recorded a visit.

Among their findings on public attitudes:

16% of respondents reported recording a clinic visit with permission, while only 3% did so secretly (without asking permission first); 82% had never recorded a clinic visit.

59% said they would consider recording with the permission of the doctor, while only 7% said they would consider recording without a doctor's permission.

Analysis found that individuals who reported having recorded a visit with permission of a doctor were more likely to be male, to be younger, and to speak a language other than English at home.

While 63% of individual respondents were interested in recording a visit in the future, only 10% said their clinic (doctor's office) offered recordings of visits for personal use.

A limitation of the study was that focusing on a sample of the public, rather than a sample of patients, may underrepresent the prevalence of recording occurring in healthcare, as it includes respondents who may have limited experience with health systems.

Of the 49 health systems surveyed, none reported having a dedicated policy or guidance for doctors or patients on the practice of sharing recordings; two said they had an existing policy which would cover patient requests for audio and video recordings of clinical visits.

The researchers conclude that their findings suggest that while patients and individual healthcare providers are taking the lead on sharing recordings of clinical visits, policy makers are lagging behind. They also note that the dissemination of innovation in healthcare has a tipping point of between 15% and 20%, after which it's difficult to stop, and that recording and sharing of clinical visits may have reached this point.

"Recording clinical visits could help us tackle some of the biggest challenges in healthcare. It could help patients with chronic conditions better adhere to their treatment plans, potentially lowering costs. It could help alleviate the documentation burden many healthcare providers currently face," Barr says. "But, we urgently need to have some policy guidelines in place for clinicians and patients--we needed them yesterday."

To read the full study: http://www.jmir.org/2018/9/e11308/

Credit: 
The Dartmouth Institute for Health Policy & Clinical Practice

Researchers decode mood from human brain signals

By developing a novel decoding technology, a team of engineers and physicians at the University of Southern California (USC) and UC San Francisco have discovered how mood variations can be decoded from neural signals in the human brain--a process that has not been demonstrated to date.

Their study, published in Nature Biotechnology, is a significant step towards creating new closed-loop therapies that use brain stimulation to treat debilitating mood and anxiety disorders in millions of patients who are not responsive to current treatments.

Assistant Professor and Viterbi Early Career Chair Maryam Shanechi of the Ming Hsieh Department of Electrical Engineering and the Neuroscience Graduate Program at USC led the development of the decoding technology, and Professor of Neurological Surgery Edward Chang at UCSF led the human implantation and data collection effort. The researchers were supporting the Defense Advanced Research Projects Agency's SUBNETS program to develop new biomedical technologies for treating intractable neurological diseases.

The team recruited seven human volunteers among a group of epilepsy patients who already had intracranial electrodes inserted in their brain for standard clinical monitoring to locate their seizures. Large-scale brain signals were recorded from these electrodes in the volunteers across multiple days at UCSF, while they also intermittently reported their moods using a questionnaire. Shanechi and her students, Omid Sani and Yuxiao Yang, used that data to develop a novel decoding technology that could predict mood variations over time from the brain signals in each human subject, a goal unachievable to date.

"Mood is represented across multiple sites in the brain rather than localized regions, thus decoding mood presents a unique computational challenge," Shanechi said. "This challenge is made more difficult by the fact that we don't have a full understanding of how these regions coordinate their activity to encode mood and that mood is inherently difficult to assess. To solve this challenge, we needed to develop new decoding methodologies that incorporate neural signals from distributed brain sites while dealing with infrequent opportunities to measure moods."

To build the decoder, Shanechi and the team of researchers analyzed brain signals that were recorded from intracranial electrodes in the seven human volunteers. Raw brain signals were continuously recorded across distributed brain regions while the patients self-reported their moods through a tablet-based questionnaire.

In each of the 24 questions, the patient was asked to "rate how you feel now" by tapping one of 7 buttons on a continuum between a pair of negative and positive mood state descriptors (e.g., "depressed" and "happy"). A higher score corresponded to a more positive mood state.

Using their methodology, the researchers were able to uncover the patterns of brain signals that matched the self-reported moods. They then used this knowledge to build a decoder that would independently recognize the patterns of signals corresponding to a certain mood. Once the decoder was built, it measured the brain signals alone to predict mood variations in each patient over multiple days.
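
In outline, the decoding step described above is a supervised mapping from features of the recorded brain signals to the self-reported mood score. The sketch below shows that shape with ordinary ridge regression on synthetic data; the study itself developed a purpose-built dynamic modeling approach for distributed recordings and infrequent mood reports, so this is a simplified stand-in for illustration, not the authors' method.

```python
# Simplified illustration: map neural features to self-reported mood scores.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_reports, n_features = 200, 50          # mood self-reports x neural features
                                         # (e.g., band powers per electrode site)
neural_features = rng.normal(size=(n_reports, n_features))
true_weights = rng.normal(size=n_features)
mood_scores = neural_features @ true_weights + rng.normal(scale=0.5, size=n_reports)

X_train, X_test, y_train, y_test = train_test_split(
    neural_features, mood_scores, test_size=0.25, random_state=1)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
print("decoding R^2 on held-out reports:", round(decoder.score(X_test, y_test), 2))
```

The key evaluation idea is the same as in the study: the decoder is judged only on mood reports it never saw during fitting, so a high held-out score means the brain signals alone carry usable mood information.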

A Potential Solution for Untreatable Neuropsychiatric Conditions?

The USC/UCSF team believe their findings could support the development of new closed-loop brain stimulation therapies for mood and anxiety disorders.

Data from the 2016 National Survey on Drug Use and Health revealed that 16.2 million adults in the United States (approximately 6.7% of all U.S. adults) have suffered at least one major depressive episode. Treatments such as selective serotonin reuptake inhibitors (SSRIs) can be effective in some but not all patients.

According to the National Institutes of Health-funded STAR*D trial--the longest study to evaluate depression treatments--almost 33% of major depression patients do not respond to treatment (more than 5.3 million people in the U.S. alone). Also, in June 2018, the Centers for Disease Control and Prevention reported that suicide is on the rise in the U.S.

For the millions of treatment-resistant patients, alternative therapies may be effective. For example, human imaging studies using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) have suggested that several brain regions mediate depression, and thus brain stimulation therapies in which a mood-relevant region is electrically stimulated may be applied to alleviate depressive symptoms. While open-loop brain stimulation treatments hold some promise, a more precise, effective therapy could necessitate a closed-loop approach in which an objective tracking of mood over time guides how stimulation is delivered.

According to Shanechi, for clinical practitioners, a powerful decoding tool would provide the means to clearly delineate, in real time, the network of brain regions that support emotional behavior.

"Our goal is to create a technology that helps clinicians obtain a more accurate map of what is happening in a depressed brain at a particular moment in time and a way to understand what the brain signal is telling us about mood. This will allow us to obtain a more objective assessment of mood over time to guide the course of treatment for a given patient," Shanechi said. "For example, if we know the mood at a given time, we can use it to decide whether or how electrical stimulation should be delivered to the brain at that moment to regulate unhealthy, debilitating extremes of emotion. This technology opens the possibility of new personalized therapies for neuropsychiatric disorders such as depression and anxiety for millions who are not responsive to traditional treatments."

The new decoding technology, Shanechi explained, could also be extended to develop closed-loop systems for other neuropsychiatric conditions such as chronic pain, addiction, or post-traumatic stress disorder whose neural correlates are again not anatomically localized, but rather span a distributed network of brain regions, and whose behavioral assessment is difficult and thus not frequently available.

Credit: 
University of Southern California

Chronic diseases driven by metabolic dysfunction

image: This is a false-color transmission electron micrograph of a mitochondrion inside a cell.

Image: 
Thomas Deerinck, National Center for Microscopy and Imaging Research, UC San Diego

Much of modern Western medicine is based upon the treatment of acute, immediate harm, from physical injury to infections, from broken bones and the common cold to heart and asthma attacks.

But progress in treating chronic illness, where the cause of the problem is often unknown -- and, in fact, may no longer even be present -- has lagged. Chronic conditions like cancer, diabetes and cardiovascular disease defy easy explanation, let alone remedy. The Centers for Disease Control and Prevention estimate that more than half of adults and one-third of children and teens in the United States live with at least one chronic illness. Chronic medical conditions, according to the National Institutes of Health, cause more than half of all deaths worldwide.

In a new paper, available online in Mitochondrion in advance of publication, Robert K. Naviaux, MD, PhD, professor of medicine, pediatrics and pathology at University of California San Diego School of Medicine, posits that chronic disease is essentially the consequence of the natural healing cycle becoming blocked, specifically by disruptions at the metabolic and cellular levels.

"The healing process is a dynamic circle that starts with injury and ends with recovery. The molecular features of this process are universal," said Naviaux, who also directs the Mitochondrial and Metabolic Disease Center at UC San Diego. "Emerging evidence shows that most chronic illnesses are caused by the biological reaction to an injury, not the initial injury or the agent of the injury. The illness occurs because the body is unable to complete the healing process."

For example, said Naviaux, melanoma -- the deadliest form of skin cancer -- can be caused by sun exposure that occurred decades earlier, damaging DNA that was never repaired. Post-traumatic stress disorder can flare months or years after the original trauma has passed. A concussion sustained before an earlier concussion has completely resolved typically results in more severe symptoms and prolonged recovery, even if the second impact is less severe than the first.

"Progressive dysfunction with recurrent injury after incomplete healing occurs in all organ systems, not just the brain," said Naviaux. "Chronic disease results when cells are caught in a repeating loop of incomplete recovery and re-injury, unable to fully heal. This biology is at the root of virtually every chronic illness known, including susceptibility to recurrent infections, autoimmune diseases like rheumatoid arthritis, diabetic heart and kidney disease, asthma, chronic obstructive pulmonary disease, Alzheimer's dementia, cancer and autism spectrum disorder."

For more than a decade, Naviaux and colleagues have been investigating and developing a theory based on cell danger response (CDR), a natural and universal cellular reaction to injury or stress. In the new paper, Naviaux describes the metabolic features of the three stages of CDR that comprise the healing cycle.

"The purpose of CDR is to help protect the cell and jump-start the healing process," said Naviaux, by essentially causing the cell to harden its membranes, cease interaction with neighbors and withdraw within itself until the danger has passed.

"But sometimes CDR gets stuck. At the molecular level, cellular equilibrium is altered, preventing completion of the healing cycle and permanently changing the way the cell responds to the world. When this happens, cells behave as if they are still injured or in imminent danger, even though the original cause of the injury or threat has passed."

Last year, Naviaux conducted a small, randomized clinical trial of 10 boys diagnosed with autism, treating them with a single dose of a century-old drug that blocks signaling by adenosine triphosphate (ATP), a small molecule produced by cellular mitochondria that serves as a warning siren of danger. When the abnormal ATP signaling was silenced, the treated boys in the trial displayed dramatically improved communication and social behaviors. They spoke, made eye contact and ceased repetitive motions. But the benefits were transient, fading and disappearing as the drug exited their systems. Naviaux's team is preparing for a larger, longer trial in 2019.

In his new paper, Naviaux describes in detail how, based on growing evidence, he believes metabolic dysfunction drives chronic disease. Progression through the healing cycle, he said, is controlled by mitochondria -- organelles within cells best known for producing most of the energy cells need to survive -- and by metabokines, signaling molecules derived from metabolism that regulate cellular receptors, including more than 100 linked to healing.

"It's abnormalities in metabokine signaling that cause the normal stages of the cell danger response to persist abnormally, creating blocks in the healing cycle," said Navaiux, who noted CDR theory also explains why some people heal more quickly than others and why a chronic disease seemingly treated successfully can relapse. It's a form of metabolic "addiction" in which the recovering cell becomes conditioned to its impaired state.

Naviaux suggests science may be on the cusp of writing a second book of medicine, one that focuses on the prevention of chronic illness and new treatments for chronic disease that can help some people recover completely, where old approaches produced only small improvements with symptoms that persisted for life.

"The idea would be to direct treatments at the underlying processes that block the healing cycle," he said. "New treatments might only be given for a short period of time to promote healing, not unlike applying a cast to promote the healing of a broken leg. When the cast is removed, the limb is weak, but over time, muscles recover and bone that was once broken may actually be stronger."

"Once the triggers of a chronic injury have been identified and removed, and on-going symptoms treated, we need to think about fixing the underlying issue of impaired healing. By shifting the focus away from the initial causes to the metabolic factors and signaling pathways that maintain chronic illness, we can find new ways to not only end chronic illness but prevent it."

Credit: 
University of California - San Diego

Criminals don't think about risk like law-abiding citizens do

For the first time, a study has shown that risk is cognitively processed differently by lawbreakers than by law-abiding citizens, allowing researchers to better understand the criminal mind.

"We have found that criminal behavior is associated with a particular kind of thinking about risk," said Valerie Reyna, the Lois and Melvin Tukman Professor of Human Development and director of the Cornell University Magnetic Resonance Imaging Facility. "And we have found, through our fMRI capabilities, that there is a correlate in the brain that corresponds to it."

Marriage protects against malnutrition in old age

More and more elderly people are suffering from malnutrition. People who are unmarried, separated or divorced are most often affected, whilst men and women who are either married or widowed tend to take better care of themselves. Those who have difficulty walking or coping with stairs or who have just returned home from hospital are also more likely to suffer from malnutrition than others of the same age. This is the conclusion reached in a meta-analysis conducted by Prof. Dr. Dorothee Volkert and her team at the Institute for Biomedicine of Ageing (IBA) at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU).

Malnutrition can occur at any age, but elderly people aged 65 and above are particularly prone to it. 'We speak of malnutrition when people have a drastically reduced dietary intake and the body lacks energy and nutrients as a result,' explains Prof. Dr. Dorothee Volkert, although there is no binding scientific definition for the term at present. 'The consequences of malnutrition are manifold. They range from weight loss to a weakened immune system or functional impairment of muscles and all organs. The body falls back on all its reserves.'

MaNuEL connects researchers

Prof. Dr. Volkert is working with researchers from seven countries to track down the causes of malnutrition. The collaborative project 'Malnutrition in the Elderly' (MaNuEL) brings together 22 research groups from Austria, France, Germany, Ireland, Spain, the Netherlands and New Zealand. The project was launched in March 2016 for a duration of two years and received funding of approximately 1.9 million euros from the Federal Ministry of Food and Agriculture (BMEL) and national funding organisations in Austria, Ireland and the Netherlands, as well as non-cash contributions from the research groups involved.

Within the framework of MaNuEL, the researchers shared their knowledge relating to malnutrition in the elderly. In the next step, they intend to make recommendations for screening for and preventing malnutrition in the elderly on the basis of their joint database. Prof. Dr. Dorothee Volkert from the Institute for Biomedicine of Ageing (IBA) at FAU is one of the two coordinators of the project.

First meta-analysis on causes of malnutrition

The work package 'Determinants of malnutrition' is located at the IBA. 'Until now, we unfortunately didn't know which were the key factors behind malnutrition,' explains the nutrition expert Prof. Dr. Volkert. She and her team therefore set out to explore which of a total of 23 variables - ranging from aspects such as difficulties with chewing and swallowing or cognitive impairments to loneliness, depression or moving into a care home - were decisive for malnutrition. 'The research partners took six existing sets of data from studies on the elderly over the age of 65 and re-evaluated them using a common approach. We then compiled the results in a meta-analysis,' explains Prof. Dr. Volkert. The overall result: 'Malnutrition in the elderly appears to be caused by a surprisingly narrow range of factors. Only age, marital status, difficulties with walking and coping with stairs, and stays in hospital played a significant role,' according to Prof. Dr. Volkert. A lack of appetite, which is often perceived as a key cause of malnutrition, was of no relevance.
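
As a generic illustration of how per-study results for a single determinant can be pooled, and not the MaNuEL analysis or its data, the sketch below performs a fixed-effect, inverse-variance weighted meta-analysis on invented odds ratios for one hypothetical risk factor.

```python
# Illustrative fixed-effect meta-analysis for one determinant (e.g., a recent
# hospital stay) across several cohorts, pooling log odds ratios with
# inverse-variance weights. All numbers below are invented for the example.
import math

# (odds_ratio, lower_95_CI, upper_95_CI) for each hypothetical cohort
studies = [(1.8, 1.1, 2.9), (1.4, 0.9, 2.2), (2.1, 1.3, 3.4)]

log_ors, weights = [], []
for or_value, lower, upper in studies:
    log_or = math.log(or_value)
    se = (math.log(upper) - math.log(lower)) / (2 * 1.96)  # SE from the 95% CI
    log_ors.append(log_or)
    weights.append(1.0 / se ** 2)                          # inverse-variance weight

pooled = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print("pooled OR:", round(math.exp(pooled), 2))
print("95% CI:", round(math.exp(pooled - 1.96 * pooled_se), 2),
      "to", round(math.exp(pooled + 1.96 * pooled_se), 2))
```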

The average age of the 4,844 participants in the six studies on which the results are based was between 72 and 85. All those surveyed lived in private homes in Germany, Ireland, the Netherlands and New Zealand. Between 4.6 and 17.2 percent of the participants developed malnutrition over the course of the studies. 'The older the people are, the more likely it is that they will suffer from malnutrition', according to Prof. Dr. Dorothee Volkert. 'The risk increases a little with every year that passes.'

Further studies required

Prof. Dr. Volkert recommends carrying out further studies following the same methods in order to identify further factors. 'We need a common definition of malnutrition and a uniform design for our studies. Only then can we obtain comparable results and make recommendations for preventative measures.'

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg