
Significant link found between air pollution and neurological disorders

Boston, MA - Air pollution was significantly associated with an increased risk of hospital admissions for several neurological disorders, including Parkinson's disease, Alzheimer's disease, and other dementias, in a long-term study of more than 63 million older U.S. adults, led by researchers at Harvard T.H. Chan School of Public Health.

The study, conducted with colleagues at Emory University's Rollins School of Public Health and Columbia University's Mailman School of Public Health, is the first nationwide analysis of the link between fine particulate (PM2.5) pollution and neurodegenerative diseases in the U.S. The researchers drew on far more data than any previous study of air pollution and neurological disorders.

The study will be published online October 19, 2020 in The Lancet Planetary Health.

"The 2020 report of the Lancet Commission on dementia prevention, intervention, and care has added air pollution as one of the modifiable risk factors for these outcomes," said Xiao Wu, doctoral student in biostatistics at Harvard Chan School and co-lead author of the study. "Our study builds on the small but emerging evidence base indicating that long-term PM2.5 exposures are linked to an increased risk of neurological health deterioration, even at PM2.5 concentrations well below the current national standards."

Researchers looked at 17 years' worth (2000-2016) of hospital admissions data from 63,038,019 Medicare recipients in the U.S. and linked these with estimated PM2.5 concentrations by zip code. Taking into account potential confounding factors such as socioeconomic status, they found that each 5 microgram per cubic meter of air (μg/m3) increase in annual PM2.5 concentrations was associated with a 13% increased risk of first-time hospital admissions, both for Parkinson's disease and for Alzheimer's disease and related dementias. This risk remained elevated even below supposedly safe levels of PM2.5 exposure, which current U.S. Environmental Protection Agency standards set at an annual average of 12 μg/m3 or less.
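Under the log-linear exposure-response implicit in this kind of hazard-ratio estimate, the reported 13% per 5 μg/m3 can be rescaled to other concentration increments by compounding. A minimal sketch (the function name and default value are illustrative, not taken from the study):

```python
def scaled_hazard_ratio(delta_ug_m3, hr_per_5=1.13):
    """Rescale a hazard ratio reported per 5 ug/m3 of PM2.5 to an
    arbitrary concentration increase, assuming a log-linear
    exposure-response (as proportional-hazards models imply)."""
    return hr_per_5 ** (delta_ug_m3 / 5.0)

print(scaled_hazard_ratio(5))   # the reported estimate, 1.13
print(scaled_hazard_ratio(10))  # two 5-unit steps compound, ~1.28
```

Note that the compounding means a 10 μg/m3 increase implies roughly a 28% higher risk, not 26%: relative risks multiply rather than add.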

Women, white people, and urban populations were particularly susceptible, the study found. The highest risk for first-time Parkinson's disease hospital admissions was among older adults in the northeastern U.S. For first-time Alzheimer's disease and related dementias hospital admissions, older adults in the Midwest faced the highest risk.

"Our U.S.-wide study shows that the current standards are not protecting the aging American population enough, highlighting the need for stricter standards and policies that help further reduce PM2.5 concentrations and improve air quality overall," said Antonella Zanobetti, principal research scientist in Harvard Chan School's Department of Environmental Health and co-senior author of the study.

Credit: 
Harvard T.H. Chan School of Public Health

What lies between grey and white in the brain

image: The team created very high resolution maps of the white-grey matter border across the entire living brain.

Image: 
MPI CBS

Traditionally, neuroscience regards the brain as being made up of two basic tissue types. Billions of neurons make up the grey matter, forming a thin layer on the brain's surface. These neuronal cells are interlinked in a mind-boggling network by hundreds of millions of white matter connections, running in bundles, deeper in the brain. Until very recently, not much was known about the interface between the white and grey matter - the so-called superficial white matter - because methods were lacking to study it in living human brains. Yet previous investigations had suggested that the region is implicated in devastating conditions such as Alzheimer's disease and autism. Now a multidisciplinary team led by Nikolaus Weiskopf from the Max Planck Institute for Human Cognitive and Brain Sciences has succeeded in making the superficial white matter visible in the living human brain.

"We demonstrated that the superficial white matter contains a lot of iron. It is known that iron is necessary for the process of myelination," explains Evgeniya Kirilina, first author of the study published in Science Advances. Myelin is what makes the white matter white. It's the fatty coating of nerve cell axons that speeds up transmission of information through the brain. The myelination process can occur throughout the lifespan but is predominant during development. In fact, the largest concentration of iron the researchers found was in the superficial white matter in regions of the frontal cortex, which happens to be the slowest developing structure in the human brain. Incredibly, the human frontal cortex is not fully myelinated until the fourth decade of life.

The key to the new method is MRI (Magnetic Resonance Imaging), but at very high field strength. While typical clinical MRI scanners work at 1.5 or 3 Tesla, in terms of the strength of the magnetic field, the Max Planck Institute for Human Cognitive and Brain Sciences houses a powerful 7 Tesla scanner. This, in combination with an advanced biophysical model, allowed the team to create very high resolution maps of the white-grey matter border across the entire living brain. The accuracy of their submillimetre maps was assessed against classic and advanced histological methods involving physical dissection and analysis of post mortem brains.

The new method promises many further insights into the organisation of the interface between white and grey matter. Evgeniya Kirilina adds, "We hope the method can be used to increase our understanding of brain development as well as pathological conditions involving the superficial white matter."

Credit: 
Max-Planck-Gesellschaft

Results from the VOYAGER PAD Trial reported at TCT Connect

NEW YORK - October 18, 2020 - A large subgroup analysis of a randomized clinical trial showed neither a mortality risk nor benefit associated with the use of paclitaxel drug-coated devices (DCD) in the treatment of peripheral artery disease (PAD). The study also found that the benefit of rivaroxaban use on reducing ischemic limb and cardiovascular outcomes was consistent regardless of whether a DCD was used.

Findings were reported today at TCT Connect, the 32nd annual scientific symposium of the Cardiovascular Research Foundation (CRF). TCT is the world's premier educational meeting specializing in interventional cardiovascular medicine.

Paclitaxel drug-coated devices (DCD) improve patency of lower extremity revascularization (LER) in patients with peripheral artery disease (PAD). However, meta-analyses of randomized trials of DCD have reported increased long-term mortality compared with non-DCD. These concerns have led to warnings from regulatory agencies about the use of DCD in patients with PAD.

VOYAGER PAD was a double-blind, placebo-controlled trial of PAD patients undergoing lower extremity revascularization (LER) randomized to rivaroxaban 2.5 mg twice daily or placebo on a background of aspirin 100 mg daily. Clopidogrel was allowed per operator discretion. The primary results of this trial, presented earlier this year at ACC and published in the New England Journal of Medicine, found that rivaroxaban at a dose of 2.5 mg twice daily plus aspirin was associated with a significantly lower incidence of the composite outcome of acute limb ischemia, major amputation for vascular causes, myocardial infarction, ischemic stroke, or death from cardiovascular causes than aspirin alone.

This analysis examined the long-term safety of DCD and evaluated whether the effect of rivaroxaban 2.5 mg twice daily plus low-dose aspirin versus low-dose aspirin alone on the primary efficacy endpoint was consistent with and without DCD use.

Deaths were prospectively collected and adjudicated. All-cause mortality, a prespecified secondary outcome, was the primary outcome for this analysis. Device type was collected at enrollment in patients undergoing endovascular LER.

Among 6,564 randomized patients, 66% (n=4,316) underwent endovascular index LER and were included in this analysis; median follow-up was 31 months (IQR 25, 37), and complete ascertainment of vital status was available for 99.6% of patients. During the qualifying endovascular LER, DCD was used for 31% (n=1,358) of patients. Patients receiving DCD more frequently had prior endovascular LER, had higher baseline use of dual antiplatelet therapy and statins, and were more often treated for claudication than non-DCD patients.

In the unweighted analysis, lower associated mortality was observed among patients receiving DCD versus non-DCD (2.9 vs. 3.9 per 100 patient-years; 3.5-year Kaplan-Meier cumulative incidence of 10.2% vs. 13.8%). After weighting, there was no association between DCD use and mortality (3.5-year cumulative incidence 12.1% vs. 12.6%, HR 0.95, 95% CI 0.83-1.09, p=0.49). The benefit of rivaroxaban 2.5 mg twice daily with aspirin compared to aspirin alone on reducing ischemic limb and cardiovascular outcomes was also consistent regardless of whether a DCD was used.

"Inverse Probability Treatment Weighting (IPTW) successfully adjusted for known confounders and showed no mortality risk or benefit associated with DCD, including in subgroups by device type," said Connie N. Hess, MD, MHS. Dr. Hess is an Associate Professor of Medicine (Cardiology) at the University of Colorado School of Medicine, Aurora. "This analysis from VOYAGER PAD addresses many of the limitations of currently available data regarding mortality and paclitaxel and adds to the literature examining the safety of drug-coated devices."
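The weighting idea behind IPTW can be illustrated with a toy example: patients are weighted by the inverse probability of the treatment they actually received, so that the confounder's distribution is balanced across arms. This is a minimal sketch on synthetic data with one binary confounder and a truly null treatment effect; all variable names and numbers are invented, not VOYAGER PAD data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Synthetic binary confounder (a hypothetical stand-in, e.g. prior
# revascularization -- not the trial's actual covariate set).
x = rng.binomial(1, 0.4, n)

# Treatment assignment (think: DCD use) depends on the confounder.
ps_true = np.where(x == 1, 0.5, 0.2)   # true propensity score
t = rng.binomial(1, ps_true)

# Outcome depends on the confounder only: the true treatment effect is null.
y = rng.binomial(1, np.where(x == 1, 0.20, 0.10))

# Naive comparison is confounded: treated patients are sicker by construction.
naive_diff = y[t == 1].mean() - y[t == 0].mean()

# IPTW: weight each patient by the inverse probability of the treatment
# received. In a real analysis the propensity score is estimated, e.g.
# by logistic regression on baseline covariates.
w = np.where(t == 1, 1 / ps_true, 1 / (1 - ps_true))
iptw_diff = (np.average(y[t == 1], weights=w[t == 1])
             - np.average(y[t == 0], weights=w[t == 0]))

print(f"naive difference: {naive_diff:.3f}, IPTW difference: {iptw_diff:.3f}")
```

The naive difference comes out clearly positive even though treatment does nothing, while the weighted difference collapses toward zero: the same mechanism by which the VOYAGER PAD analysis moved the apparent DCD mortality difference to a null HR of 0.95. Like any propensity method, IPTW can only balance measured confounders.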

Credit: 
Cardiovascular Research Foundation

The 'Goldilocks Day': the perfect day for kids' bone health

image: Children's activities throughout the whole 24-hour day are important for their bone health.

Image: 
Unsplash

Not too little, not too much: Goldilocks' 'just right' approach can now be applied to children's daily activities, as new research from the University of South Australia identifies the best makeup of a child's day for maximising bone health and function.

Examining 804 Australian children aged 11 to 13 years, the world-first study found that children need more moderate-to-vigorous physical activity, more sleep and less sedentary time to optimise bone health.

The study found the ideal balance of a child's activities across a 24-hour period comprises:

1.5 hours of moderate-to-vigorous physical activity (sports, running around)

3.4 hours of light physical activity (walking, doing chores)

8.2 hours of sedentary time (studying, sitting at school, reading)

10.9 hours of sleep.

Lead researcher, UniSA's Dr Dot Dumuid, says the findings provide valuable insights for parents, caregivers and clinicians.

"Children's activities throughout the whole 24-hour day are important for their bone health, but until now, we haven't known the perfect combination of exercise, sleep and sedentary time," Dr Dumuid says.

"Higher levels of physical activity are known to be good for children's bone health, yet we can't just increase children's exercise without impacting their other activities.

"In this study, we looked at the interrelating factors of physical activity (both light, and moderate-to-vigorous physical activity), sedentary time and sleep, finding an ideal combination that delivers the best daily balance.

"The 'Goldilocks Day' tells us the durations of physical activity, sleep and sitting that are 'just right' for children's optimal bone health."

"Up to 90 per cent of peak bone mass is achieved by age 18-20, which makes this especially important during childhood and adolescence.

"Optimising bone health in children is a key protector against osteoporosis, the leading preventable cause of fracture in adults and a major public health problem with considerable economic and societal costs."

Osteoporosis is common in Australia, with 1.2 million people estimated to have the condition and a further 6.3 million with low bone density. Globally, osteoporosis affects 200 million people, with 75 million cases across Europe, USA and Japan.

In this study, participants were selected from the Child Health CheckPoint study within the Longitudinal Study of Australian Children. Activity data was collected through accelerometer readings (worn for 24 hours a day over an eight-day period), supplemented by self-recorded logs for bed and wake times. Bone measures were recorded via peripheral QCT scans of the leg (ankle and shin) to identify bone density and geometric parameters.

Dr Dumuid says the study also highlights the importance of sleep, especially for boys.

"We always talk about getting enough exercise to help build bones, but for children, it's vital that they also get enough sleep.

"Curiously, the study also showed that sleep is more important for boys' bone health than for girls, with boys needing an extra 2.4 hours of sleep a day. However, boys tended to be at earlier stages of pubertal development than girls, causing us to speculate that the need for longer sleep is related to rapidly changing hormonal processes rather than gender.

"By knowing the best balances and interrelations of sleep, exercise and rest, parents and caregivers can guide their child's daily activities to put them in good stead for future bone health."

Credit: 
University of South Australia

Showcasing successful women's STEM achievements, a social vaccine against gender stereotypes

According to data published by the Organization for Economic Co-operation and Development (OECD), female participation in the labour market has risen over the past 35 years, with women now accounting for 52.5% of the total workforce. Despite this increase, gender equality in the workplace is still far from a reality. In traditionally male-dominated fields, such as those known by the STEM acronym (for science, technology, engineering and mathematics), only two of every ten positions are occupied by women.

This underrepresentation distances women from accessing leadership positions and results in the exclusion of the feminine perspective in creating and developing solutions in the digital transformation era. It also leads to an absence of role models that showcase the contributions made by women in these areas, which may in turn cause children and teens to mistakenly think that the talent and skills required to pursue STEM careers are correlated with masculinity.

As such, in a study published in the open access journal Frontiers in Psychology, a team of researchers led by Milagros Sáinz, director of the GenTIC (Gender and ICT) research group at the Universitat Oberta de Catalunya (UOC) Internet Interdisciplinary Institute (IN3), has demonstrated the impact of female role models on girls' preferences for studying STEM subjects.

The researchers evaluated the effectiveness of an intervention implemented in sixteen schools in various cities around Spain, involving 304 girls aged between twelve and sixteen. The intervention formed part of a programme developed by the Inspiring Girls Foundation to promote scientific and technological vocations among girls. The programme recruits successful women working in STEM fields as volunteers to go into schools and talk to the children about their careers. The hope is that this contact with female role models will help prevent the perpetuation of gender stereotypes about STEM competency and encourage girls to opt for university programmes in these fields.

"From a very early age, around the age of six, girls are conditioned to think that they are not as good at maths as their male counterparts. This programme, however, focuses on girls in secondary education aged between twelve and seventeen, as this represents a crucial time during which they have to make choices about which academic path to follow," explained Sáinz.

Dismantling gender stereotypes

The youngsters who participated in the study, which examined their perceptions in relation to mathematics, were asked to complete a questionnaire both before and after the talks in which they needed to rate the validity of statements, such as 'Maths is more important for boys', 'Boys are better at maths than girls,' and 'I am talented at maths.'

The aim was to analyse the extent to which the intervention - attending the talks given by successful women working in STEM - changed the girls' perceptions about whether women are able to succeed in these fields and whether it increased the likelihood of them choosing to go on to study a STEM subject at university.

"We observed how effective the sessions were in neutralizing the negative effects of gender stereotypes, which advocate that girls have less of an affinity for mathematics, in relation to their predisposition to choose to study STEM subjects," stressed Sáinz.

Thus, according to the results of the study, coming into contact with successful women working in traditionally male-dominated STEM fields helps promote an interest in these areas of study for girls. "The sessions with the role models also showed the girls a reality that was contrary to established gender stereotypes regarding the kind of people that supposedly work in these sectors and the requirements needed to enter them," the UOC researcher pointed out.

The role played by families and teachers

Sáinz has also recently published another study, again in the Frontiers in Psychology journal, on how the assessments made, often unconsciously, by parents and teachers with regard to the academic skills of adolescents help to reinforce gender stereotypes and roles. Surprisingly, the researchers identified a discrepancy between the actual academic performance of students and the perception of their abilities by parents and teachers.

In fact, the study, which involved eight focus groups made up of 39 parents and 34 secondary school teachers, showed that many adults are unaware that girls achieve higher grades across all subjects, including those traditionally associated with masculine roles, such as maths, technology, physics and chemistry.

Many of them also continue to attribute academic performance to biological or genetic differences, without reflecting on the implications of this or on how these misconceptions contribute to replicating gender biases and perpetuating a socialization process based on emphasizing the differences between men and women.

"Although some parents and teachers are aware of this, they don't possess the strategies to combat these gender biases," said Sáinz. As such, the researcher suggests that efforts still need to be made to seek strategies to effectively combat these biases through training and intervention programmes aimed at families and the educational community.

Credit: 
Universitat Oberta de Catalunya (UOC)

Changes in blood metabolite profile are visible years before diagnosis of alcohol-related disease

image: The serum metabolite profile can be used to identify individuals likely at risk of developing an alcohol-related disease in the future.

Image: 
UEF / Raija Törrönen

A new study from the University of Eastern Finland is the first in the world to show that the serum metabolite profile can be used to identify individuals likely at risk of developing an alcohol-related disease in the future. The finding also opens up new avenues for preventing alcohol-related adverse effects. The study was published in Alcoholism: Clinical and Experimental Research.

Alcohol is the cause underlying many severe diseases, such as alcohol dependence, liver cirrhosis and different types of cancer. It is estimated that alcohol accounts for approximately five per cent of the global burden of disease, and the WHO has listed the reduction of excessive consumption of alcohol as one of its most important priorities.

"However, it is challenging to identify individuals most in need of an intervention, i.e., people who will go on to develop an alcohol-related disease," Senior Researcher Olli Kärkkäinen says.

The new study from the University of Eastern Finland discovered that changes in the serum metabolite profile are visible years before an individual is diagnosed with an alcohol-related disease. The researchers used metabolomics methods to analyse serum samples collected from middle-aged Finnish men in the 1980s as part of a prospective study focusing on risk factors of coronary artery disease. They analysed baseline serum samples from individuals who were diagnosed with an alcohol-related disease in the course of a 30-year follow-up. On average, the diagnosis was made 13.6 years after the sample was taken. The study had two control groups: one consisted of individuals whose consumption of alcohol at baseline was equally heavy but who were not diagnosed with an alcohol-related disease later on. The other consisted of individuals whose consumption of alcohol at baseline was moderate, allowing the researchers to analyse alcohol-related changes.

There were significant differences in the groups' serum metabolite profiles. After controlling for self-reported alcohol use and gamma-glutamyl transferase levels, a biomarker of alcohol use, the researchers found that individuals who would later develop an alcohol-related disease had significantly lower serum levels of serotonin and asparagine than individuals in the control groups.

"Serotonin is an important mediator that regulates the function of the nervous system, and lower levels of asparagine may be related to an increased risk of alcohol-induced organ damage," Senior Researcher Kärkkäinen says.

Heavy alcohol use in itself was associated with considerable changes in the blood metabolite profile, e.g., in the levels of amino acids, steroid hormones and fatty acids.

"Our study is the first to show that the serum metabolite profile could be used to identify, already in advance, individuals who are likely to develop an alcohol-related disease in the future. This would have far-reaching consequences: if we can identify these individuals sufficiently early, we can target preventive measures at them. Successful prevention of alcohol-related adverse effects and diseases is highly significant both on the individual and societal levels," Senior Researcher Kärkkäinen says.

A limitation of the study is that it only analysed middle-aged Finnish men who belonged to a risk group for alcohol-related diseases.

"Future research should focus on analysing whether these findings can be generalised to other population groups, including women, younger people and people who are not Finnish," Senior Researcher Kärkkäinen says.

Credit: 
University of Eastern Finland

Advancing wildlife genomics through the development of molecular methods

image: Scientists tested the new SIP method for genome sequencing on the koala retrovirus.

Image: 
David Clode

A team of scientists from the Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW), the Australian Museum and the Max Delbrück Center for Molecular Medicine (MDC) report a new method for identifying any genome sequence located next to a known sequence. It is often difficult to precisely determine unknown sequences close to small known fragments. Whole genome sequencing can be a solution, but it is a very cost-intensive approach. In order to find a more efficient technique, the scientists developed Sonication Inverse PCR (SIP): first, DNA is cut into random pieces using ultrasound waves. After DNA fragmentation, long-range inverse PCR is performed, followed by long-fragment high-throughput sequencing. SIP can be used to characterise any DNA sequence near a known sequence and can be applied across genomics applications, within a clinical setting as well as in molecular evolutionary analyses. The results are reported in the scientific journal Methods in Ecology and Evolution.

Many methods have been developed to identify sequences next to a determined sequence of interest. Inverse PCR based methods are among the most common methods and have been used for decades but suffer from bias because of the way DNA is cut apart by enzymes: They need to find specific sequence motifs that are not evenly spread across the DNA. Therefore, many neighbouring sequences to a target cannot be characterised without technical difficulty or without the expense and effort of whole genome sequencing. "Sonication Inverse PCR (SIP) circumvents this problem by using high-frequency sound waves to randomly cut the DNA, eliminating the bias resulting from the use of enzymes", Prof Alex Greenwood from Leibniz-IZW explains. "The fragments are then turned into circles and the so-called inverse PCR is applied." With the development of long-fragment sequencing, the authors were able to target 4-6 thousand base long inverse PCR fragments and sequenced them at high-throughput on the PacBio RS II sequencing platform.

The new method was tested on a complex model, the koala retrovirus (KoRV), a high copy retrovirus found in the koala (Phascolarctos cinereus) genome. Targeting the ends of the integrated virus, the full spectrum of viral integrations in the genome could be determined using a small 'known' piece of viral DNA. Mapping the integrations against reference genomes provided precise genomic locations for each integration at a resolution that would otherwise require a large sequencing effort. "Applying this method allowed us to discover a koala specific defense mechanism against KoRV", says Dr Ulrike Löber from the MDC (see also Löber et al. 2018 [1]).

"SIP is economical and can be simultaneously applied to many samples by including barcodes to the PCR primers, making the method cost efficient", adds Dr David Alquezar, former member of the Leibniz-IZW team and now manager of the Australian Centre for Wildlife Genomics at the Australian Museum. The authors continue to apply SIP to address different problems, such as how viruses become integrated into genomes and how they cause diseases. In conclusion, SIP provides a new protocol for high-throughput profiling of flanking sequences next to any region of interest coupled with long-range sequencing, allowing scientists to study complex biological systems such as mobile genetic elements.

Credit: 
Forschungsverbund Berlin

Fear of COVID-19 raises risk of depression among Soweto's deprived communities

A STUDY into the impact of the COVID-19 lockdown on the mental health of people in Soweto has found a significant link between symptoms of depression and how likely people felt they were to be infected.

Researchers also found that both the perceived risk of infection and the likelihood of depression and anxiety increased among people who had suffered childhood trauma and among those already suffering the effects of poverty and deprivation.

Associations between depression and issues such as hunger, violence, poor healthcare, and high rates of poverty have long been recognised, but this study is the first to look at the mental health effects of the pandemic and national lockdown in South Africa under those conditions.

Researchers spoke to more than 200 adults who were already part of a long-term health study in Soweto. This had surveyed 957 people in the months before the pandemic, measuring their risk of mental ill-health, including depression, by asking them to score their mood, feelings, and behaviour. The participants were also asked about day-to-day adversity, such as family strife, poverty, deprivation, and violence; about their ways of coping, including support from friends, family, and church; and about adverse experiences in childhood like abuse, neglect, and household dysfunction.

The follow-up survey was carried out over the phone after the first six weeks of lockdown. It asked people to score themselves against major symptoms of depression during the previous month, assessed their knowledge of COVID-19 and how to protect against it, and asked whether they thought they were at less risk, the same risk or a greater risk than others.

The results, published in the Cambridge journal Psychological Medicine, showed that people were twice as likely to experience significant depressive symptoms for every step increase in their perceived risk from COVID-19. Those with a history of childhood trauma were also more likely to perceive a higher risk of contracting the virus.

In all, 14.5 per cent of those surveyed were found to be at risk of depression, with 20 per cent indicating that COVID-19 caused them deep worry, anxiety, or led to them 'thinking too much' about the virus and its impact.

While the majority did not think COVID-19 affected their mental health, both the data and what people said about its impact on their lives suggested otherwise.

Dr. Andrew Wooyoung Kim of Northwestern University, who co-directed the study for the Developmental Pathways for Health Research Unit at the University of the Witwatersrand, said: "This discrepancy may be due to different ideas of mental health, including mental health stigma.

"While participants believed that the pandemic did not affect their mental health or their 'mind', the strong relationship between perceived risk and depressive symptoms raises the concern that they may not be aware of the potential threats to their mental health during COVID-19."

These threats were amplified by other pre-existing adversities, said Dr Kim and his colleagues, including hunger and violence, an overburdened healthcare system, a high prevalence of chronic and infectious disease, and alarming rates of poverty and unemployment.

They argue that the pressures of COVID-19 and lockdown risk adding to the already high levels of mental illness among people in South Africa, where one in three experience some kind of mental disorder in their lifetimes and where only 27 per cent of patients with a severe mental illness receive treatment.

Dr Kim said: "Our study re-emphasizes the importance of prioritizing and provisioning accessible mental health resources for resource-limited communities in Soweto and across South Africa."

Credit: 
Cambridge University Press

Studying new solar tracking strategies to maximize electric production

image: The University of Cordoba analyzed a new strategy for solar tracking using backtracking in order to avoid shadows being cast among solar panels in photovoltaic plants.

Image: 
Universidad de Córdoba

From powering a small calculator to producing the entire output of a major brewery, solar energy has undergone significant growth in recent years, displacing nonrenewable energy resources that harm the environment.

In addition to producing clean energy, solar plants can be adapted to different sizes and allow for self-consumption. Over the last few years, their profitability compared with other kinds of energy has grown steadily, driven by falling material prices and the continued optimizations emerging from research in science and technology.

Researchers at the University of Cordoba set out to further improve solar plant performance by focusing on some of the remaining disadvantages: the high variability of the solar resource and the shadows cast among the collectors. Specifically, they focused on photovoltaic plants, which convert sunlight into electricity.

At photovoltaic plants, it is common to use two-axis solar trackers. "These trackers are inspired by sunflowers and they seek to maximize solar light collection by the movement of the photovoltaic modules. However, this movement can create partial shadows among the modules, which negatively affects energy production", explains Luis Manuel Fernández Ahumada, one of the researchers working on the study.

This research integrates two methodologies that were undertaken in previous studies. On the one hand, a mathematical model was created to optimize the collection of solar light, applicable to isolated trackers. On the other hand, using a simplified geometric model, they were able to characterize possible shadows among the trackers.

Having done this analysis, the researchers proposed specific design recommendations, which were tested at photovoltaic plants with two-axis tracking systems located in Cordoba. With the backtracking technique, the panels follow the sun as they are programmed to do and, when one panel could cast a shadow on others, they rotate back slightly, thus avoiding casting shadows upon each other.
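The geometry behind backtracking can be sketched in a few lines. The following is a simplified, hypothetical model (the single-axis geometry, panel width, row spacing, and iterative scheme are illustrative assumptions, not the study's actual algorithm):

```python
import math

def shadow_extent(tilt_deg, elev_deg, panel_width):
    """Horizontal extent of the shadow cast by a panel of the given width,
    tilted tilt_deg from horizontal, under a sun elevation of elev_deg."""
    t = math.radians(tilt_deg)
    e = math.radians(elev_deg)
    # footprint of the tilted panel plus the shadow thrown by its raised edge
    return panel_width * (math.cos(t) + math.sin(t) / math.tan(e))

def backtracked_tilt(elev_deg, panel_width, row_spacing):
    """Start from the ideal tilt (panel perpendicular to the sun) and
    flatten the panel in 0.5-degree steps until its shadow fits within
    the row spacing, i.e. no longer reaches the neighbouring tracker."""
    tilt = max(0.0, 90.0 - elev_deg)  # ideal: face the sun directly
    while tilt > 0 and shadow_extent(tilt, elev_deg, panel_width) > row_spacing:
        tilt -= 0.5  # backtrack toward horizontal
    return max(tilt, 0.0)

# High sun: no shading, the ideal angle is kept.
print(backtracked_tilt(60, panel_width=2, row_spacing=6))
# Low sun: the tracker backs off well below the ideal 80 degrees.
print(backtracked_tilt(10, panel_width=2, row_spacing=6))
```

At high sun elevations the shadow is short and the ideal angle is used; at low elevations the tracker sacrifices some direct collection to keep neighbouring rows unshaded, which is where the reported annual gain comes from.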

"It was proven that with this tracking strategy, these plants could produce at least 2% more energy annually", points out Luis Manuel Fernández Ahumada. This maximizes a plant's performance compared with plants that do not account for the energy lost to shadows cast among panels.

This study is framed within the Solar Energy Assessment and Planning Tool (SEAP) service of the CLARA project, which aims to create an ecosystem of services that use weather forecasting data to improve processes. Funded by the European Union under the Horizon 2020 programme, the project is carried out by a European consortium of universities, regional governments and businesses. The SEAP service focuses on improving photovoltaic production and is coordinated by the University of Cordoba, which participates via the TEP-215 Physics for Renewable Energy research group.

The group continues to work on optimizing solar plants. It is currently developing sensor devices that provide the optimal positioning for solar trackers in real time, maximizing energy production without creating shadows. In another line of work, the group is studying the use of weather forecasting to design solar tracking strategies that produce the maximum amount of energy possible.

Credit: 
University of Córdoba

New 'green' engine for lorries ahead of the demanding anti-contamination regulation

image: New configuration unites all the benefits of hybrid and dual-fuel combustion engines.

Image: 
CMT-Motores Térmicos UPV

The results of the first theoretical-experimental tests are conclusive: compared to diesel, the technology proposed by the CMT-Thermal Engines researchers of the UPV decreases the levels of NOx and soot by 92% and 88% respectively, and CO2 emissions from the exhaust pipe by 15% - down to 52 g/tkm (grams per tonne-kilometre) -, thus getting ahead of the demanding anti-contamination regulation approved for 2025. These results have been published in the journal Energy Conversion and Management.

"The goal of the study was to assess the techno-economic potential of parallel hybrid technology applied alongside dual-fuel technology as an alternative to pure electrification in order to achieve the drastic decrease in CO2 emissions required by 2025 - in five years, these lorries must emit 15% less carbon dioxide. And the figures we have obtained, both for carbon dioxide and some of the other most harmful contaminating agents from combustion engines, have been very positive," highlights Antonio García, full professor at the UPV and researcher at CMT-Thermal Engines.

Along with the CMT-Thermal Engines group of the UPV, companies Volvo Group Trucks Technology (France) and Aramco Overseas Company (France), which CMT-Thermal Engines has worked with for over a decade, have also taken part in the study.

Maximum efficiency, less contamination

Uniting both technologies, dual-fuel combustion and a hybrid structure, makes it possible to maximise the benefits of both of them. "Electrical assistance prevents the use of the thermal engine in low-efficiency conditions. At the same time, the addition of the thermal engine in the full system makes it possible to obtain economically-viable vehicles compared to the purely electrical and relatively clean ones," says Antonio García.

The CMT-Thermal Engines researcher stresses that the dual-fuel parallel hybrid combustion technology makes it possible to decrease NOx emissions by over 90% compared to diesel, with almost zero soot. Furthermore, optimisation of the electrical components allows the thermal engine to work in its areas of highest performance, with 13% less fuel consumption than a conventional diesel vehicle.

"As well as this proposal for a new engine, we are working on the use of alternative fuels, such as e-fuels, to maximise the benefit of this technology in terms of lifecycle CO2 analysis, thus thinking ahead to possible changes in upcoming regulation," says García.

Mathematical models are key

Santiago Martínez, researcher at CMT-Thermal Engines of the UPV, highlights the importance of computer simulations in sizing the different electrical components for use in a parallel hybrid structure together with the dual-fuel combustion system. Numerical simulation was one of the pillars that allowed the study's results to be achieved in a relatively short period of time.

"For this study, a virtual model of the original vehicle was created with a conventional diesel powertrain, and it was validated using experimental data obtained from the lorry by Volvo. Afterwards, we optimised the different electrical components, such as the engine, generator and battery, taking into account the real driving cycles in which the lorry would conduct its activity. This methodology greatly reduces the number of experimental trials required, and therefore also the cost of developing any given technology," highlights Martínez.

Which battery would be the most efficient?

Furthermore, Javier Monsalve, another member of the CMT-Thermal Engines team, explains that in order to determine the potential of this technology compared to current technology, its cost must be evaluated in light of two main factors: on one hand, the price of the batteries, and on the other, the possible savings in penalties for excess CO2 emissions. In their analysis, the researchers took into account the current price of batteries (approx. €176/kWh), their estimated cost in 2025 (approx. €100/kWh), and the economic penalty that will be enforced on lorry manufacturers that do not respect the CO2 limit in 2025: €4,250 per g/tkm.

"Taking into account the current price of batteries and the penalties proposed by the European Union for 2025, the dual-fuel technology for lorries between 18 and 25 tonnes offers the greatest benefit when using small-capacity batteries (up to 10 kWh). Using packs of larger batteries would substantially increase the end cost of the vehicle. That cost would admittedly drop with the foreseeable fall in the price of lithium-ion technology in the coming years, but until then, it will be hard to see purely electric lorries mass-produced," concludes Monsalve.
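As a rough illustration of the trade-off the researchers describe, the sketch below plugs the article's figures (battery prices of roughly €176/kWh today and €100/kWh in 2025, and a penalty of €4,250 per g/tkm of excess CO2) into a toy cost comparison; the function names and the per-vehicle framing are illustrative assumptions, not the study's cost model:

```python
def battery_cost(capacity_kwh, price_per_kwh):
    """Up-front cost of the battery pack in euros."""
    return capacity_kwh * price_per_kwh

def co2_penalty(emissions_g_tkm, limit_g_tkm, penalty_per_g_tkm=4250):
    """EU-style penalty for exceeding the CO2 limit: 4,250 EUR per g/tkm
    of excess (figure quoted in the article); no penalty below the limit."""
    return max(0.0, emissions_g_tkm - limit_g_tkm) * penalty_per_g_tkm

# A 10 kWh pack, at today's price vs the estimated 2025 price:
print(battery_cost(10, 176))  # 1760 (EUR)
print(battery_cost(10, 100))  # 1000 (EUR)

# At 52 g/tkm the hybrid meets a hypothetical 52 g/tkm limit exactly,
# while each g/tkm over the limit costs 4,250 EUR:
print(co2_penalty(52, 52))  # 0.0
print(co2_penalty(53, 52))  # 4250.0
```

Even a single g/tkm of excess CO2 costs more than twice the price of a small battery pack, which is why modest hybridisation pays for itself under the 2025 penalty scheme.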

Credit: 
Universitat Politècnica de València

Detecting early-stage failure in electric power conversion devices

image: Acoustic emission (AE) was applied to monitor wear-out failure in discrete SiC Schottky barrier diode (SBD) devices with a Ag sinter die attachment, to successfully monitor the real-time progress of failure of Al ribbons for the first time. (a) Optical image of a SiC-SBD device. (b) Cross-sectional SEM image. (c) Experimental apparatus for power cycling tests and real-time AE monitoring. (d) Waveform of collected AE signal and its characteristic, including counts and amplitude. (e) Generation, propagation, and collection of AE signals (i.e., elastic waves) in power electronics during a power cycling test.

Image: 
Osaka University

Osaka, Japan - Power electronics regulate and modify electric power. They are in computers, power steering systems, solar cells, and many other technologies. Researchers are seeking to enhance power electronics by using silicon carbide semiconductors. However, wear-out failures such as cracks remain problematic. To help researchers improve future device designs, early damage detection in power electronics before complete failure is required.

In a study recently published in IEEE Transactions on Power Electronics, researchers from Osaka University monitored in real time the propagation of cracks in a silicon carbide Schottky barrier diode during power cycling tests. The researchers used an analysis technique, known as acoustic emission, that had not previously been reported for this purpose.

During the power cycling test, the researchers mimicked repeatedly turning the device on and off to monitor the resulting damage to the diode over time. Increasing acoustic emission corresponded to progressive damage to the aluminum ribbons affixed to the silicon carbide Schottky barrier diode. The researchers correlated the monitored acoustic emission signals with specific stages of device damage that eventually led to failure.

"A transducer converts acoustic emission signals during power cycling tests to an electrical output that can be measured," explains lead author ChanYang Choe. "We observed burst-type waveforms, which are consistent with fatigue cracking in the device."

The traditional method of checking whether a power device is damaged is to monitor anomalous increases in the forward voltage during power cycling tests. Using the traditional method, the researchers found that there was an abrupt increase in the forward voltage, but only when the device was near complete failure. In contrast, acoustic emission counts were much more sensitive. Instead of an all-or-none response, there were clear trends in the acoustic emission counts during power cycling tests.
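Acoustic emission activity is conventionally quantified as "counts": the number of times the transducer signal exceeds a detection threshold. The sketch below is a minimal, hypothetical hit counter (the threshold and waveform are invented for illustration; real AE systems do this at high speed in dedicated hardware):

```python
def ae_counts(signal, threshold):
    """Count excursions (hits) of an AE waveform above a detection
    threshold. Each count is one burst of elastic-wave energy -- the
    quantity that trended upward as cracks grew in the diode."""
    counts = 0
    above = False
    for sample in signal:
        if sample > threshold and not above:
            counts += 1   # a new burst begins
            above = True
        elif sample <= threshold:
            above = False  # burst has ended; re-arm the detector
    return counts

# A burst-type waveform produces clusters of threshold crossings;
# a quiet, undamaged device produces none.
burst = [0.0, 0.2, 1.3, 0.8, 0.1, 1.6, 1.1, 0.05]
print(ae_counts(burst, 1.0))   # 2 excursions above the 1.0 threshold
print(ae_counts([0, 0, 0], 1.0))  # 0
```

Because each crack-growth event emits a burst, the cumulative count rises gradually with damage, unlike the forward voltage, which only jumps near complete failure.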

"Unlike forward voltage plots, acoustic emission plots indicate all three stages of crack development," says senior author Chuantong Chen. "We detected crack initiation, crack propagation, and device failure, and confirmed our interpretations by microscopic imaging."

To date, there has been no sensitive early-warning method for detecting fatigue cracks that lead to complete failure in silicon carbide Schottsky diodes. Acoustic emission monitoring, as reported here, is such a method. In the future, this development will help researchers determine why silicon carbide devices fail, and improve future designs in common and advanced technologies.

Credit: 
Osaka University

AI methods of analyzing social networks find new cell types in tissue

image: Messenger RNA in a small part of the hippocampus from a mouse brain. The colours represent different "social networks".

Image: 
Gabriele Partel

In situ sequencing enables gene activity inside body tissues to be depicted in microscope images. To facilitate interpretation of the vast quantities of information generated, Uppsala University researchers have now developed an entirely new method of image analysis. Based on algorithms used in artificial intelligence, the method was originally devised to enhance understanding of social networks. The researchers' study is published in The FEBS Journal.

The tissue composing our organs consists of trillions of cells with various functions. All the cells in an individual contain the same genes (DNA) in their nuclei. Gene expression occurs by means of "messenger RNA" (mRNA) - molecules that carry messages from the nucleus to the rest of the cell, to direct its activities. The mRNA combination thus defines the function and identity of every cell.

RNA transcripts are obtainable through in situ sequencing. The researchers behind the new study had previously been involved in developing this method, which shows millions of detected mRNA sequences as dots in microscope images of the tissue. The problem is that distinguishing all the important details may be difficult. This is where the new AI-based method may come in useful, since it allows unsupervised detection of cell types as well as detection of functions within an individual cell and of interactions among cells.

"We're using the latest AI methods - specifically, graph neural networks developed to analyse social networks - and adapting them to understand biological patterns and successive variation in tissue samples. The cells are comparable to social groupings that can be defined according to the activities they share in their social networks, like Twitter, sharing their Google search results or TV recommendations," says Carolina Wählby, professor of quantitative microscopy at the Department of Information Technology, Uppsala University.

Earlier methods for analysing this type of data depend on knowing in advance which cell types the tissue contains and on identifying the cell nuclei in it. The conventional method, known as "single-cell analysis", may lose some mRNA and miss certain cell types. Even with advanced automated image analysis, it is often difficult to find the individual cell nuclei if, for example, the cells are densely packed together.

"With our analysis, which we call 'spage2vec', we can now get corresponding results without any previous knowledge of expected cell types. And what's more, we can find new cell types and intra- or intercellular functions in tissue," Wählby says.
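A first step in a spage2vec-style pipeline is to connect each detected mRNA spot to its spatial neighbours, producing the graph that a graph neural network then embeds. The sketch below shows that graph-construction step in pure Python; the data and the k-nearest-neighbour rule are illustrative assumptions, not the authors' implementation:

```python
import math

def knn_graph(spots, k=2):
    """Connect each mRNA detection (x, y, gene) to its k nearest spatial
    neighbours. Returns a dict mapping spot index -> neighbour indices.
    In a spage2vec-like pipeline, a graph neural network would then embed
    each spot by the gene identities in its local neighbourhood, without
    ever needing to segment cell nuclei."""
    edges = {}
    for i, (xi, yi, _) in enumerate(spots):
        dists = sorted(
            (math.hypot(xi - xj, yi - yj), j)
            for j, (xj, yj, _) in enumerate(spots) if j != i
        )
        edges[i] = [j for _, j in dists[:k]]
    return edges

# Four detections forming two spatially separated "communities":
spots = [(0, 0, "GeneA"), (1, 0, "GeneA"), (10, 10, "GeneB"), (11, 10, "GeneB")]
print(knn_graph(spots, k=1))  # each spot links to its same-region neighbour
```

The analogy to social-network analysis is direct: spots are users, spatial proximity defines who "follows" whom, and recurring neighbourhood patterns of gene expression emerge as communities, i.e. candidate cell types.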

The research group is now developing its analytical method further by investigating the differentiation and organisation of various types of cells during the early development of the heart. This is pure basic research, intended to provide more knowledge of the mechanisms that govern development, both when everything is functioning as it should and when disease is present. In another project, a collaboration with cancer researchers, the Uppsala group hopes to apply the new methods to gain a better understanding of how tumour tissue interacts, at the molecular level, with surrounding healthy tissue. The aim is that, in the long term, this will culminate in better treatments that can be adapted to individual patients.

Credit: 
Uppsala University

Gut bacteria could be responsible for side effect of Parkinson's drug

image: Levodopa (top right) is converted by gut bacteria to DHPPA (middle right), which has an inhibitory effect on the acetylcholine-induced gut motility (depicted on the bottom right). These findings are of importance to Parkinson's patients, who often experience gastrointestinal problems such as constipation, and show the potential side effects of unabsorbed medication such as levodopa.

Image: 
University of Groningen

Bacteria in the small intestine can deaminate levodopa, the main drug that is used to treat Parkinson's disease. Bacterial processing of the unabsorbed fractions of the drug results in a metabolite that reduces gut motility. These findings were described in the journal BMC Biology on 20 October by scientists from the University of Groningen. Since the disease is already associated with constipation, processing of the drug by gut bacteria may worsen gastrointestinal complications.

Patients with Parkinson's disease are treated with levodopa, which is converted into the neurotransmitter dopamine in the brain. Levodopa is absorbed in the small intestine, although not all of it. Eight to ten per cent travels further to a more distal part of the gut and this percentage increases with age and administered drug dosage. In this distal part of the gut, it may encounter bacterial species such as Clostridium sporogenes, which can deaminate (remove an -NH2 group from) aromatic amino acids.

Intestinal motility

'Last year, other scientists demonstrated this bacterium's deamination activity on aromatic amino acids,' says Sahar El Aidy, assistant professor of Microbiology at the University of Groningen. El Aidy knew that the chemical structure of levodopa is similar to that of the aromatic amino acid tyrosine. 'This suggested that the bacterium could metabolize levodopa, which may affect the intestinal motility of individuals with Parkinson's disease.'

Studies by El Aidy and her research team revealed that the bacterium C. sporogenes does indeed break down levodopa into 3-(3,4-dihydroxy phenyl)propionic acid (DHPPA). 'This process involves four steps, three of which were already known. However, we uncovered the initial step, which is mediated by a transaminase enzyme.'

Coffee

Next, the team investigated whether DHPPA has an effect on motility of the distal small bowel, using an ex vivo model system for gut motility. Gut motility was induced by adding acetylcholine, after which DHPPA was added. 'Within five minutes, this decreased the motility by 69 percent, rising to 73 percent after ten to fifteen minutes.' This clearly showed that the levodopa metabolite can reduce gut contractions, which could lead to constipation.

To test whether these findings are relevant to Parkinson's disease patients, Sebastiaan van Kessel, a PhD student in El Aidy's research team, tested patients' stool samples for the presence of DHPPA. 'Because it is also produced as a breakdown product of coffee and fruits, we compared samples from patients with those from healthy controls with a comparable diet,' explains El Aidy. The result showed significantly higher DHPPA levels in stool samples of Parkinson's disease patients who were treated with levodopa. To confirm that this metabolite resulted from the presence and activity of the gut bacterium C. sporogenes, or other gut bacteria capable of anaerobic deamination, bacteria from stool samples were cultured and fed with the precursor of DHPPA. This experiment showed that the bacteria can indeed metabolize levodopa to produce DHPPA.

Inhibitors

All of these results suggest that a residue of the drug levodopa, which is not absorbed early on in the gut, can be metabolized by gut bacteria into DHPPA, which then reduces the motility of the distal gut. As constipation is already one of the symptoms of Parkinson's disease, it is unfortunate that the drug to treat the symptoms can itself further reduce gut motility due to gut bacterial metabolization. 'However, now that we know this, it is possible to look for inhibitors of the enzymes in the deamination pathway identified in our study.'

Simple Science Summary

Individuals with Parkinson's disease are treated with the drug levodopa, which is absorbed in the gut. Microbiologist Sahar El Aidy and her research team from the University of Groningen discovered that some of the levodopa is broken down by bacteria in the gut into a substance (DHPPA) that reduces gut motility. Therefore, the drug that is used to treat Parkinson's disease can cause constipation, which is unfortunate because constipation is already a symptom of the disease. However, the results of this study could inspire the discovery of inhibitors that stop the breakdown of levodopa into DHPPA.

Credit: 
University of Groningen

Criteria to predict cytokine storm in COVID-19 patients identified by Temple Researchers

(Philadelphia, PA) - Like a cold front that moves in, setting the stage for severe weather, coronavirus infection triggers showers of infection-fighting immune molecules - showers that sometimes escalate into a chaotic immune response known as a cytokine storm. About 20 to 30 percent of patients hospitalized with COVID-19 develop severe immune manifestations, in some instances leading to cytokine storm, with life-threatening organ damage and high risk of death.

Predicting which COVID-19 patients will develop cytokine storm is challenging, owing to the many variables that influence immune function. But now, in breakthrough work, researchers at the Lewis Katz School of Medicine at Temple University (LKSOM) have developed and validated predictive criteria for early identification of COVID-19 patients who are developing hyperimmune responses, raising the possibility for early therapeutic intervention.

"If we can anticipate cytokine storm, we can apply treatment sooner and possibly decrease mortality," explained Roberto Caricchio, MD, Chief of the Section of Rheumatology, Director of the Temple Lupus Program, Professor of Medicine and Microbiology and Immunology at LKSOM, and lead author on the new report.

The report, published online in the Annals of the Rheumatic Diseases, is the first to identify criteria that can be readily used in clinical practice to potentially head off the worst of the hyperimmune attack against COVID-19.

The breakthrough is the result of an extensive collaboration between researchers and clinicians across multiple departments in the Lewis Katz School of Medicine and Temple University Hospital, constituting the Temple University COVID-19 Research Group.

According to Dr. Caricchio, large numbers of COVID-19 patients have been treated at Temple since the pandemic emerged in the United States. "We have a significant amount of data in terms of variables to predict cytokine storm," he said.

Since early March, every patient admitted to Temple University Hospital (TUH) has had data on more than 60 different laboratory variables collected daily until the time of recovery or death. Among the variables measured every day are white blood cell count, metabolic enzyme activity, and markers of inflammation and respiratory function. Importantly, these markers are commonly used in hospitals across the globe and are therefore readily available.

The research group carried out statistical analyses on laboratory data for 513 COVID-19 patients hospitalized at TUH in March and April, 64 of whom developed cytokine storm. A genetic algorithm was used to identify cut-off values for each individual laboratory variable to define the predictive requirements for cytokine storm. Genetic algorithms mimic the processes of natural selection and evolution in analyzing the data, and in this case, over multiple iterations, the algorithm turned up variables indicating which patients are most likely to develop cytokine storm.
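A genetic algorithm for cut-off selection can be sketched as a loop of selection and mutation over candidate thresholds. The toy example below evolves a cut-off for a single invented inflammation marker; the population scheme, fitness function, and data are illustrative assumptions, not the study's actual algorithm:

```python
import random

def fitness(cutoff, values, labels):
    """Fraction of patients correctly classified when 'value >= cutoff'
    predicts cytokine storm (labels: 1 = storm, 0 = no storm)."""
    correct = sum((v >= cutoff) == bool(y) for v, y in zip(values, labels))
    return correct / len(values)

def evolve_cutoff(values, labels, pop_size=20, generations=40, seed=0):
    """Toy genetic algorithm: keep the fittest half of a population of
    candidate cut-offs, refill with mutated copies, and repeat --
    mimicking selection and evolution on the threshold itself."""
    rng = random.Random(seed)
    lo, hi = min(values), max(values)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, values, labels), reverse=True)
        survivors = pop[: pop_size // 2]          # selection
        pop = survivors + [c + rng.gauss(0, (hi - lo) * 0.05)
                           for c in survivors]    # mutation
    return max(pop, key=lambda c: fitness(c, values, labels))

# Toy data: a marker that is clearly elevated in storm patients.
values = [5, 6, 7, 8, 20, 22, 25, 30]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
best = evolve_cutoff(values, labels)
# Any cut-off between 8 and 20 separates the two groups perfectly.
```

In the actual study this search ran over dozens of laboratory variables at once, yielding the six criteria across the three clusters described below the analysis.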

Overall, the analyses yielded six predictive criteria comprising three clusters of laboratory results relating to inflammation, cell death and tissue damage, and electrolyte imbalance. In particular, patients in cytokine storm exhibited a proinflammatory status and elevated levels of enzymes indicating significant systemic tissue damage. Moreover, patients who met the criteria had extended hospital stays and were at increased risk of death from COVID-19, with almost half of patients who experienced cytokine storm meeting all criteria within the first day of hospitalization.

The researchers validated the criteria in a subsequent cohort of 258 patients admitted to TUH for COVID-19 infection. "The algorithm correctly predicted cytokine storm in almost 70 percent of patients," Dr. Caricchio said.

"The ability to reproduce our results in a second cohort of patients means that our group of variables are effective criteria for cytokine storm diagnosis in COVID-19 patients," he added. The final step now is to have the criteria validated by other centers where COVID-19 patients are admitted for care.

Dr. Caricchio noted that the criteria could be applied to COVID-19 patients at any hospital or level of hospitalization anywhere in the world. "This makes the criteria very valuable for guiding decisions about how to treat COVID-19 patients worldwide," he said. Applied more broadly, the criteria could greatly facilitate early diagnosis and intervention, helping save many lives.

"This was a truly collective effort between frontline clinicians, researchers, and statisticians, and the results are one of the many testaments to the exceptional work Temple University and the Temple University Health System have performed," Dr. Caricchio concluded.

Credit: 
Temple University Health System