Culture

The Lancet Public Health: Modelling study estimates impact of physical distancing measures on progression of COVID-19 epidemic in Wuhan

Study suggests extending school and workplace closures in Wuhan until April, rather than March, would likely delay a second wave of cases until later in the year, relieving pressure on health services

New modelling research, published in The Lancet Public Health journal, suggests that school and workplace closures in Wuhan, China, have reduced the number of COVID-19 cases and substantially delayed the epidemic peak--giving the health system the time and opportunity to expand and respond.

Using mathematical modelling to simulate the impact of either extending or relaxing current school and workplace closures, researchers estimate that lifting these control measures in March could lead to a second wave of cases in late August, whereas maintaining the restrictions until April would likely delay a second peak until October--relieving pressure on health services in the intervening months.

However, the authors caution that given the large uncertainties around estimates of the reproduction number (how many people an individual with the virus is likely to infect), and how long a person is infected on average, the true impact of relaxing physical distancing measures on the ongoing COVID-19 epidemic cannot be precisely predicted.

"The unprecedented measures the city of Wuhan has put in place to reduce social contacts in school and the workplace have helped to control the outbreak", says Dr Kiesha Prem from the London School of Hygiene & Tropical Medicine, UK, who led the research. "However, the city now needs to be really careful to avoid prematurely lifting physical distancing measures, because that could lead to an earlier secondary peak in cases. But if they relax the restrictions gradually, this is likely to both delay and flatten the peak."

In December 2019, a novel coronavirus (SARS-CoV-2) emerged in Wuhan, China. In mid-January 2020, schools and workplaces were closed as part of the Lunar New Year holidays. These closures were then extended to reduce person-to-person contact and prevent the spread of SARS-CoV-2.

In the study, researchers developed a transmission model to quantify the impact of school and workplace closures using information about how often people of different ages mix with each other in different locations, and to assess their effects on bringing the outbreak under control.

Using the latest data on the spread of COVID-19 in Wuhan and from the rest of China on the number of contacts per day by age group at school and work, they compared the effect of three scenarios: no interventions and no holidays (a hypothetical scenario); no physical distancing measures but the school winter break and Lunar New Year holidays as normal; and intense control measures with schools closed and only about 10% of the workforce--eg, health-care personnel, police, and other essential government staff--working during the control measures (as started in Wuhan in mid-January). They also modelled the impact of lifting control measures in a staggered way, and during different stages of the outbreak (in March and April).
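To make the modelling approach concrete, the sketch below is a minimal age-structured SEIR simulation in the same spirit: contacts are split by location, so closing schools and sending only about 10% of the workforce to work amounts to scaling the school and work contact matrices. It is not the authors' model; the age groups, contact numbers, populations and epidemiological parameters are all illustrative assumptions (written in Python).

    import numpy as np

    # Minimal sketch (not the authors' model): an age-structured SEIR in which
    # contacts are split by location, so closures are modelled by scaling the
    # school and work matrices. All numbers are illustrative assumptions.
    N = np.array([3e6, 6e6, 2e6])  # children, working-age, elderly (assumed)
    home   = np.array([[2, 3, 1], [3, 3, 1], [1, 1, 2]], float)
    school = np.array([[8, 1, 0], [1, 1, 0], [0, 0, 0]], float)
    work   = np.array([[0, 0, 0], [0, 6, 0], [0, 0, 0]], float)
    other  = np.array([[2, 2, 1], [2, 4, 1], [1, 1, 2]], float)

    def contacts(school_open, work_open):
        return home + school_open * school + work_open * work + other

    def simulate(days, school_open, work_open, beta=0.025, sigma=1/5.2, gamma=1/5.0):
        C = contacts(school_open, work_open)
        S, E, I, R = N - 10, np.zeros(3), np.full(3, 10.0), np.zeros(3)
        history = []
        for _ in range(days):                       # simple daily Euler steps
            force = beta * C @ (I / N)              # per-capita force of infection
            new_E, new_I, new_R = force * S, sigma * E, gamma * I
            S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
            history.append(I.sum())
        return np.array(history)

    baseline = simulate(300, school_open=1.0, work_open=1.0)
    closures = simulate(300, school_open=0.0, work_open=0.1)  # ~10% of workforce
    print(f"Peak infections, no measures: {baseline.max():,.0f}")
    print(f"Peak infections, closures:    {closures.max():,.0f}")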

The analyses suggest that the normal school winter break and Lunar New Year holidays would have had little impact on the progression of the outbreak had schools and workplaces opened as usual (figure 3). However, putting extreme measures in place to reduce contacts at schools and workplaces could reduce case numbers and the size of the epidemic peak, whilst also delaying the peak (figure 4). The effects of these distancing measures seem to vary by age, with the greatest reductions in new cases among school children and the elderly, and the lowest among working-age adults (figures 4 and 5). However, once these interventions are relaxed, case numbers are expected to rise.

Further analysis suggests that physical distancing measures are likely to be most effective if the staggered return to work commences at the beginning of April--potentially reducing the median number of new infections by 24% up to the end of 2020, and delaying a second peak until October.

"Our results won't look exactly the same in another country, because the population structure and the way people mix will be different. But we think one thing probably applies everywhere: physical distancing measures are very useful, and we need to carefully adjust their lifting to avoid subsequent waves of infection when workers and school children return to their normal routine. If those waves come too quickly, that could overwhelm health systems", says co-author Dr Yang Liu from London School of Hygiene & Tropical Medicine. [1]

Despite these important findings, the study has some limitations, including that it assumed no difference in susceptibility between children and adults, and that the extreme distancing measures used in Wuhan may have increased transmission within households. Finally, the model did not capture individual-level differences in contact rates, which could be important in super-spreading events, particularly early on in an epidemic.

Writing in a linked Comment, Dr Tim Colbourn from University College London, UK (who was not involved in the study) says: "The study by Kiesha Prem and colleagues in The Lancet Public Health is crucial for policy makers everywhere, as it indicates the effects of extending or relaxing physical distancing control measures on the coronavirus disease 2019 (COVID-19) outbreak in Wuhan, China."

He continues: "Given many countries with mounting epidemics now potentially face the first phase of lockdown, safe ways out of the situation must be identified... New COVID-19 country-specific models should incorporate testing, contact tracing, and localised quarantine of suspected cases as the main alternative intervention strategy to distancing lockdown measures, either at the start of the epidemic, if it is very small, or after the relaxation of lockdown conditions, if lockdown had to be imposed, to prevent health-care system overload in an already mounting epidemic."

Credit: 
The Lancet

Thirty risk factors found during and after pregnancy for children developing psychosis

More than 30 significant risk factors have been identified for the development of psychotic disorders in offspring in research led by the NIHR Maudsley BRC. It is the first comprehensive meta-analysis of pre- and perinatal risk factors for psychosis in nearly 20 years.

These prenatal and perinatal environmental risks, meaning risks during pregnancy and in the first seven days after birth, have a significant effect on the likelihood of a child developing psychosis. As a result, researchers suggest women at risk should be screened early in their pregnancy so that those with these identified risks can be given additional support. The findings have been published today (Tuesday 24 March 2020) in The Lancet Psychiatry.

Gathering data from 152 studies published between 1977 and July 2019 and looking at 98 factors, researchers identified 30 significant risk factors and five protective factors.

Psychotic disorders are severe mental illnesses which cause abnormal thoughts, such as hallucinations or delusions, but they can affect each person in different ways. In 2014, a survey found that 6% of people in England said they had experienced at least one symptom of psychosis.

Factors can be split into four categories: parental and familial, pregnancy, labour and delivery, and foetal growth and development. Significant protective factors were mothers being aged between 20 and 29, first-time mothers and higher birthweights in babies.

For risk factors, previous mental health conditions in either parent, nutritional deficiencies, low birthweight and giving birth in the colder months were found to increase the probability of a child developing psychosis. Age-related risk factors were either parent being under 20, mothers aged 30-34 and fathers over 35. Researchers also found that a lack of prenatal care visits poses a risk and marked this as a potential risk factor to combat with outreach campaigns.

This study confirmed the importance of factors during labour and delivery, such as a foetus' brain being deprived of oxygen and ruptured membranes, which are historically among the most consistently implicated risk factors. Conversely, despite previous studies focusing on infections during pregnancy as a cause of psychosis, this study found significant associations only for HSV-2 and maternal infections 'not otherwise specified', and found no indication of a significant effect for influenza.

This study will help guide future research in the field of psychosis, as well as form the basis for psychosis risk prediction models which could advance preventative strategies.

Dr Paolo Fusar-Poli, Reader in Psychiatry and Youth Mental Health at the Institute of Psychiatry, Psychology & Neuroscience (IoPPN), King's College London, said: 'This study is confirming that psychotic disorders originate in the early phases of life with the accumulation of several environmental risk factors during the perinatal and prenatal phases. The results of this study will advance our ability to detect individuals at risk of developing psychosis, predict their outcomes and eventually offer them preventive care.'

Whilst this study focused on environmental factors, there may also be genetic or epigenetic risk factors implicated in the onset of psychosis.

Credit: 
NIHR Maudsley Biomedical Research Centre

Wuhan study shows lying face down improves breathing in severe COVID-19

image: Prone ventilation shown to improve breathing in severe COVID-19 patients.

Image: 
ATS

March 24, 2020-- In a new study of patients with severe COVID-19 (SARS-CoV-2) hospitalized on ventilators, researchers found that lying face down was better for the lungs. The research letter was published online in the American Thoracic Society's American Journal of Respiratory and Critical Care Medicine.

In "Lung Recruitability in SARS--CoV-2 Associated Acute Respiratory Distress Syndrome: A Single-Center, Observational Study," Haibo Qiu, MD, Chun Pan, MD, and co-authors report on a retrospective study of the treatment of 12 patients in Wuhan Jinyintan Hospital, China, with severe COVID-19 infection-related acute respiratory distress syndrome (ARDS) who were assisted by mechanical ventilation. Drs. Qiu and Pan were in charge of the treatment of these patients, who were transferred from other treatment centers to Jinyintan Hospital.

A majority of patients admitted to the ICU with confirmed COVID-19 developed ARDS.

The observational study took place during a six-day period the week of Feb. 18, 2020.

"This study is the first description of the behavior of the lungs in patients with severe COVID-19 requiring mechanical ventilation and receiving positive pressure," said Dr. Qiu, professor, Department of Critical Care Medicine, Zhangda Hospital, School of Medicine, Southeast University, Nanjing, China. "It indicates that some patients do not respond well to high positive pressure and respond better to prone positioning in bed (facing downward)."

The clinicians in Wuhan used an index, the Recruitment-to-Inflation ratio, that measures the response of lungs to pressure (lung recruitability). Members of the research team, Lu Chen, PhD, and Laurent Brochard, PhD, HDR, from the University of Toronto, developed this index prior to this study.
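As a rough illustration of how such an index can be computed (this follows the general definition published by Chen, Brochard and colleagues; both the formula and the numbers below should be read as assumptions for demonstration, not as data or code from this study): the change in end-expiratory lung volume across a PEEP step is split into the part explained by inflating already-open lung and the part attributed to newly recruited lung, and the compliances of the two are compared.

    # Illustrative sketch of a recruitment-to-inflation (R/I) calculation.
    # The exact formula and all values are assumptions for demonstration only.
    peep_high, peep_low = 15.0, 5.0   # cmH2O, before/after a single PEEP step
    delta_eelv = 600.0                # mL, measured change in end-expiratory lung volume
    c_low = 30.0                      # mL/cmH2O, respiratory-system compliance at low PEEP

    predicted_inflation = c_low * (peep_high - peep_low)     # volume explained by inflation alone
    recruited_volume = delta_eelv - predicted_inflation      # volume attributed to recruitment
    c_recruited = recruited_volume / (peep_high - peep_low)  # compliance of the recruited lung
    ri_ratio = c_recruited / c_low
    print(f"R/I ratio = {ri_ratio:.2f}")  # higher values suggest more recruitable lung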

The researchers assessed the effect of body positioning. Prone positioning was performed for 24-hour periods in which patients had persistently low levels of blood oxygenation. Oxygen flow, lung volume and airway pressure were measured by devices on patients' ventilators. Other measurements, including the aeration of the airway passages, were also taken, and calculations were performed to assess recruitability.

Seven patients received at least one session of prone positioning. Three patients received both prone positioning and ECMO (life support, replacing the function of heart and lungs). Three patients died.

Patients who did not receive prone positioning had poor lung recruitability, while alternating supine (face upward) and prone positioning was associated with increased lung recruitability.

"It is only a small number of patients, but our study shows that many patients did not re-open their lungs under high positive pressure and may be exposed to more harm than benefit in trying to increase the pressure," said Chun Pan, MD, also a professor with Zhongda Hospital, School of Medicine, Southeast University. "By contrast, the lung improves when the patient is in the prone position.

"Considering this can be done, it is important for the management of patients with severe COVID-19 requiring mechanical ventilation."

The team consisted of scientists and clinicians affiliated with four Chinese and two Canadian hospitals, medical schools and universities.

Credit: 
American Thoracic Society

Many buds to a blossom: A synchronization approach to sensing using many oscillators

image: Each sensor node consists of a circuit made of just a photovoltaic source, a variable resistor, a capacitor, two inductors and a bipolar transistor (top). One inductor is realized as a printed layer onto the circuit board and used for coupling (bottom, left). The overall design is quite compact, with the majority of the 32 × 32 mm board area taken up by the solar cells.

Image: 
Minati L

When one thinks of measuring something, the first idea that comes to mind is that of taking a tool, such as a caliper, reaching out to a specific place or thing, and noting down a number. There is no doubt that, in many domains of engineering and science, taking reliable measurements at well-defined locations is fundamentally important. However, this is changing in today's connected world, as we attempt to distribute technology everywhere to improve sustainability. One quickly emerging need is that of efficiently making measurements over relatively large surfaces or objects, for example, comprehensively assessing the soil water content over an entire cultivated plot, checking for cracks throughout the whole volume of a concrete pillar, or sensing tremors across all limb segments in a patient.

In such cases, a measurement taken at a single location is not enough. There is a need to use many sensors, scattered approximately evenly over the area or object of interest, giving rise to a set of techniques termed "distributed sensing." However, this technique has a potential problem: reading out data from each individual sensor may require considerable infrastructure and power. Especially in situations where only a reliable average or maximum value needs to be computed, it would be preferable if sensors could simply interact between themselves as a population, effectively "coming to an agreement" over the desired statistics, which could then be read out in a way that does not require interrogating each node individually.

However, implementing this electronically is not easy. Digital radio and processing technology is always an option, but is very demanding in terms of size, power, and complexity. An alternative approach is to rely on analog oscillators of a peculiar type, which are very simple but endowed with a remarkable ability to generate complex behaviors, separately and collectively: the so-called "chaotic oscillators." Now, researchers in Japan and Italy propose a new approach to distributed measurement based on networks of chaotic oscillators [1]. This research was the result of a collaboration between scientists from the Tokyo Institute of Technology, in part funded by the World Research Hub Initiative, the Universities of Catania and Trento, Italy, and the Bruno Kessler Foundation, also in Trento, Italy.

The research team started from the idea, introduced in a previous study [2], that coupling chaotic oscillators, even very weakly, as in the case of over-the-air coupling through inductor coils or other antennas, makes it easy for them to create meaningful collective activity. Surprisingly, similar principles seem to arise in networks of neurons, people, or, indeed, electronic oscillators, wherein the activity of their constituents is synchronized. By rendering each oscillator responsive to a particular physical magnitude, such as light intensity, movement, or the opening of a crack, it is possible to engender a "collective intelligence" via synchronization, one that responds to changes in the aspect of interest while remaining robust against perturbations such as sensor damage or loss. This is similar to the functioning principles of biological brains.

The key to realizing the proposed circuit was to start from one of the smallest chaotic oscillators known, involving just a single bipolar transistor, two inductors, one capacitor, and one resistor. This circuit, introduced four years ago by Dr. Ludovico Minati, the lead author of the study, and co-workers, was remarkable for its rich behaviors that contrasted with its simplicity [3]. Such a circuit was modified so that its power source would be a compact solar panel rather than a battery, and so that one of its inductors could enable coupling via its magnetic field, effectively acting as an antenna. The resulting prototype device was found to be able to reliably produce chaotic waves depending on the level of light. Moreover, bringing multiple devices closer would cause them to generate consonant activity in a manner representative of the average light level. "Effectively, we could do spatial averaging over-the-air just with a handful of transistors. That's incredibly fewer compared to the tens of thousands that would be required to implement a digital processor at each node," explain Dr. Hiroyuki Ito, head of the laboratory where the device prototype was built, and Dr. Korkut Tokgoz from the same laboratory. The circuit design and results are carefully detailed in the article that recently appeared in the journal IEEE Access [1].
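As a loose illustration of the averaging-by-synchronization principle, rather than of the Tsubomi circuit itself, the sketch below couples a handful of chaotic Rössler oscillators through a shared mean field, with one parameter per node standing in for its local light level; all values are invented. Even weak coupling pushes the nodes toward collective dynamics shaped by the whole set of parameters, which is the kind of behavior the sensor network exploits.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Generic demonstration (not the actual circuit): N chaotic Roessler
    # oscillators, each with one parameter set by its local "light level",
    # weakly coupled through the mean of their x variables.
    N = 5
    light = np.array([0.16, 0.18, 0.20, 0.22, 0.24])  # assumed sensor readings
    coupling = 0.05                                   # weak, "over-the-air" style

    def rhs(t, state):
        x, y, z = state.reshape(3, N)
        mean_x = x.mean()                             # shared field each node feels
        dx = -y - z + coupling * (mean_x - x)
        dy = x + light * y                            # light level enters the 'a' parameter
        dz = 0.2 + z * (x - 5.7)
        return np.concatenate([dx, dy, dz])

    state0 = np.random.default_rng(0).uniform(-1, 1, 3 * N)
    sol = solve_ivp(rhs, (0, 500), state0, max_step=0.05)
    x = sol.y[:N, sol.t > 300]                        # discard the transient
    print("per-node time averages:", x.mean(axis=1).round(3))
    print("mean-field amplitude:  ", x.mean(axis=0).std().round(3))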

But perhaps even more remarkable was the discovery that the best way of harvesting information from these nodes was not just listening to them, but gently stimulating them with an "exciter" signal, generated by a similar circuit and applied using a large coil. Depending on many factors, such as coil distance and circuit settings, it was possible to create various behaviors in response to the level and pattern of illumination. In some situations, the effect was increased synchronization; in others, reduced synchronization. Similarly, there were cases in which one sensor would "pull" the entire network towards irregular, chaotic oscillation, and others in which the opposite happened. Most importantly, accurate and robust measurements could be obtained from the sensors via the activity of the "exciter" circuit, acting as a proxy. Because providing the exciter signal allows observing many aspects of dynamics otherwise "hidden" inside the sensor nodes, the researchers felt that it resembled the process of watering flower buds so they can open up and form a blossom (a collective feature). The sensor and exciter circuits were respectively dubbed "Tsubomi" and "Ame", which mean flower bud and rain in Japanese. "Because it is easy to apply this approach with many sensors interacting collectively on the scale of a human body, in the future we would like to apply this new technique for reading out subtle movements and biological signals," explain Prof. Yasuharu Koike and Dr. Natsue Yoshimura, from the Biointerfaces laboratory where some proof-of-concept tests were carried out.

"This circuit draws its beauty from a truly minimalistic design gently attuned for operating collectively in a harmonious manner, giving rise to something which is so much more than the individual components, like how a myriad of small flowers creates a blossom," says Dr. Ludovico Minati, whose entire research is now dedicated to emergence in nonlinear electronic circuits. This, he explains, is yet another example of how Nature can inspire and guide new engineering approaches, less grounded in prescriptive specifications and more focused on emergent behaviors. The difficulties encountered while applying this approach remain considerable, but the potential rewards are enormous in terms of realizing complex functions in the most economical and sustainable manner. "Multidisciplinary integration is truly the key to success of precursory research such as this one," notes Prof. Mattia Frasca from the University of Catania, Italy, whose work on complex circuits and networks was a fundamental basis for this collaborative research.

Credit: 
Tokyo Institute of Technology

Simulated 'frankenfish brain-swaps' reveal senses control body movement

video: Purple and mustard dots mark the peaks and troughs of traveling waves of the ribbon fin from the tail and head, respectively. The blue arrow y(t) is the movement of the fish and green arrow u(t) is the movement of the nodal point where the two traveling waves meet.

Image: 
NJIT/JHU

Plenty of fictional works like Mary Shelley's Frankenstein have explored the idea of swapping out a brain from one individual and transferring it into a completely different body. However, a team of biologists and engineers has now used a variation of the sci-fi concept, via computer simulation, to explore a core brain-body question.

How can two people with vastly different-sized limbs and muscles perform identical fine-motor tasks equally well, such as hitting a baseball or sending a text? Is it a unique tuning of our brain and nervous system to the rest of our body that controls these complex motions, or is feedback from our senses taking charge?

In a new study featured in the journal eLife, researchers have computationally modeled the various brains and bodies of a species of weakly electric fish, the glass knifefish (Eigenmannia virescens), to successfully simulate "fish brain transplants" and investigate.

The team's simulations, which involved swapping models of the fishes' information processing and motor systems, revealed that after undergoing a sudden jump into the different body of their tank-mate, the "Frankenfish" quickly compensated for the brain-body mismatch by heavily relying on sensory feedback to resume control of fine-motor movements required for swimming performance.

Researchers say the findings provide new evidence that animals can lean on feedback from the senses to aid the interplay of the brain, body and stimulus from their external environment in guiding locomotor movement, rather than depending on precise tuning of brain circuits to the mechanics of the body's muscles and skeleton. The team also says the findings reinforce the case for the future design of advanced robotics that employ robust sensory feedback control systems; such systems may better adapt to unexpected events in their environment.

"What this study shows is the deep role of sensory feedback in everything we do," said Eric Fortune, professor at NJIT's Department of Biological Sciences and author of the study, funded by the National Science Foundation. "People have been trying to figure out how the animal movement works forever. It turns out that swapping brains of these fishes is a great way to address this fundamental question and gain a better understanding for how we might control our bodies."

"The Frankenfish experiment demonstrates a common idea in control theory, which is that many of the details of how sensation is converted to action in a closed feedback loop don't matter," said Noah Cowan, professor at John's Hopkins University's (JHU) Department of Mechanical Engineering, co-author and longtime collaborator of Fortune. "While not any random brain would work, the brain has a lot of freedom in its control of the body."

In the study, the team set out to specifically explore how behavioral performance of the fish might change if they experimentally altered the fishes' connection between the controller (the sensory systems and neural circuits used to process information and generate motor commands) and the plant (the musculoskeletal components that interact with the environment to generate movement).

Using experimental tanks outfitted with high-res cameras in the lab, the researchers tracked the subtle movements of three glass knifefish of different shapes and sizes as they shuttled back and forth within their tunnel-like refuges -- a common behavior among electric fish that includes rapid and nuanced adjustments to produce sensory information that the fish need for keeping a fixed position within the safety of their hidden habitats, also known as station-keeping.

The team collected various sensory and kinematic measurements linked to the exercise -- most notably, the micromovements of the fishes' ribbon-like fins that are critical to locomotor function during shuttling activity -- and applied the data to create computer models of the brain and body of each fish.

"We took advantage of the animal's natural station-keeping behavior using a novel virtual reality setup, where we can control the movements of the refuge and record the movements of the fish in real time," explained Ismail Uyanik, assistant professor of engineering at Hacettepe University, Turkey, and former postdoctoral researcher involved in the study at NJIT. "We showed that movements of the ribbon fin could be used as a proxy of the neural controller applied by the central nervous system. The data allowed us to estimate the locomotor dynamics and to calculate the controllers that the central nervous system applies during the control of this behavior."

"The ribbon fin was the key to our success in modeling the motor system, which others have been trying to do using other sophisticated techniques for decades," said Fortune. "We were able to track this virtually invisible fin and the counter-propagating waves it creates in slow motion using our cameras and machine-learning algorithms. ... Without those technologies it wouldn't have been possible.

"We logged nearly 40,000 ribbon-fin movements per fish during their shuttling to get the data we ended up using to help build models of each fish's locomotor plant and controller."

With their models, the team began computationally swapping controllers and plants between the fish, observing that the brain swaps had virtually no effect on the models' simulated swimming behaviors when they included sensory feedback data. However, without the sensory feedback data included in the models, the fishes' swimming performance dropped off completely.

"We found that these fish perform badly... They just can't adjust to having the wrong brain in the wrong body. But once you add feedback to the models to close the loop, suddenly they continue their swimming movements as if nothing happened. Essentially, sensory feedback rescues them," explained Fortune.

The team says the findings could help inform engineers in the design of future robotics and sensor technology, and similar further studies of the electric fish's ribbon fin may improve our understanding of muscle physiology and complex statistical relations between muscle activations that allow humans to outperform robots when it comes to controlling body movement.

"Robots are a lot better than humans at generating specific velocities and forces that are far more precise than a human, but would you rather shake hands with a human or robot? Which is safer?" said Fortune. "The problem is of control. We want to be able to make robots that perform as well as humans, but we need better control algorithms, and that's what we are getting at in these studies."

Credit: 
New Jersey Institute of Technology

Unequal access codes

image: The proportion of students choosing the academic path after the 9th grade in 2013, by region (%)

Image: 
Zakharov, Adamovich, Economic Sociology

Researchers at the HSE Institute of Education have used regional data to describe, for the first time in Russia, how inequality in access to education affects different parts of the Russian Federation. The research findings reveal that the key determining factors are the local economy and the proportion of people with a university degree: urbanised regions with well-developed economies and educated inhabitants are more likely to have good-quality schools, with a large proportion of students scoring highly in the Unified State Exam and going on to university. In contrast, poorer regions with low human capital see many of their school students drop out after the 9th grade, limiting their chances of further education.

Factors Determining Differences

Multiple factors determine whether or not young people have access to good education. Their own abilities and motivation certainly play a role, as well as family background. Indeed, parents' educational, financial and occupational status and cultural capital have all been found to 'program' their children's academic success.

According to a recent study, children raised in families with a high occupational and educational status have double the chances of enrolling in a prestigious university compared to their peers from low-resource families. Teens from more advantaged families are often in a better position, since their parents tend to value good education and invest in their children's schooling. In contrast, students from less educated families, although they may perform fairly well academically, often make no attempt to enter a prestigious university, because they lack parental support and tend to underestimate their own capabilities.

In addition to this, teachers' skills and school characteristics certainly play a role. Many students attending ordinary general schools switch to vocational colleges after the 9th grade. In contrast, students in higher-status schools such as gymnasiums, lyceums and schools offering advanced courses in certain subjects are much more likely to continue through to the 11th grade and then go on to university.

Andrey Zakharov and Kseniya Adamovich examined regional socioeconomic differences for their role in either enhancing or limiting access to educational resources, determining students' choice between the academic track (i.e. eleven grades of general school plus university) and the vocational track (i.e. nine grades of general school plus vocational college/technical school), and attaining certain learning outcomes (reflected in USE scores in Russian and mathematics).

The research is based on Rosstat's 2013-2015 regional statistics and on data available from federal and regional departments of education.

Baseline Imbalances

The main reason why some regional educational systems perform better than others is the broader disparity across Russian regions in terms of economic development (measured by per capita GRP, gross regional product), urbanisation (percentage of urban population), human capital, and other indicators.

Thus, in Moscow, 48% of inhabitants have completed higher education, compared to 22% in Chechnya. In terms of economic development, the inequalities across Russian regions are even greater, with the per capita GRPs of the richest 10% of regions exceeding those of the poorest 10% by nearly 4.5 times.

Similarly, the financing of education varies from 40,400 to 114,000 roubles per student per year depending on whether a region is rich or poor.

There is little difference across regions in terms of student coverage by lyceums, gymnasiums and schools offering advanced courses in certain subjects, which is 10% or less for each type of such elite schools. It is common, however, for ordinary schools to have classes with advanced curricula in certain subjects, and as many as 25% of school students attend such classes in some parts of Russia, such as Ivanovo, Murmansk and Kemerovo regions.

However, the proportion of students who drop out after the 9th grade varies from more than 60% in Chechnya and the Orenburg and Astrakhan regions to less than 40% in Moscow, St. Petersburg, Kalmykia and Tuva.

Access to quality education is linked to the state of regional economies. Having analysed regional per capita funding of schools, teacher expertise and availability of advanced schooling options, the authors found that schools in highly urbanised regions with well-developed local economies and better educated populations tend to be more generously financed and to employ better qualified teachers - i.e. those having a high qualification category and five or more years of work experience; such regions also tend to have plenty of elite schools and courses available.

'These regions include Moscow, St. Petersburg and Tatarstan and, to a lesser extent, Novgorod and Nizhny Novgorod regions', according to Adamovich.

In contrast, the education prospects are not so good for young people in depressed regions with a less educated population, due to the limited number of advanced school courses and highly qualified teachers, resulting in fewer students staying in school for the 10th and 11th grades.

According to the researchers, these 'outsider regions' include the republics of Altai and Tuva and certain parts of the North Caucasus.

Adults' Education Affects Children's Prospects

The proportion of residents with higher education appears to be the single most important characteristic positively associated with the proportion of students attending elite schools such as lyceums, gymnasiums and those with a strong focus on humanities. The availability of highly qualified teachers also tends to be greater in regions with higher overall human capital.

According to the researchers, this association between human capital and access to quality schooling may be due to the fact that educated families are more likely to value higher education as a way 'to maintain and perhaps enhance the family's socioeconomic status'. On the other hand, well-educated and affluent parents can afford to hire private tutors and pay for university preparation courses for their child. 'By doing so, they create a demand for quality education and advanced curricula', Adamovich notes.

Accordingly, regions with more developed economies tend to have a higher proportion of students attending elite schools which offer advanced courses in humanities and mathematics - or attending advanced classes in ordinary schools.

The choice of educational path is also largely determined by factors such as human capital and urbanisation: the larger a region's urban and university-educated population groups, the greater the proportion of school students likely to choose the academic track and go on to university.

The researchers found that regional economies, in particular the per capita financing of schools, have direct implications for the average Unified State Exam (USE) score in mathematics (but no statistically significant correlation was found for USE results in Russian).

In addition to this, the USE results both in mathematics and Russian were found to be positively associated with the proportion of students attending lyceums and gymnasiums, while the results in mathematics were better in regions with more schools offering advanced courses in science and engineering, and the results in Russian were positively associated with the proportion of schools focusing on the humanities.

The average USE score in the Russian language was also found to be higher in regions with more school dropouts after the 9th grade, a pattern explained by the fact that only well-performing students in such regions choose to go on to the 10th and 11th grades.

Double Advantage - Double Deficit

The researchers note that socioeconomic differences, as well as regional disparities in access to education, tend to exacerbate already existing inequalities.

As a result, young people in more affluent regions enjoy a double advantage: their parents' human capital and higher income lead to investment in children's education, and institutional access to elite education resources is better.

In contrast, young people in depressed regions face the double disadvantage of having lower-income, less educated parents and limited opportunities of high-quality schooling.

Stratification and Sorting

'Greater access to advanced education resources in regions with higher human capital confirms the validity of the effectively maintained inequality theory', the researchers argue, 'while the high school dropout rate after the 9th grade in regions with lower human capital is consistent with the maximally maintained inequality theory'.

According to the latter theory, differences in access to a certain level of education are maintained as long as there is competition for this type of education, and children from higher socioeconomic status backgrounds are more likely to win this competition. If, however, access to a certain level of education is universal, inequalities are transferred to the next level of the education system, e.g. to higher education.

As a manifestation of effectively maintained inequality, the formally universal access to education comes with substantial differences, e.g. in school curricula, for different social strata. It is no accident that some researchers describe schools as 'sorting machines' which divide students into categories based not only on their academic performance but also on their parents' socioeconomic status. Resource-rich families tend to choose more prestigious schools for their children, a choice which is likely to result in better education and successful careers. In contrast, children of poorer parents are less likely to benefit from schooling as a social elevator.

The resulting situation 'cannot be accepted as natural from the education policy perspective', according to Zakharov and Adamovich, who emphasise the importance of equal access to resources for creating a universal education space and conclude that a country as big as Russia 'needs to find ways to smooth out the regional imbalances in access to education'.

Credit: 
National Research University Higher School of Economics

A study investigating genetic mechanisms underlying resistance to leishmaniasis

image: Diagram summarizing the methodology and results of the experiment.

Image: 
Juliana Ide Aoki

In an article published in Scientific Reports, researchers affiliated with the University of São Paulo (USP) in Brazil describe experiments performed with mice that show how genetic factors can determine whether an individual is susceptible or resistant to leishmaniasis. According to the authors, the results will contribute to a better understanding of leishmaniasis in humans and help to clarify why only some infected individuals develop the disease.

Leishmaniasis is caused by protozoan parasites of the genus Leishmania. Depending on the species involved, infection leads to skin ulcerations (cutaneous leishmaniasis) or lesions in internal organs such as the liver and spleen (visceral leishmaniasis). The parasites are transmitted to humans and other mammals by insect bites. There are no available vaccines for the disease, and its treatment is lengthy, expensive and complicated.

"We set out to observe gene regulation in our study model in order to investigate the strategy used by an organism capable of resisting infection and find out how it differs from a susceptible organism in this regard," said Lucile Maria Floeter-Winter principal investigator of the project in the Physiology Laboratory of the University of São Paulo's Bioscience Institute (IB-USP).

The study was conducted as part of two Thematic Projects supported by São Paulo Research Foundation - FAPESP: "Biochemical, physiological and functional genomic studies of Leishmania-macrophage interaction" and "The Leishmania-host relationship from the 'omics' perspective".

The researchers used two mouse strains: BALB/c, which is naturally sensitive to the parasite, and C57BL/6, which is naturally resistant. They infected both strains with Leishmania amazonensis, the species that causes cutaneous leishmaniasis. In the article, they stress that the identification of molecular markers for resistance to the disease can be useful for diagnosis, prognosis and clinical intervention.

Gene expression mapping

Leishmania species have developed the strategy of infecting host macrophages, which are among the immune cells responsible for combatting the parasite. Infected macrophages may burst, releasing the multiplying protozoans, which then infect other nearby macrophages. At this point, the infection is considered established. Alternatively, the parasites are killed by the macrophages, and the disease fails to progress.

To create an experimental model of infection by the parasite, the researchers took cells from the bone marrow of the mice, differentiated these cells into macrophages, and infected them. After four hours, the RNA of the infected macrophages was extracted and sequenced. Gene expression mapping based on the analysis of the transcriptomes of both murine strains showed which genes were expressed during the first four hours of infection in each group.

"We focused on the start of the infection, when the response mechanisms are being triggered," said Juliana Ide Aoki, a researcher at IB-USP and lead author of the article, supported by a postdoctoral scholarship)from FAPESP.

"When the organism is infected, it alerts the macrophages to produce a group of molecules that are responsible for responding to the infection. Our analysis of the transcriptome showed which segments of the genome understood this signal and effected a response to combat the parasite," Floeter-Winter said.

A total of 12,641 genes were expressed by macrophages cultured in the laboratory, but only 22 genes were found to be expressed in response to the parasite in the susceptible murine strain (BALB/c), compared with 497 in the resistant strain (C57BL/6).

"This is the key finding from our study. The susceptible organism activated only a few genes and was unable to stop the infection. The resistant organism activated a large number of genes, inducing the production of molecules to control the infection, and succeeded in doing so," Floeter-Winter said

"Our results show that the development of the disease depends on the genetics of the host and not just of the parasite. This may explain why the infection progresses in some patients whereas others are resistant."

According to Floeter-Winter, the mice received the same treatment and food, ruling out the influence of environmental factors on the results. Future research may be aimed at determining why the resistant strain (C57BL/6) activates more genes to combat the infection.

The identification of molecules present in an infected organism that is capable of controlling the infection will help scientists to suggest markers for the evaluation and determination of the prognosis of human patients. "For example, it would be possible to see which molecules an infected patient expresses and to predict whether the infection will last a relatively long time, whether it will be severe, and whether the patient is producing molecules that combat it," she said.

Furthermore, the study's scientific contribution can be extrapolated to other aspects of the disease. "A better understanding of how Leishmania establishes infection extends our knowledge of response mechanisms, which can be used in treating other infectious diseases and can help other researchers conduct studies with different models," Floeter-Winter said.

The recently published study is part of a suite of research projects led by Floeter-Winter in pursuit of deeper insights into the interactions between these parasites and their hosts, such as a project in which potential targets for the treatment of leishmaniasis were identified (read more at: agencia.fapesp.br/26308).

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Shining light on sleeping cataclysmic binaries

Almost 35 years ago, scientists made the then-radical proposal that colossal hydrogen bombs called novae go through a very long-term life cycle after erupting, fading to obscurity for hundreds of thousands of years and then building back up to become full-fledged novae once more. A new study is the first to fully model this life cycle and incorporate all of the feedback factors now known to control these systems, backing up the original prediction while bringing new details to light. Published this week in the journal Nature Astronomy, the study confirms that the novae we observe flashing throughout the universe represent just a few percent of these cataclysmic variables, as they are known, with the rest "hiding" in hibernation.

"We've now quantified the suggestion from decades ago that most of these systems are deeply hibernating, waiting to wake up, and we haven't yet identified them," said Michael Shara, a curator in the Department of Astrophysics at the American Museum of Natural History who was the lead author on the original study and is one of the coauthors of the new work. "The novae we observe are just the tip of the iceberg. We've been wrong in thinking that the novalike binaries and dwarf novae that make novae represent everything out there. The systems that make novae are much more common than we've thought."

Cataclysmic binary systems occur when an ordinary star--typically a red dwarf somewhat smaller than our Sun--is being cannibalized by a white dwarf, a dead star. The white dwarf builds up a critical layer of hydrogen that it steals from the red dwarf, and that hydrogen explodes as a gigantic bomb. This explosion produces a burst of light that makes the white dwarf star up to 1 million times brighter than the Sun for a period of time that can last from a few days to a few months.

Shara's original work proposed that, after an eruption, a nova becomes "nova-like," then a dwarf nova, and then, after a hibernation as a so-called detached binary, it comes back to being a dwarf nova, nova-like, and then a nova, repeating the cycle over and over again, up to 100,000 times over billions of years. "In the same way that an egg, a caterpillar, a pupa, and a butterfly are all life stages of the same organism, these binaries are all the same objects seen at different phases of their lives," Shara said.

For the new study, Shara and his colleagues at Ariel University and Tel-Aviv University in Israel built a set of simulations to follow thousands of novae eruptions and their effects on their red dwarf companions. The goal was to show, quantitatively, that the evolution of cataclysmic binary systems is cyclical and driven by feedback between the two stars.

"There just wasn't the computing power to do this 30 years ago, or 20 years ago, or even 10 years ago," Shara said.

They found that cataclysmic binaries do not simply alternate through each of the four states--nova, nova-like, dwarf nova, and detached binary--their whole lives. Newborn binaries, during the first few percent of a system's life, only alternate between nova and nova-like states. Then, for the next 10 percent of their lifetimes, the binaries alternate through three states: nova, nova-like, and dwarf nova. For the remaining 90 percent of their lifetimes, they continuously cycle through all four states.

Further, the study showed that almost all the novae we observe today occur near the beginning of a binary system's life as opposed to the end--at a rate of about once every 10,000 years rather than once every few million years.

"Statistically, that means that the systems we observe--the ones that are popping off all of the time--are the newborn ones," Shara said. "And that's just about 5 percent of the total binaries out there. The vast majority are in the detached state, and we've been ignoring them because they're so faint and common. We know that they're there. Now we just have to work hard to find them and connect them to novae."

Credit: 
American Museum of Natural History

What motivates sales of pollinator-friendly plants?

An analysis out of the University of Georgia details the relationship between consumer awareness and the attentiveness and care given to pollinator-friendly plant purchases.

Benjamin Campbell and William Steele studied the decline of pollinators, the factors that can be recognized as potential contributors to said decline, and a common-sense approach for combatting it. Their findings are revealed in their article "Impact of Information Type and Source on Pollinator-friendly Plant Purchasing," found in the open-access journal HortTechnology, published by the American Society for Horticultural Science.

The number of pollinators has been reported to be decreasing for the past several decades. Among other reasons, climate change, pesticides, and loss of habitat have been noted as probable contributing factors. As agricultural producers seek to ensure pollinators are accessible to pollinate their crops, this research attempts to scientifically detail the reasons behind the steady loss occurring over the past several decades.

Pollinator issues have emerged as critical within public awareness. As a result, many consumers and activists have advocated for the removal of commonly used pesticides. Some retail outlets have banned their suppliers from using neonicotinoids on plants to be sold in their stores. The impact of pesticides on pollinator decline and the consumer response to this impact is of crucial importance.

Although various media and activist groups provide information (positive, neutral, and negative) about the impact of pesticides on pollinators, little is known about how consumer behavior is altered by such information. Campbell and Steele determined how both information source and information type affect a consumer's decision to purchase pollinator-friendly plants in the future.

Existing literature suggests there is no concrete evidence that correct use of neonicotinoids is the main cause of pollinator decline. However, consumers face a barrage of information from various groups that often contain conflicting messages about the effects of pesticides on pollinators. The decision about which sources to trust can be difficult. Often the level of trust an individual consumer has about a news source and/or type of information (neutral or negative) is dependent upon varying characteristics of that individual.

The researchers hypothesized that information from seemingly "unbiased" sources, such as from universities and the federal government, would have a significant impact compared with providing no information, whereas information from seemingly "biased" sources, such as from the green industry and environmental activist groups, would not be different from the no-information designation.

Furthermore, they hypothesized that linking pesticides to pollinator decline would increase the likelihood of a consumer purchasing a pollinator-friendly plant compared with not providing any information.

By understanding the impact of messaging from different sources, industry associations, policymakers, and stakeholders can better anticipate how their messaging will influence consumer decision making, as well as policies at the state and regional levels.

Campbell and Steele gained consumer information by using an online survey designed to reveal many aspects about potential consumers, the degree of trust carried by various information sources, and information types rated for their probable level of success. A total of 785 respondents completed their survey.

They discovered that, to consumers, not all sources of information are the same. The level of trust, or lack thereof, that consumers place in a source is influenced by a multitude of factors. Firms that can identify which sources of information are trusted by their consumer base and which are not can develop more effective marketing strategies.

The researchers surmise that providing information about pollinator decline and its harmful consequences should increase the demand for pollinator-friendly plants, although most likely at the expense of nonpollinator-friendly plants.

The results show that information from the federal government, nursery/greenhouse industry associations, and environmental activist groups has the same impact on self-reported future pollinator-friendly plant purchasing as the no-information group.

Campbell further reflects, "What I find extremely interesting is that information alone can have a positive impact on consumers' willingness to purchase more pollinator friendly plants compared to no information. For instance, generic information with no source as well as information linking and not linking pesticides increases willingness to purchase pollinator friendly plants. Further, it is interesting that only universities and major media outlets drive changes in consumer behavior, while other sources of information had no effect. So from this we see that information can move the needle on what consumers purchase, but producers/retailers need to be aware that not all information sources are the same."

Credit: 
American Society for Horticultural Science

How fire causes office-building floors to collapse

image: Inside a fireproof compartment, NIST researchers subjected full-scale replicas of office building floors to fires produced by three gas-fueled burners.

Image: 
NIST

Engineers and technicians at the National Institute of Standards and Technology (NIST) spent months meticulously recreating the long concrete floors supported by steel beams commonly found in high-rise office buildings, only to deliberately set the structures ablaze, destroying them in a fraction of the time it took to build them.

These carefully planned experiments produced cracked concrete slabs and contorted steel beams, but from the rubble arose a wealth of new insights into how real-world structures behave and can eventually fail in uncontrolled building fires. The results of the study, reported in the Journal of Structural Engineering, indicate that structures built to code are not always equipped to survive the forces induced by extreme shifts in temperature, but the data gained here could help researchers develop and validate new design tools and building codes that bolster fire safety.

In the United States, fireproofing materials are sprayed or painted onto weight-bearing beams or columns to slow their temperature rise in case of a fire. These materials, which are typically the only fire-resistance measures integrated into the skeletons of buildings, are required by building codes to be thick enough to delay structural deterioration for a certain number of hours. The responsibility of putting fires out or preventing them from spreading, however, typically falls on measures outside of the structural design, such as sprinkler systems and local fire departments.

The current approach to fire safety is typically sufficient to protect most buildings from collapse; however, there are rare situations in which fire protection systems and firefighting efforts are not enough. In dire circumstances like these, where fires rage in an uncontrolled fashion, flames can sometimes burn so hot that they overwhelm the defense of the fireproofing and seal the structure's fate.

Just like the red liquid in a thermometer rises on a hot day, components of a building will undergo thermal elongation at elevated temperatures. But whereas the liquid has room to expand, steel beams, like those used to hold up floors in office buildings, are typically bound at their ends to support columns, which stay cool and maintain their shape for longer because of additional fireproofing and the reinforcement of the surrounding structure. With very little wiggle room, beams that heat up during fires could press up against their uncompromising boundaries, potentially breaking their connections and causing floors to collapse.
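The scale of the problem follows from the textbook thermal expansion relation, change in length = expansion coefficient x length x temperature rise; the steel temperature in the short calculation below is an assumed, illustrative value rather than a measurement from the NIST tests.

    # Back-of-the-envelope sketch (illustrative values, not data from the NIST tests):
    # free thermal elongation of a steel floor beam, delta_L = alpha * L * delta_T.
    alpha = 1.2e-5     # 1/degC, typical expansion coefficient of structural steel
    length = 12.8      # m, beam span used in the experiments
    delta_T = 600.0    # degC, assumed rise in steel temperature during the fire
    delta_L = alpha * length * delta_T
    print(f"Unrestrained elongation: {delta_L * 100:.1f} cm")  # roughly 9 cm

Because a beam pinned between two stiff columns cannot actually grow by that much, the blocked expansion is converted into large forces at the ends, which is the kind of demand on the connections that these experiments were designed to measure.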

To better prepare buildings for worst-case scenarios, structural designs may need to account for the forces introduced by fires. But because the behavior of a burning building is complex, structural engineers need help predicting how their designs would hold up in an actual fire. Computer models that simulate building fires could provide invaluable guidance, but for those tools to be effective, a considerable amount of experimental data is needed first.

"The main purpose of this experiment is to develop data from realistic structure and fire conditions that can be used for developing or validating computational programs," said Lisa Choe, NIST structural engineer and lead author of the study. "Then the programs can be expanded to different building configurations and used for design."

Structures are seldom fire-tested at a realistic scale. Standard tests make use of laboratory furnaces that typically only accommodate individual components or small assemblies without the kinds of end connections that are used in buildings. Size is less of an issue for NIST, however. Within the National Fire Research Laboratory (NFRL), engineers can build and safely burn structures as tall as two stories and have a plethora of tools available to inspect the destruction.

Mimicking the design of floors from high-rise office buildings, Choe and her colleagues at the NFRL formed concrete slabs atop steel beams spanning 12.8 meters (42 feet) -- a typical length in office buildings and also the longest beams ever fire-tested in the United States. The floors were suspended in the air, fastened at their ends to support columns by either double angle or shear tab connections, which are differently shaped but both commonplace.

To make the test conditions even more true to life, the engineers used a hydraulic system to pull down on the floors, simulating the weight of occupants and moveable objects like furniture. The beams were also coated in fireproofing material with a two-hour fire-resistance rating to meet building code requirements, Choe said.

Inside a fireproof compartment, three natural-gas fueled burners torched the floors from below, releasing heat as rapidly as a real building fire. While the compartment warmed up, various instruments measured the forces felt by the beams along with their deformation and temperature.

As temperatures within the compartment surpassed 1,000 degrees Celsius, the expanding beams, constrained between two support columns, began to buckle near their ends.

No floor came out of the fire tests unscathed, but some withstood more than others. After around one hour of heating, the shear tab connections of one beam -- by then sagging by more than two feet -- fractured, leading to collapse. The beams with double angle connections, however, beat the heat and remained intact. That is, until they tumbled down hours after the burners were shut off, as the cooling beams contracted and pulled back upward, breaking the double angle connections.

While the study's small sample size means conclusions about buildings in general could not be drawn, Choe and her team did find that the beams with double angle connections endured greater forces and deformations from the temperature changes than those with shear tab connections.

"The influence of the thermal elongation and contraction is something that we shouldn't ignore for the design of steel structures exposed to fires. That's the big message," Choe said.

Toward the goal of more robust designs, these results provide invaluable data for researchers developing predictive fire models that could lay a foundation for buildings that resist not only burns, but the force of fire.

Credit: 
National Institute of Standards and Technology (NIST)

Researchers unveil framework for sharing clinical data in AI era

OAK BROOK, Ill. - Clinical data should be treated as a public good when it is used for secondary purposes, such as research or the development of AI algorithms, according to a special report published in the journal Radiology.

"This means that, on one hand, clinical data should be made available to researchers and developers after it has been aggregated and all patient identifiers have been removed," said study lead author David B. Larson, M.D., M.B.A., from the Stanford University School of Medicine in Stanford, California. "On the other hand, all who interact with such data should be held to high ethical standards, including protecting patient privacy and not selling clinical data."

The rapid development of AI, coming on the heels of the widespread adoption of electronic medical records, has opened up exciting possibilities in medicine. AI can potentially streamline and improve the analysis of medical images, but first it must be trained on large troves of data from mammograms, CT scans and other imaging exams. One of the current limitations of the advancement of AI-based tools is the lack of broad consensus on an ethical framework for sharing clinical data.

"Now that we have electronic access to clinical data and the data processing tools, we can dramatically accelerate our ability to gain understanding and develop new applications that can benefit patients and populations," Dr. Larson said. "But unsettled questions regarding the ethical use of the data often preclude the sharing of that information."

To help answer those questions, Dr. Larson and his colleagues at Stanford University developed a framework for using and sharing clinical data in the development of AI applications.

Arguments regarding the sharing of clinical data traditionally have fallen into one of two camps: either the patient owns the data or the institution does. Dr. Larson and colleagues advocate for a third approach based on the idea that, when it comes to secondary use, nobody truly owns the data in the traditional sense.

"Medical data, which are simply recorded observations, are acquired for the purposes of providing patient care," Dr. Larson said. "When that care is provided, that purpose is fulfilled, so we need to find another way to think about how these recorded observations should be used for other purposes. We believe that patients, provider organizations, and algorithm developers all have ethical obligations to help ensure that these observations are used to benefit future patients, recognizing that protecting patient privacy is paramount."

The authors' framework supports the release of de-identified and aggregated clinical data for research and development, as long as those receiving the data identify themselves and act as ethical data stewards. Individual patient consent would not be required, and patients would not necessarily be able to opt out of allowing their clinical data to be used for research or AI algorithm development--so long as their privacy is protected.

"When used in this manner," the article states, "clinical data are simply a conduit to viewing fundamental aspects of the human condition. It is not the data, but rather the underlying physical properties, phenomena and behaviors that they represent, that are of primary interest."

According to the authors, it is in the best interest of future patients for researchers to be able to look "through" the data available in electronic medical records to develop insights into anatomy, physiology and disease processes in populations, as long as they are not looking "at" the identity of the individual patients.

The framework states that it is not ethical for clinical providers to sell clinical data for profit, especially under exclusive arrangements. Corporate entities could profit from AI algorithms developed from clinical data, provided they profit from the activities that they perform rather than from the data itself. In addition, provider organizations could share clinical data with industry partners who financially support their research, if the support is for research rather than for the data.

Safeguards to protect patient privacy include stripping the data of any identifying information.

"We strongly emphasize that protection of patient privacy is paramount. The data must be de-identified," Dr. Larson said. "In fact, those who receive the data must not make any attempts to re-identify patients through identifying technology."

Additionally, if a patient's name was unintentionally made visible--for instance, on a necklace seen on a CT scan--the receiver of the information would be required to notify the party sharing the data and to discard the data as directed.

"We extend the ethical obligations of provider organizations to all who interact with the data," Dr. Larson said.

Dr. Larson and his Stanford colleagues are putting the framework into the public domain for consideration by other individuals and parties, as they navigate the ethical questions surrounding AI and medical data-sharing.

"We hope this framework will contribute to more productive dialogue, both in the field of medicine and computer science, as well as with policymakers, as we work to thoughtfully translate ethical considerations into regulatory and legal requirements," Dr. Larson said.

Credit: 
Radiological Society of North America

Images reveal how bacteria form communities on the human tongue

image: Bacterial biofilm scraped from the surface of the tongue and imaged using CLASI-FISH. Human epithelial tissue forms a central core (gray). Colors indicate different bacteria: Actinomyces (red) occupy a region close to the core; Streptococcus (green) is localized in an exterior crust and in stripes in the interior. Other taxa (Rothia, cyan; Neisseria, yellow; Veillonella, magenta) are present in clusters and stripes that suggest growth of the community outward from the central core.

Image: 
Steven Wilbert and Gary Borisy, The Forsyth Institute

Using a recently developed fluorescent imaging technique, researchers in the United States have developed high-resolution maps of microbial communities on the human tongue. The images, presented March 24 in the journal Cell Reports, reveal that microbial biofilms on the surface of the tongue have a complex, highly structured spatial organization.

"From detailed analysis of the structure, we can make inferences about the principles of community growth and organization," says senior author Gary Borisy, of the Forsyth Institute and the Harvard School of Dental Medicine. "Bacteria on the tongue are a lot more than just a random pile. They are more like an organ of our bodies."

The human oral microbiome is a complex ecosystem. The spatial organization of microbial communities in the mouth is affected by a variety of factors, including temperature, moisture, salivary flow, pH, oxygen, and the frequency of disturbances such as abrasion or oral hygiene. In addition, microbes influence their neighbors by acting as sources and sinks of metabolites, nutrients, and inhibitory molecules such as hydrogen peroxide and antimicrobial peptides. By occupying space, microbes can physically exclude one another from desirable habitats, but their surfaces also present binding sites to which other microbes may adhere.

Yet spatial patterning has received relatively little attention in the field of microbial ecology. "We think that learning who is next to who will help us understand how these communities work," says co-author Jessica Mark Welch (@JMarkWelch), a microbial ecologist at the Marine Biological Laboratory in Woods Hole, Massachusetts. "The tongue is particularly important because it harbors a large reservoir of microbes and is a traditional reference point in medicine. 'Stick out your tongue' is one of the first things a doctor says."

In the new study, the researchers used a technique called Combinatorial Labeling and Spectral Imaging - Fluorescence in situ Hybridization (CLASI-FISH), which was recently developed in the Borisy lab. This strategy involves labeling a given type of microorganism with multiple fluorophores, greatly expanding the number of different kinds of microbes that can be simultaneously identified and localized in a single field of view.
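The payoff of combinatorial labeling is easy to quantify: with n spectrally distinguishable fluorophores, tagging each taxon with a unique pair gives n-choose-2 distinct labels, and allowing any non-empty subset of fluorophores raises the ceiling to 2^n - 1. The short Python sketch below works through that arithmetic; the fluorophore counts are illustrative assumptions, not the study's actual probe panel.

# Combinatorial-labeling arithmetic behind CLASI-FISH-style imaging.
# The fluorophore counts below are illustrative, not the study's probe panel.
from math import comb

def two_label_combinations(n_fluorophores):
    """Taxa distinguishable when each taxon carries a unique pair of fluorophores."""
    return comb(n_fluorophores, 2)

def all_nonempty_subsets(n_fluorophores):
    """Upper bound if any non-empty subset of fluorophores may label a taxon."""
    return 2 ** n_fluorophores - 1

for n in (4, 6, 8):
    print(f"{n} fluorophores: {two_label_combinations(n)} unique pairs, "
          f"{all_nonempty_subsets(n)} possible non-empty subsets")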

"Our study is novel because no one before has been able to look at the biofilm on the tongue in a way that distinguishes all the different bacteria, so that we can see how they arrange themselves," Borisy says. "Most of the previous work on bacterial communities used DNA sequencing-based approaches, but to get the DNA sequence, you have to first grind up the sample and extract the DNA, which destroys all the beautiful spatial structure that was there. Imaging with our CLASI-FISH technique lets us preserve the spatial structure and identify the bacteria at the same time."

First, the researchers analyzed sequence data to identify the major bacterial taxa in small samples scraped from the tongues of 21 healthy participants. Guided by this sequence analysis, the imaging approach targeted major genera and selected species to obtain a comprehensive view of microbiome structure. The researchers identified 17 bacterial genera that were abundant on the tongue and present in more than 80% of individuals. The samples consisted of free bacteria, bacteria bound to host epithelial cells, and bacteria organized into consortia--structurally complex, multi-layer biofilms.

The consortia showed patchiness in community structure, consisting of spatially localized domains dominated by a single taxon. Although they varied in shape, they were typically tens to hundreds of microns long and had a core of epithelial cells and a well-defined perimeter. The tongues of all subjects had consortia consisting of three genera: Actinomyces, Rothia, and Streptococcus. Actinomyces frequently appeared near the core, while Rothia was often observed in large patches toward the exterior of the consortium. Streptococcus was observed forming a thin crust on the exterior of the consortia and also formed veins or patches in their interior.

"Collectively, our species-level imaging results confirm and deepen our understanding of habitat specificity of key players and show the value of investigating microbiomes at high imaging and identification resolution," Mark Welch says.

Taken together, the results suggest a model for how the structured microbial communities harbored on our tongues are generated. First, bacterial cells attach to the epithelium of the tongue's surface singly or in small clusters. During population growth, differing taxa push on one another and proliferate more rapidly in microenvironments that support their physiological needs. This differential growth results in the patch mosaic organization observed in larger, more mature structures.

The images also revealed that some taxa capable of nitrate reduction--Actinomyces, Neisseria, Rothia, and Veillonella--are prominent in tongue consortia. This raises the possibility that small bumps on the surface of the human tongue are structured to encourage the growth of bacteria that convert salivary nitrate to nitrite--a function not encoded by the human host genome.

Credit: 
Cell Press

National study finds diets remain poor for most American children; disparities persist

BOSTON (March 24, 2020, 11:00 a.m. EDT)--Despite consuming fewer sugar-sweetened beverages and more whole grains, most American children and adolescents still eat poorly, and sociodemographic disparities persist, according to an 18-year national study of U.S. children's dietary trends between 1999 and 2016.

Led by Junxiu Liu and Dariush Mozaffarian of the Friedman School of Nutrition Science and Policy at Tufts University, the study is published today in JAMA. The research team analyzed the diets of more than 31,000 U.S. youth, 2-19 years old, based on national data across nine cycles of the National Health and Nutrition Examination Survey (NHANES) between 1999 and 2016. They assessed each child's diet as poor, intermediate or ideal, based on three validated dietary scores, all of which are designed to measure adherence to accepted nutritional guidelines.

The study finds that a majority -- 56 percent -- of American children and adolescents had diets of poor nutritional quality in 2016. This was despite improvements over the 18-year study period including:

The proportion of children and adolescents with poor diets declined from 77 percent to 56 percent.

The proportion of children and adolescents with intermediate diets increased from 23 percent to 44 percent.

At the end of the study period, adolescents (12-19 years old) had the worst diets of the three age categories, with 67 percent found to have a poor diet, compared with 53 percent of children aged 6-11 and 40 percent of children aged 5 and under.

Key dietary disparities persisted, especially those based on parental education and household food security status, while disparities by household income worsened. For example, at the end of the study period, 65 percent of children from households in the lowest income category had a poor diet, compared with 47 percent of children in the highest income category.

"This is a classic 'glass half full or half empty' story," said Mozaffarian, dean of the Friedman School and senior author of the study. "Kids' diets are definitely improving, and that's very positive. On the other hand, most still have poor diets, and this is especially a problem for older youth and for kids whose households have less education, income, or food security."

When the study authors investigated the influence of individual foods and nutrients, they discovered that improvements between 1999 and 2016 amounted to the daily equivalent of:

Eight fewer ounces of sugar-sweetened beverages (which translated into eight fewer teaspoons of added sugar).

A half serving more of whole grains (for example, a half slice of whole grain bread or a quarter cup of rolled oats).

One-fifth serving more of whole fruit (about seven grapes or part of an apple).

"Overall, added sugar intake among American children was reduced by a third, largely because sugary beverages were cut in half," said first author Junxiu Liu, a postdoctoral scholar at the Friedman School. "But, there was little reduction in added sugars consumed from foods, and by 2016, American kids were still eating about 18 teaspoons or about 71.4 grams of added sugar each day - equivalent to one out of every seven calories. That's much too high."

The researchers found that while intakes of some healthful foods increased, they remained far below general national recommendations. By 2016, Mozaffarian noted, kids were eating:

About 1.8 daily servings of fruits and vegetables (less than half the recommendation of 4.5 servings).

One daily serving of whole grains (less than one-third the recommendation of three servings).

Just under half a daily serving of fish/seafood (less than one-fourth of the recommendation of two servings per week).

The team also found that children's salt intake increased and continued to greatly exceed the recommended daily amount, possibly due to more reliance on processed foods and foods prepared away from home.

The authors point out that several national policy efforts to improve the diets of American children during the study period could have contributed to progress, yet children also continue to be marketed foods with low nutritional value. "Food is the number one cause of chronic illness and death in our country, and these results affect our children--our future," Mozaffarian said.

"Our findings of slowly improving, yet still poor, diets in U.S. children are consistent with the slowing of rises in childhood obesity but not any reversal. Understanding these updated trends in diet quality is crucial to informing priorities to help improve the eating habits and long-term health of all of America's youth," Mozaffarian added.

This is a companion study to Mozaffarian and team's research, published in JAMA in 2016, which evaluated the diets of U.S. adults between 1999 and 2014.

Methodology

The researchers used data for U.S. children 2-19 years old from nine cycles of the National Health and Nutrition Examination Survey (NHANES) between 1999 and 2016. NHANES is a nationally representative study maintained by the National Center for Health Statistics. Children aged 12 and older completed the dietary recall on their own. Proxy-assisted interviews (for example, a parent) were conducted for children 6-11 years old, and proxy respondents reported diets for children aged 5 and younger. Respondents are representative of the national population and completed at least one valid 24-hour dietary recall questionnaire.

The study authors used the USDA Food Patterns Equivalents Database (FPED) and MyPyramid Equivalents Database (MPED) to assess changes in food groups. The authors used the USDA Food and Nutrient Database for Dietary Studies (FNDDS) to assess nutrient intake.

The authors assessed dietary quality using the validated American Heart Association (AHA) diet score, which includes a primary score for consumption of fruits and vegetables, fish/shellfish, whole grains, sodium, and sugar-sweetened beverages, and a secondary AHA score further adding the intakes of nuts/seeds/legumes, processed meat, and saturated fat. As a complement to the AHA score, they used the Healthy Eating Index (HEI)-2015, which measures adherence to the 2015-2020 Dietary Guidelines for Americans.
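As an illustration of how a composite score of this kind might be mapped onto the poor, intermediate, and ideal categories, here is a minimal Python sketch. The 0-10 scale for each component and the 40 percent and 80 percent adherence cutoffs are assumptions made for demonstration, not the study's published scoring rules.

# Hypothetical sketch: turning component scores into diet-quality categories.
# The 0-10 component scale and the 40% / 80% cutoffs are assumptions for
# illustration, not the scoring rules used in the study.
PRIMARY_COMPONENTS = (
    "fruits_and_vegetables", "fish_shellfish", "whole_grains",
    "sodium", "sugar_sweetened_beverages",
)

def classify_diet(component_scores, max_per_component=10):
    """component_scores: dict mapping each primary component to a 0-10 score."""
    total = sum(component_scores[c] for c in PRIMARY_COMPONENTS)
    adherence = total / (max_per_component * len(PRIMARY_COMPONENTS))
    if adherence < 0.40:
        return "poor"
    if adherence < 0.80:
        return "intermediate"
    return "ideal"

example = {
    "fruits_and_vegetables": 3, "fish_shellfish": 2, "whole_grains": 4,
    "sodium": 5, "sugar_sweetened_beverages": 4,
}
print(classify_diet(example))   # 18/50 = 36% adherence -> "poor"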

Limitations of the study include the fact that self-reported dietary recall data may be inaccurate, since respondents can over-report or under-report certain foods. In addition, the dietary scores used have been validated against disease outcomes only among adults.

Credit: 
Tufts University, Health Sciences Campus

Is step count associated with lower risk of death?

What The Study Did: Researchers examined whether taking more steps per day and stepping at higher intensity were associated with a reduced risk of death in this observational study of almost 4,900 adults (aged 40 and over) who wore a device called an accelerometer to measure their step count and step intensity (steps/minute).

Authors: Pedro F. Saint-Maurice, Ph.D., of the National Cancer Institute in Rockville, Maryland, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jama.2020.1382)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Efficiency of non-invasive brain stimulation for memory improvement: Embracing the challenge

A group of scientists from the Research Center of Neurology and Skoltech has shown that human working memory can be enhanced with non-invasive magnetic stimulation of the brain. They also found that the effect of the stimulation weakens when the brain is engaged in a cognitive task during stimulation.

Working memory (WM) stores and processes the information we need for daily use. The WM mechanisms get activated when, for example, we memorize a phone number until we find a scrap of paper or a smartphone to write it down. WM disorders are a frequent occurrence in many nervous system diseases, whereas in healthy people, the WM capacity is associated with an individual's learning ability and general intelligence level.

Transcranial magnetic stimulation (TMS) is regarded as one of the most promising non-pharmacological methods of WM enhancement. It uses an alternating magnetic field that passes painlessly through the scalp and skull and induces an electric field in the cortex. Because TMS can influence the mechanisms of neuroplasticity, it is used as a therapeutic method for various nervous system diseases. The effects of TMS are known to depend both on the stimulation parameters and on the brain's activity during stimulation. Combining TMS with concurrent cognitive activity has evolved into a cognitive enhancement technique for patients with Alzheimer's disease. However, data are still lacking on how exactly brain activity influences the efficiency of TMS.

The researchers compared the effects of TMS on WM when stimulation was applied with and without a concurrent cognitive load. WM performance was evaluated before and after a 20-minute stimulation session, and the stimulation site was selected from each individual's pattern of brain activation during a WM-engaging task. The results suggest that WM improved only after TMS delivered without a cognitive load.

"The results of our research lead us to conclude that cognitive activity can reduce rather than increase the TMS efficiency. This should be borne in mind when developing new stimulation protocols for cognitive enhancement in both healthy volunteers and patients suffering from various nervous system diseases," says Natalya Suponeva, Head of Department of Neurorehabilitation and Physiotherapy at the Research Center of Neurology and Associate Member of RAS.

Maxim Fedorov, Director of the Skoltech Center for Computational and Data-Intensive Science and Engineering (CDISE), is inspired by the research outcomes and the ensuing opportunities: "The results attest to the efficiency of interdisciplinary research in biomedicine and cognitive sciences, benefiting from advanced data processing methods. We at CDISE have much interest in collaborating with the Research Center of Neurology and studying WM mechanisms for a number of reasons. First, this would be an exciting experience and a good opportunity to apply some of the findings in practice in the short term (better memory is what many of us need). Second, modern biomedical research tools open up broad horizons for data and AI scientists. Data are abundant but sometimes too noisy and the data samples are often heterogeneous. Generally speaking, we are faced with non-trivial tasks that prompt ideas for new research targets in our field. Third, many ideas in Big Data and AI, such as neural networks, were born out of research into the human higher nervous activity. And this is very interesting. Currently, we are busy working on many projects at the crossroads of neuroscience, simulation and Big Data. Personally, I believe that man is as boundless as the Universe, and we are just beginning to understand how interesting we are and how much potential we have. I am convinced that we have a lot of unexpected discoveries ahead of us. We strongly hope that our collaboration with the Research Center of Neurology will be a continued success."

Currently, the study is moving forward with a larger number of healthy volunteers in order to validate the recent findings and evaluate the long-term effect of TMS on WM performance.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)