Culture

Fish school by randomly copying each other, rather than following the group

image: A school of trevallies.

Image: Photo by Milos Prelevic on Unsplash.

Fish school by copying each other and changing directions randomly, rather than calculating and adapting to an average direction of the group, an international team of scientists co-led by UNSW has shown.

In a study published today in Nature Physics, an international team from Australia, India and the UK has shed light on the behavioural dynamics that govern alignment, or collective motion, of cichlid fish - offering new insights into the dynamics of schooling, and potentially the coordinated behaviour of other animals.

"In the fish that we have studied, schooling turns out to be noise-induced. It's not what anyone traditionally thought it was," says Dr Richard Morris from UNSW Science, co-leader of the study and EMBL Australia group leader in UNSW's Single Molecule Science.

"Noise, in this setting, is simply the randomness arising from interactions between individual fish."

In the study, the researchers present the first experimental evidence of noise-induced ordering, which previously only existed as a theoretical possibility. The interdisciplinary team of ecologists, physicists and mathematicians achieved this by combining the capabilities of their scientific disciplines to integrate experiments with computer simulations and analysis.

"Everyone's been aware of noise-induced phenomena, theoretically, but it's quite rare to find in practice. You can only observe it when the individuals in a study can actually make decisions. For example, you wouldn't find this type of noise-induced behaviour studying electrons or particles," says Dr Morris.

The newly proposed model contradicts the standard 'moving average' theories for schooling and herding behaviour, which assume that the animals are capable of estimating the overall direction of the group.

"Every fish only interacts with one other fish at any given time. They either spontaneously change direction, or copy the direction of a different fish. Calculating an average direction of the group - which was the popular theory until now - is likely too complicated for a fish to compute," explains Dr Morris.
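The copy-or-turn rule Dr Morris describes amounts to a simple algorithm. The sketch below is purely illustrative and is not the study's actual model: the two-heading simplification, the `copy_rate` value and the update scheme are assumptions made here, chosen only to show how group alignment can emerge from random pairwise copying with no averaging at all.

```python
import random

def simulate_school(n_fish, steps, copy_rate=0.9, seed=0):
    """Toy pairwise-copying model of schooling (illustrative only).

    Each fish holds one of two headings (+1 or -1). At each step one
    random fish either copies the heading of a single randomly chosen
    other fish (probability copy_rate) or picks a fresh random heading
    (spontaneous direction change). Returns the average polarization,
    |sum of headings| / n_fish, over the second half of the run.
    """
    rng = random.Random(seed)
    headings = [rng.choice([-1, 1]) for _ in range(n_fish)]
    polarization = []
    for step in range(steps):
        i = rng.randrange(n_fish)
        if rng.random() < copy_rate:
            j = rng.randrange(n_fish)
            while j == i:                      # copy a *different* fish
                j = rng.randrange(n_fish)
            headings[i] = headings[j]
        else:
            headings[i] = rng.choice([-1, 1])  # spontaneous random turn
        if step >= steps // 2:
            polarization.append(abs(sum(headings)) / n_fish)
    return sum(polarization) / len(polarization)
```

With these toy parameters, smaller simulated groups typically settle into higher average polarization than larger ones, echoing the counterintuitive observation from the study that smaller schools are more coherent.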

To study the behavioural dynamics, the researchers filmed schools of 15, 30 and 60 cichlid fish, tracking their trajectories to analyse the mechanism behind mutual alignment, or schooling.

"Smaller groups of fish schooled more coherently than large groups. This is counterintuitive, since the randomness, or noise, from individual interactions plays a bigger role in smaller groups than larger ones," Dr Morris says.

When researchers interpret data, noise is usually an unrelated factor that obscures and distracts from the information, like glare from the sun that you would try to eliminate to get a clearer photo.

In this case, Dr Morris explains that the random copying between pairs of fish gives rise to a different class of noise, and is actually what drives their highly coordinated behaviour. This new insight highlights the importance of noise, showing that noise may encode some important information about behavioural dynamics of fish and other animals.

"Here the signal is the noise. If you ignored the fluctuations completely, you couldn't explain schooling at all."

Beyond fish behaviour, the discovery has the power to reshape the understanding of collective motion in animals, and calls for a revision of how noise is treated in studies of behaviour dynamics.

Credit: 
University of New South Wales

Blood test method may predict Alzheimer's protein deposits in brain

Researchers report an advance in the development of a blood test that could help detect pathological Alzheimer's disease in people who are showing signs of dementia. This approach could be less invasive and less costly than current brain imaging and spinal fluid tests. The blood test detects the abnormal accumulation of a form of tau protein known as phosphorylated-tau-181 (ptau181), which is a biomarker that suggests brain changes from Alzheimer's. The study, funded by the National Institutes of Health, was published on March 2 in Nature Medicine.

Over the past 15 years, research advances in the development of biomarkers like tau protein have enabled investigators to more accurately diagnose Alzheimer's disease, select research participants, and measure response to investigational therapies. Tau and other biomarkers can be detected with PET scans of the brain and lab tests of spinal fluid. However, PET imaging is expensive and involves radioactive agents, and spinal fluid tests require spinal taps, which are invasive, complex and time-consuming. Simpler biomarker tests are still needed.

"The considerable time and resources required for screening research participants with PET scans and spinal taps slow the pace of enrollment for Alzheimer's disease treatment studies," said Richard J. Hodes, M.D., director of NIH's National Institute on Aging (NIA), which funded much of the study. "The development of a blood test would enable us to rapidly screen a much larger and more diverse group of volunteers who wish to enroll in studies."

An international team of researchers led by Adam Boxer, M.D., Ph.D., at the University of California, San Francisco, used the new test to measure the concentration of ptau181 in plasma, which is the liquid part of blood that carries the blood cells. The samples were collected from more than 400 participants from the University of California, San Francisco Memory and Aging Center, part of the NIA-funded Alzheimer's Disease Research Center; the NIH-supported Advancing Research and Treatment for Frontotemporal Lobar Degeneration (ARTFL) consortium; and a research study sponsored by Eli Lilly.

Their analysis demonstrated that the ptau181 in plasma could differentiate healthy participants from those with Alzheimer's pathology, and differentiate those with Alzheimer's pathology from a group of rare neurodegenerative diseases known collectively as frontotemporal lobar degeneration (FTLD).

"It has become clear that there are many possible biological pathways to dementia," said Roderick Corriveau, Ph.D., program director at NIH's National Institute of Neurological Disorders and Stroke (NINDS), which also supported the study. "Finding a blood test that specifically identifies the presence of Alzheimer's pathology in the brain should greatly help researchers develop better treatments for the many who suffer from dementia."

In addition, the results with the plasma ptau181 test mirrored results with two established biomarker tests for Alzheimer's -- a spinal fluid ptau181 test and a PET brain scan for amyloid protein. The research team, which includes the NIH's ARTFL-LEFFTDS Longitudinal Frontotemporal Lobar Degeneration (ALLFTD) research consortium that was announced last year, is now aiming to refine and improve the ptau181 blood test method.

"Because of NIH's investments, we are poised to make dramatic advances in biomarker development for Alzheimer's disease, FTLD, and related neurodegenerative disorders," said Eliezer Masliah, M.D., director of NIA's Division of Neuroscience.

In the future, improved biomarkers like ptau181 may help not just researchers but also physicians to detect and diagnose Alzheimer's and related neurodegenerative disorders earlier, when interventions are more likely to be effective.

"This research is an example of how studies on rare diseases, in this case FTLD, may provide important insights into common disorders such as Alzheimer's disease, which affects millions of people," said Tiina Urv, Ph.D., program officer in the Office of Rare Diseases Research at the NIH's National Center for Advancing Translational Sciences (NCATS), which also supported the study.

A different international team, this one led by Oskar Hansson, M.D., Ph.D., at Lund University in Sweden and supported in part by NIH, reported similar findings. Using the same plasma ptau181 test, these researchers were able to differentiate between Alzheimer’s and other neurodegenerative diseases nearly as well as they could with a spinal fluid ptau181 test and a PET brain scan for tau protein. In addition, they followed participants for several years and observed that high levels of plasma ptau181 among those who were cognitively normal or had mild cognitive impairment may be used to predict later development of Alzheimer’s dementia. These results were also published today in Nature Medicine.

Credit: 
NIH/National Institute on Aging

Memory concerns? Blood test may put mind at ease or pave way to promising treatments

A blood test that may eventually be done in a doctor's office can swiftly reveal if a patient with memory issues has Alzheimer's disease or mild cognitive impairment and can also distinguish both conditions from frontotemporal dementia. If approved, the blood test could lead to a jump in the number of Alzheimer's patients enrolling in clinical trials and be used to monitor response to those investigational treatments.

In a study led by UC San Francisco, researchers measured blood levels of phosphorylated tau 181 (pTau181), a brain protein that aggregates in tangles in patients with Alzheimer's. They found that pTau181 was 3.5 times higher in people with the disease compared to their healthy peers. In contrast, in patients with frontotemporal dementia, a condition that is often misdiagnosed as Alzheimer's, pTau181 was found to be within the same range as the control group.

The study was published in Nature Medicine on March 2, 2020.

"This test could eventually be deployed in a primary care setting for people with memory concerns to identify who should be referred to specialized centers to participate in clinical trials or to be treated with new Alzheimer's therapies, once they are approved," said senior author Adam Boxer, MD, PhD, of the UCSF Memory and Aging Center. "Being able to easily diagnose Alzheimer's disease at early stages may be especially beneficial to patients with mild cognitive impairment, some of whom may have early Alzheimer's disease. Individuals with early Alzheimer's are more likely to respond to many of the new treatments that are being developed."

Current Alzheimer's Testing Expensive, Invasive

Existing methods for diagnosing Alzheimer's include measurement of the deposits of amyloid, another protein implicated in dementia, from a PET scan; or using lumbar puncture to quantify amyloid and tau in cerebrospinal fluid. PET scans are expensive, only available in specialized centers and currently not covered by insurance, and lumbar punctures are invasive, labor intensive and not easy to perform in large populations, the authors noted.

There are 132 drugs in clinical trials for Alzheimer's, according to a 2019 study, including 28 that are being tested in 42 phase-3 trials -- the final part of a study before approval is sought from the federal Food and Drug Administration. Among those phase-3 drugs is aducanumab, which some experts believe may be the first drug approved to slow the progression of Alzheimer's.

In the study, participants underwent testing to measure pTau181 from plasma, the liquid part of blood. They were aged from 58 to 70 and included 56 who had been diagnosed with Alzheimer's, 47 with mild cognitive impairment and 69 of their healthy peers. Additionally, participants included 190 people with different types of frontotemporal dementia, a group of brain disorders caused by degeneration of the frontal and temporal lobes, areas of the brain associated with decision-making, behavioral control, emotion and language. Among adults under 65, frontotemporal dementia is as common as Alzheimer's.

Blood Test Measures Up to Established Tool

The researchers found that blood measures of pTau181 were 2.4 pg/ml among healthy controls, 3.7 pg/ml among those with mild cognitive impairment and 8.4 pg/ml for those with Alzheimer's. In people with variants of frontotemporal dementia, levels ranged from 1.9 to 2.8 pg/ml. These results gave similar information to the more established diagnostic tools of PET scan measures of amyloid or tau protein, Boxer said.
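As a quick sanity check, the group means reported here reproduce the "3.5 times higher" figure quoted earlier in this article (the dictionary labels below are names chosen for this sketch, not the study's terminology):

```python
# Mean plasma pTau181 levels (pg/ml) as reported in this release.
levels = {
    "healthy_controls": 2.4,
    "mild_cognitive_impairment": 3.7,
    "alzheimers": 8.4,
}

# Fold difference between Alzheimer's patients and healthy controls.
fold_increase = levels["alzheimers"] / levels["healthy_controls"]
print(round(fold_increase, 1))  # 3.5
```

Note also that the frontotemporal dementia range quoted (1.9 to 2.8 pg/ml) straddles the healthy-control mean of 2.4 pg/ml, which is what allows the test to separate the two dementias.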

The study follows research by other investigators published last year that found high levels of plasma amyloid were a predictor of Alzheimer's. However, amyloid accumulates in the brain many years before symptoms emerge, if they emerge, said Boxer, who is affiliated with the UCSF Weill Institute for Neurosciences.

"In contrast, the amount of tau that accumulates in the brain is very strongly linked to the onset, the severity and characteristic symptoms of the disease," he said.

A companion study by Oskar Hansson, MD, PhD, of Lund University, Sweden, published in the same issue of Nature Medicine corroborated the results of the UCSF-led study. It concluded that pTau181 was a stronger predictor of developing Alzheimer's in healthy elders than amyloid.

The researchers said they hope to see the blood test available in doctors' offices within five years.

Credit: 
University of California - San Francisco

5,000-year-old milk proteins point to the importance of dairying in eastern Eurasia

image: These are sheep and goat herds in Mongolia.

Image: Björn Reichhardt

Today dairy foods sustain and support millions around the world, including in Mongolia, where dairy foods make up to 50% of calories consumed during the summer. Although dairy-based pastoralism has been an essential part of life and culture in the eastern Eurasian Steppe for millennia, the eastward spread of dairying from its origin in southwest Asia and the development of these practices is little understood. The current study, led by Shevan Wilkin and Jessica Hendy of the Max Planck Institute for the Science of Human History, presents the earliest evidence for dairy consumption in East Asia, circa 3000 BCE, and offers insights into the arrival and evolution of dairy pastoralism in prehistoric Mongolia.

Earliest dairy consumption & a possible path of entry

The highly mobile nature of pastoralist societies and the severe winds of the Eastern Steppe make occupation sites with direct evidence of the lives and culture of ancient Mongolians exceedingly rare to find. Instead, the researchers looked for clues in ritual human burial mounds, often marked by stone monuments and occasionally featuring satellite animal graves.

In collaboration with the National University of Mongolia, researchers analyzed dental calculus from individuals ranging from the Early Bronze Age to the Mongol Period. Three-quarters of all individuals contained evidence that they had consumed dairy foods, which demonstrates the widespread importance of this food source in both prehistoric and historic Mongolia. The study's results include the earliest direct evidence for dairy consumption in East Asia, identified in an individual from the Afanasievo site of Shatar Chuluu, which dates to roughly 3000 BCE. Previous DNA analysis on this individual revealed non-local genetic markers consistent with Western Steppe Herder populations, presenting Early Bronze Age Afanasievo migrations eastward via the Russian Altai as a viable candidate for the introduction of dairy and domestic livestock into eastern Eurasia.

Multiple different animal species were used for their milk

By sequencing the milk proteins extracted from the dental calculus, the scientists were able to determine which animal species were being used for dairy production, and thereby help to trace the progression of domestication, dairying, and pastoralism in the region. "Modern Mongolians use cow, sheep, goat, yak, camel, horse and reindeer for milk today, yet when each of these species was first utilized for dairy in Mongolia remains unclear," says Shevan Wilkin, lead author of the study. "What is clear is that the crucial renewable calories and hydration made available through the incorporation of dairying would have become essential across the arid and agriculturally challenging ancient Eastern Steppe."

The earliest individuals to show evidence of dairy consumption lived around 5000 years ago and consumed milk from ruminant species, such as cattle, sheep, and goats. A few thousand years later, at Bronze Age sites dated to after 1200 BCE, the researchers find the first evidence of horse milk consumption, occurring at the same time as early evidence for horse bridling and riding, as well as the use of horses at ritual burial sites. In addition, the study shows that during the Mongol Empire circa 1200-1400 CE, people also consumed the milk of camels. "We are excited that through the analysis of proteins we are able to see the consumption of multiple different animal species, even sometimes in the same individual. This gives us a whole new insight into ancient dairying practices," says Jessica Hendy, senior author of the study.

Millennia after the first evidence of horse milk consumption, horses remain vital to the daily lives of many in modern Mongolia, where mounted pastoralists rely on them to manage large herds of livestock, transport people and supplies, and provide a primary source of meat and milk. "Our findings suggest that the incorporation of horses into dairy pastoralism in Eastern Eurasia was closely linked to a broader economic transformation in the use of horses for riding, movement, and diet," says William Taylor of the University of Colorado-Boulder, one of the study's coauthors.

Although the earliest individual sampled in this study showed evidence of dairy consumption, the researchers hope future studies will examine individuals from previous time periods. "In order to form a clearer picture of the origins of dairying in this region, we need to understand the impact of western steppe herder migrations and confirm whether dairying was occurring in Mongolia prior to their arrival," Shevan Wilkin concludes.

Credit: 
Max Planck Institute of Geoanthropology

The neural basis of sensory hypersensitivity

CAMBRIDGE, MA -- Many people with autism spectrum disorders are highly sensitive to light, noise, and other sensory input. A new study in mice reveals a neural circuit that appears to underlie this hypersensitivity, offering a possible strategy for developing new treatments.

MIT and Brown University neuroscientists found that mice lacking a protein called Shank3, which has been previously linked with autism, were more sensitive to a touch on their whiskers than genetically normal mice. These Shank3-deficient mice also had overactive excitatory neurons in a region of the brain called the somatosensory cortex, which the researchers believe accounts for their over-reactivity.

There are currently no treatments for sensory hypersensitivity, but the researchers believe that uncovering the cellular basis of this sensitivity may help scientists to develop potential treatments.

"We hope our studies can point us to the right direction for the next generation of treatment development," says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience at MIT and a member of MIT's McGovern Institute for Brain Research.

Feng and Christopher Moore, a professor of neuroscience at Brown University, are the senior authors of the paper, which appears today in Nature Neuroscience. McGovern Institute research scientist Qian Chen and Brown postdoc Christopher Deister are the lead authors of the study.

Too much excitation

The Shank3 protein is important for the function of synapses -- connections that allow neurons to communicate with each other. Feng has previously shown that mice lacking the Shank3 gene display many traits associated with autism, including avoidance of social interaction, and compulsive, repetitive behavior.

In the new study, Feng and his colleagues set out to study whether these mice also show sensory hypersensitivity. For mice, one of the most important sources of sensory input is the whiskers, which help them to navigate and to maintain their balance, among other functions.

The researchers developed a way to measure the mice's sensitivity to slight deflections of their whiskers, and then trained the mutant Shank3 mice and normal ("wild-type") mice to display behaviors that signaled when they felt a touch to their whiskers. They found that mice that were missing Shank3 accurately reported very slight deflections that were not noticed by the normal mice.

"They are very sensitive to weak sensory input, which barely can be detected by wild-type mice," Feng says. "That is a direct indication that they have sensory over-reactivity."

Once they had established that the mutant mice experienced sensory hypersensitivity, the researchers set out to analyze the underlying neural activity. To do that, they used an imaging technique that can measure calcium levels, which indicate neural activity, in specific cell types.

They found that when the mice's whiskers were touched, excitatory neurons in the somatosensory cortex were overactive. This was somewhat surprising because when Shank3 is missing, synaptic activity should drop. That led the researchers to hypothesize that the root of the problem was low levels of Shank3 in the inhibitory neurons that normally turn down the activity of excitatory neurons. Under that hypothesis, diminishing those inhibitory neurons' activity would allow excitatory neurons to go unchecked, leading to sensory hypersensitivity.

To test this idea, the researchers genetically engineered mice so that they could turn off Shank3 expression exclusively in inhibitory neurons of the somatosensory cortex. As they had suspected, they found that in these mice, excitatory neurons were overactive, even though those neurons had normal levels of Shank3.

"If you only delete Shank3 in the inhibitory neurons in the somatosensory cortex, and the rest of the brain and the body is normal, you see a similar phenomenon where you have hyperactive excitatory neurons and increased sensory sensitivity in these mice," Feng says.

Reversing hypersensitivity

The results suggest that reestablishing normal levels of neuron activity could reverse this kind of hypersensitivity, Feng says.

"That gives us a cellular target for how in the future we could potentially modulate the inhibitory neuron activity level, which might be beneficial to correct this sensory abnormality," he says.

Many other studies in mice have linked defects in inhibitory neurons to neurological disorders, including Fragile X syndrome and Rett syndrome, as well as autism.

"Our study is one of several that provide a direct and causative link between inhibitory defects and sensory abnormality, in this model at least," Feng says. "It provides further evidence to support inhibitory neuron defects as one of the key mechanisms in models of autism spectrum disorders."

He now plans to study the timing of when these impairments arise during an animal's development, which could help to guide the development of possible treatments. There are existing drugs that can turn down excitatory neurons, but these drugs have a sedative effect if used throughout the brain, so more targeted treatments could be a better option, Feng says.

"We don't have a clear target yet, but we have a clear cellular phenomenon to help guide us," he says. "We are still far away from developing a treatment, but we're happy that we have identified defects that point in which direction we should go."

Credit: 
Massachusetts Institute of Technology

Sex differences in salaries of department chairs at state medical schools

What The Study Did: Researchers investigated pay differences by sex at the highest ranks of academic medicine among clinical department chairs at 29 state medical schools in 12 states.

Authors: Eleni Linos, M.D., Dr.P.H., of the Stanford University School of Medicine in California, is the corresponding author.

To access the embargoed study:  Visit our For The Media website at this link https://media.jamanetwork.com/ 

(doi:10.1001/jamainternmed.2019.7540)

Editor's Note: The article includes conflict of interest disclosures. Please see the articles for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Two stars merged to form massive white dwarf

image: Artist's impression of two white dwarfs in the process of merging. Depending on the combined mass, the system may explode in a thermonuclear supernova, or coalesce into a single heavy white dwarf, as with WDJ0551+4135.

Image: University of Warwick/Mark Garlick

A massive white dwarf star with a bizarre carbon-rich atmosphere could be two white dwarfs that merged together and only narrowly avoided destruction, according to an international team led by University of Warwick astronomers.

They have discovered an unusual ultra-massive white dwarf around 150 light years from us with an atmospheric composition never seen before - the first time that a merged white dwarf has been identified using its atmospheric composition as a clue.

The discovery, published today (2 March) in the journal Nature Astronomy, could raise new questions about the evolution of massive white dwarf stars and about the number of supernovae in our galaxy.

This star, named WDJ0551+4135, was identified in a survey of data from the European Space Agency's Gaia telescope. The astronomers followed up with spectroscopy taken using the William Herschel Telescope, focusing on those white dwarfs identified as particularly massive - a feat made possible by the Gaia mission. By breaking down the light emitted by the star, the astronomers were able to identify the chemical composition of its atmosphere and found that it had an unusually high level of carbon present.

Lead author Dr Mark Hollands, from the University of Warwick Department of Physics, said: "This star stood out as something we had never seen before. You might expect to see an outer layer of hydrogen, sometimes mixed with helium, or just a mix of helium and carbon. You don't expect to see this combination of hydrogen and carbon at the same time as there should be a thick layer of helium in between that prohibits that. When we looked at it, it didn't make any sense."

To solve the puzzle, the astronomers turned detective to uncover the star's true origins.

White dwarfs are the remains of stars like our own Sun that have burnt out all their fuel and shed their outer layers. Most are relatively lightweight, around 0.6 times the mass of our Sun, but this one weighs in at 1.14 solar masses, nearly twice the average mass. Despite being heavier than our Sun, it is compacted into two-thirds the diameter of Earth.

The age of the white dwarf is also a clue. Older stars orbit the Milky Way faster than younger ones, and this object is moving faster than 99% of the other nearby white dwarfs with the same cooling age, suggesting that this star is older than it looks.

Dr Hollands adds: "We have a composition that we can't explain through normal stellar evolution, a mass twice the average for a white dwarf, and a kinematic age older than that inferred from cooling. We're pretty sure of how one star forms one white dwarf and it shouldn't do this. The only way you can explain it is if it was formed through a merger of two white dwarfs."

The theory is that when one star in a binary system expands at the end of its life it will envelope its partner, drawing its orbit closer as the first star shrinks. The same will happen when the other star expands. Over billions of years, gravitational wave emission will shrink the orbit further, to the point that the stars merge together.
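The "billions of years" timescale for the final, gravitational-wave-driven stage can be made concrete with the standard Peters (1964) formula for the coalescence time of a circular binary. The masses and the 0.01 AU starting separation below are illustrative assumptions chosen here, not values from the study:

```python
def gw_merger_time_years(m1_solar, m2_solar, a_au):
    """Peters (1964) coalescence time for a circular binary, in years.

    t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
    """
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg
    AU = 1.496e11      # astronomical unit, m
    YEAR = 3.156e7     # seconds per year
    m1, m2 = m1_solar * M_SUN, m2_solar * M_SUN
    a = a_au * AU
    t_seconds = 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))
    return t_seconds / YEAR

# Two typical ~0.6 solar-mass white dwarfs starting 0.01 AU apart
# (an assumed, illustrative separation):
t = gw_merger_time_years(0.6, 0.6, 0.01)
print(f"{t:.1e}")
```

With these inputs the result lands in the range of billions of years, consistent with the timescale described above; because the time scales as the fourth power of the separation, even modestly wider orbits would not merge within the age of the Universe.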

While white dwarf mergers have been predicted to occur, this one would be particularly unusual. Most of the mergers in our galaxy will be between stars with different masses, whereas this merger appears to be between two similarly sized stars. There is also a limit to how big the resulting white dwarf can be: at more than 1.4 solar masses it is thought that it would explode in a supernova, though it may be possible for these explosions to occur at slightly lower masses, so this star is useful in demonstrating how massive a white dwarf can get and still survive.

Because the merging process restarts the cooling of the star, it is difficult to determine how old it is. The white dwarf probably merged around 1.3 billion years ago but the two original white dwarfs may have existed for many billions of years prior.

It is one of only a handful of merged white dwarfs to be identified so far, and the only one via its composition.

Dr Hollands adds: "There aren't that many white dwarfs this massive, although there are more than you would expect to see, which implies that some of them were probably formed by mergers.

"In the future we may be able to use a technique called asteroseismology to learn about the white dwarf's core composition from its stellar pulsations, which would be an independent method confirming this star formed from a merger.

"Maybe the most exciting aspect of this star is that it must have just about failed to explode as a supernova - these gargantuan explosions are really important in mapping the structure of the Universe, as they can be detected out to very large distances. However, there remains much uncertainty about what kind of stellar systems make it to the supernova stage. Strange as it may sound, measuring the properties of this 'failed' supernova, and future look-alikes, is telling us a lot about the pathways to thermonuclear self-annihilation."

Credit: 
University of Warwick

Study shows rising age of first drug use in teens, young adults

SPOKANE, Wash. - The average age at which teens and young adults start using drugs has been rising, according to a study published today in JAMA Pediatrics.

The study examined changes in the average age of first drug use for 18 different drugs--including alcohol and tobacco products--between 2004 and 2017 and found that average ages had increased for the majority of those drugs.

For example, the study showed that the average age at which young people first consumed alcohol or smoked cigarettes rose from 16 in 2004 to 17 in 2017. Those who reported using heroin or cocaine for the first time had an average age of just over 17 in 2004, which had risen to about 18 for heroin and close to 19 for cocaine by 2017.

"This is great news, because delaying drug use prevents early exposure, which is associated with a variety of negative health consequences, including increased risk of drug use disorder and long-term impairments such as depression, neurocognitive deficits, involvement in risky behaviors, and sexually transmitted diseases," said lead author Karl Alcover, a postdoctoral research associate in Washington State University's Elson S. Floyd College of Medicine.

In their study, the researchers used publicly available data collected as part of the National Survey on Drug Use and Health, an annual survey that looks at drug use in a representative sample of US residents aged 12 and older. They included data on 84,317 respondents between the ages of 12 and 21 who were surveyed between 2004 and 2017 and had reported first-time drug use in the previous 12 months.

The researchers analyzed the data to estimate the average age at first-time use for 18 internationally regulated drugs for each year included in the study. Looking at year-to-year trends, they found that the average age at first use had increased for 12 out of 18 drugs, including alcohol, cocaine, ecstasy, hallucinogens, heroin, inhalants, LSD, marijuana, stimulants, and tobacco products such as cigars, cigarettes and smokeless tobacco. For the other six drugs--crack cocaine, methamphetamines, opioids, PCP, sedatives, and tranquilizers--they found no statistically significant changes in the age at first use.

Increases were relatively consistent from year to year for all but two drugs: alcohol and LSD. In 2004, the average age at which young people first consumed alcohol was approximately 16 years. It increased consistently through 2014, after which it leveled off at about age 17 in 2017. The average age of first use of LSD increased significantly through 2014 and subsequently declined, but still showed an overall increase over the entire timeframe studied. These findings suggest that trends toward starting to use at a later age may have already ended for those two drugs, Alcover said.

The earliest average age of first drug use across the study timeframe was 15.4 for inhalants, whereas the latest average age was 18.0 for cocaine and crack cocaine.

"Our study shows that since 2004 fewer individuals started using drugs at age 15 and younger, which is what we would typically consider as early-onset drug use," Alcover said. "These promising trends may serve as early evidence that prevention strategies--especially those focused on teens and young adults--are working."

Alcover said the next step is to investigate what drives the trends seen in this study. The success of prevention efforts is one possible explanation, but it could also be that young people's preferences have switched to new drugs such as e-cigarettes, which were not included in the survey data. He also noted that further research should be done to understand why some drugs did not show an increase in average age, which could help improve prevention strategies for those drugs.

"Prevention of drug use is the best approach to reducing drug-related burden in the population," Alcover said.

Credit: 
Washington State University

Fallowing cattle-feed farmland simplest way to alleviate western water shortage

All over the world, the rate at which humans consume fresh water is now approaching or surpassing the rate at which water sources are being naturally replenished, creating water shortages for people and ecosystems. In the western US, water shortages are becoming more frequent and more severe, putting many species of fish inhabiting western rivers at risk--and the scarcity of water also threatens the growth of cities in the region, such as Los Angeles and Phoenix.

An important new study published this week in Nature Sustainability finds that irrigated crop production accounts for 86 percent of all water consumed in the western US--and of all the water used on western farms, by far the largest portion goes to cattle-feed crops such as alfalfa and grass hay. To alleviate the severe shortage of water in the region--especially in the Colorado River basin--the study's authors suggest that rotational fallowing of farmland, leaving the land uncultivated for a period of time, could be a simple and affordable means of dramatically reducing water use in the region.

Study co-author and principal investigator Ben Ruddell, who is also director of NAU's School of Informatics, Computing, and Cyber Systems, leads the FEWSION project, a multi-institutional team effort launched in 2016 and funded through the National Science Foundation (NSF), to assess the nation's food, energy and water systems. The broader FEWSION research team contributed the data-intensive maps it has produced of these coupled human-natural systems. NAU assistant research professor Richard Rushforth, the lead data scientist on FEWSION, also co-authored the study.

Beef and dairy production depleting water supply

The study set out to assess river flow depletion across the US, identify the factors driving this depletion and evaluate options to reduce vulnerability to water shortages. The researchers estimate that two-thirds of the cattle feed being irrigated from western US rivers ends up as beef products, with the remainder going to dairy products.

"The groundbreaking maps produced by FEWSION made it possible to link river depletion through the supply chain to irrigated alfalfa and hay and to beef and dairy production, then to urban consumers of beef and dairy in each city and county in the US," Ruddell said.

According to the study, the team's findings "led to closer examination of the water use and ecological impacts associated with irrigation of cattle-feed crops. We pinpointed locations where these crops were being grown and modelled their associated depletion of river flow in local sub-watersheds. We then conducted an international supply-chain analysis to identify the locations where cattle-feed crops are transported and where the resulting beef products are consumed, thereby specifically linking end consumers of beef to effects on rivers. We subsequently explored the benefits and consequences of reduced feed-crop production and beef consumption through the lenses of water security, river ecosystem health, food security and agricultural economies."

"We're using a lot of water to grow the cows that are the source of our burgers, steaks and milk," Ruddell points out. "In the Colorado River basin, that cattle feed water use is nearly three times greater than all the water used for urban, industrial and electrical power purposes combined."

Along with the study's lead author and FEWSION contributor Brian Richter, Ruddell was surprised by some of their findings.

"I can hardly believe that such a large fraction of our western water problems are linked to irrigation of cattle feed, or that such a large fraction of our western water problems could be fixed with a single prescription--fallowing. It's rare that science clearly finds a 'silver bullet' that solves such a big problem so well, and so affordably," Ruddell said.

"Although the idea for this study of the US food energy and water system was proposed as part of the FEWSION project," he noted, "the roots of the ideas go back decades and involve many of the pioneers of river science and environmental sustainability--including Brian Richter, who is one of the founders of the science of river management for environmental flows. It takes a long time, generous research funding, and a broad team with diverse interdisciplinary skills for synthetic ideas like this to become a reality."

Water security will depend on collaboration, choice, policy

Scientists from 12 universities worldwide collaborated on the study, including Columbia University, Baylor University, the National University of Singapore, Nanjing Audit University and the University of Twente.

Ultimately, they conclude, "Water security and river health in the western US will depend on the willingness of urban and rural water users to collaborate in design of demand-management strategies, the ability of political leaders to secure funding to implement those strategies and the willingness of beef and dairy consumers to reduce their consumption or select products that do not depend on irrigated cattle-feed crops."

"My favorite food is cheeseburgers!" Ruddell admitted. "Individual choice, in addition to collective politics and policy, is important here. We need to be willing to pay a little more for more sustainable beef and dairy products, and we must strongly support politicians and policies that are willing to invest in solutions to the western water problem--including infrastructure, environmental flows and smart economic solutions like fallowing. Act with your votes and with your dollars. This is a problem we can afford to solve!"

Credit: 
Northern Arizona University

Radionuclide levels in freshwater fish differ between lakes and rivers

image: When it comes to fishing, risk management should be conducted separately for rivers and lakes, for greater accuracy.

Image: 
NIES

In 2011, when the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident occurred, radioactive materials leaked out into the surrounding land and water bodies, which became highly contaminated. Consequently, to ensure no imminent risks to the health and safety of the people living in the region, fishing in lakes and rivers in the area was restricted, with no indication of when the ban would be lifted. Scientific efforts to measure the contamination levels of the natural resources of the region, and to predict when it will become safe to use them, began soon after the incident and have been ongoing. Research--conducted in the aftermath of the FDNPP incident and of earlier accidents such as Chernobyl--has so far determined the biotic and abiotic factors affecting the accumulation of radionuclides in fish. The insights thus gained have helped predict and manage contamination in the environment at Fukushima.

But what remains to be studied is whether these underlying factors differ among ecosystems, and if they do, then how. Addressing this question, a group of scientists from the National Institute for Environmental Studies, Japan, led by Dr Yumiko Ishii, analyzed monitoring data on 30 species of fish and other aquatic organisms, collected from five rivers and three lakes in Fukushima two to four years after the FDNPP accident. In their study, published in Journal of Environmental Radioactivity, they statistically correlated radiocesium measurements with a number of biotic and abiotic factors. Radiocesium, particularly cesium-137, has a long half-life, or decay period, of about 30 years, and is the primary contaminant in the area. As Dr Ishii explains: "After the FDNPP accident, radiocesium has become a major contaminant in Fukushima, and the risk of exposure to its radiation has become a topic of considerable concern."

The factors that the scientists considered were fish characteristics--feeding habit, body size, and habitat--and water chemistry--salinity, total organic carbon, and suspended solids concentration. Their analysis revealed that the factors affecting radiocesium levels in riverine organisms did not necessarily influence radiocesium levels in organisms from lakes. Specifically, suspended solids concentration, total organic carbon, and salinity were significant factors in rivers, but not in lakes. Feeding habits had a major influence in the case of piscivorous fish in lakes, but not in rivers; this was evident from the fact that significant biomagnification of radiocesium (i.e., the increase in its concentration as it travels up the food chain) was observed only in lakes. Lastly, fish size had a noticeable influence in both lakes and rivers.

Overall, these findings show that biotic and abiotic factors affecting radionuclide accumulation in fish are clearly dependent on the ecosystem--and they differ between lakes and rivers. The findings of this study could potentially lead to the implementation of better and more efficient environmental disaster response strategies in the future. As Dr Ishii concludes, "Considering lakes and rivers separately when looking at the effects of radioactive contamination will lead to better and more accurate environmental risk management."

Credit: 
National Institute for Environmental Studies

Biometric devices help pinpoint factory workers' emotions and productivity

image: A time series of subjects' emotional status. Green indicates happiness, red indicates anger, and yellow indicates relaxation. The blue bar below shows the subject's conversation volume over time. The horizontal axis represents time, and the vertical axis represents emotion and conversation volume in each time window. The gray portions indicate neutral emotion or periods where measurement could not be performed well due to poor contact with the device.

Image: 
Yoshihiko Kadoya

Happiness, as measured by a wearable biometric device, was closely related to productivity among a group of factory workers in Laos, reveals a recent study.

The team of researchers from the School of Economics at Hiroshima University conducted a study to examine relationships between toy painters' productivity and on-the-job emotional states.

While employee productivity has already been linked to job conditions, mental health, and other demographic factors, this study adds to a deeper understanding of how emotional states affect productivity.

Professor Yoshihiko Kadoya, the lead researcher on the paper, said the findings have implications for both operational and human resources strategies.

"Organizations need to consider employees' emotionality when producing workflow designs that could help ensure a pleasant working environment," he said.

In the study, 15 workers answered a questionnaire and wore a device on their wrist with built-in sensors to detect movement, pulse waves, environmental ultraviolet light, body temperature, and sound, through which it continuously recorded physical activity, beat-to-beat pulse intervals, skin temperature, and sleep. The device, the Silmee(TM) W20, is produced by TDK Corporation (Tokyo, Japan).

Employees' emotional states were measured over three working days by analyzing beat-to-beat pulse intervals with custom software developed by NEC Corporation (Tokyo, Japan). The researchers followed a common model in the field--Russell's circumplex model--to classify employees' emotion into four states: happy, angry, relaxed, and sad.

Using a random effect panel regression model, they found people's happy emotional state was positively related to their productivity. Meanwhile, no other emotional states were found to be related to productivity.

"The use of wearable biometric devices, which can track employees' emotional states, provides an opportunity to examine more objective components of the emotion-productivity link," Kadoya adds.

The study's limitations included the possibility of device errors, the number of observations throughout the day, and the gender distribution (14 of the 15 workers identified as female); therefore, the results should not be over-generalized. In the future, however, the researchers hope to apply similar methods to explore the links between emotional states and different types of work.

Credit: 
Hiroshima University

Youth exposure to tobacco outlets and cigarette smoking

A new study led by researchers at the Prevention Research Center of the Pacific Institute for Research and Evaluation examines how exposure to tobacco outlets affects same-day smoking and the number of cigarettes consumed, using real-time data from 100 youth participants aged 16 to 20.

For two weeks, participants carried GPS-enabled smartphones that recorded their location at one-minute intervals and prompted them to respond to brief daily surveys.

The measurements focused on any cigarette smoking by youth on a given day, the number of cigarettes smoked, the number of tobacco outlets within 100 m of participants' activity space, the number of minutes participants spent within 100 m of outlets each day, and demographic characteristics.

The results indicate that day-to-day exposure to tobacco outlets within youth activity space -- or the broader environments where youths spend their time -- is not related to whether a youth smokes a cigarette on a given day, but it is associated with the number of cigarettes smoked on that day.

Says lead author, Dr. Sharon Lipperman-Kreda: "The results of this study go beyond previous research and highlight the importance of policies to regulate youth exposure to tobacco outlets beyond residential or school neighborhoods. Regulating exposure to outlets limits access and availability of cigarettes through retail outlets for the youth population and, in particular, for youth in socially disadvantaged areas who encounter high exposure to tobacco outlets in their daily activity spaces."

Credit: 
Pacific Institute for Research and Evaluation

Length of pregnancy alters the child's DNA

Researchers from Karolinska Institutet in Sweden have together with an international team mapped the relationship between length of pregnancy and chemical DNA changes in more than 6,000 newborn babies. For each week's longer pregnancy, DNA methylation changes in thousands of genes were detected in the umbilical cord blood. The study is published in Genome Medicine.

Premature birth, that is, birth before 37 completed weeks of pregnancy, is common. Between 5 and 10% of all children in the world are born prematurely. Most children will develop and grow normally, but premature birth is also linked to respiratory and lung disease, eye problems and neurodevelopmental disorders. This is especially true for children who are born very or extremely prematurely. During the fetal period, epigenetic processes, i.e., chemical modification of the DNA, are important for controlling development and growth. One such epigenetic factor is DNA methylation, which in turn affects the degree of gene activation and how much of a particular protein is formed.

"Our new findings indicate that these DNA changes may influence the development of fetal organs," says Simon Kebede Merid, first author of the study and PhD student at Karolinska Institutet, Department of Clinical Science and Education, Södersjukhuset.

The majority of the DNA methylation changes observed at birth did not persist into childhood, but for 17% of them the levels were completely stable from birth to adolescence. For certain genes, the levels you are born with thus track with age.

"Now we need to investigate whether the DNA changes are linked to the health problems of those born prematurely," says Professor Erik Melén, at the Department of Clinical Science and Education, Södersjukhuset.

Epigenetics is a hot research topic that links genes, the environment and health. This work was done within the international Pregnancy and Childhood Epigenetics (PACE) consortium and represents contributions from 26 studies. Professor Melén's group also contributed to the first PACE paper, which showed that a mother's smoking during pregnancy changes DNA in newborns, and led two PACE studies showing the effects of air pollution. Links to diseases such as asthma, allergy, obesity and even aging have also been shown.

"We hope that our new findings will contribute valuable knowledge about fetal development, and in the long term new opportunities for better care of premature babies to avoid complications and adverse health effects," says Erik Melén.

Credit: 
Karolinska Institutet

Mapping childhood malnutrition

image: Prevalence of stunting in children under five in low- and middle-income countries (LMICs) (2000-2017).

Image: 
Local Burden of Disease Child Growth Failure Collaborators / <em>Nature</em> / CC BY 4.0

The scope of childhood malnutrition has decreased since 2000, although millions of children under five years of age are still undernourished and, as a result, have stunted growth. An international team of researchers analysed the scope of global childhood malnutrition in 2000 and 2017, and estimated the probability of achieving the World Health Organization Global Nutrition Targets by 2025.

According to a UN report, in 2018, one out of nine people in the world experienced hunger. The total number of hungry people exceeded 821 million globally, of which almost 514 million lived in Asia, over 256 million in Africa, and 42 million in Latin America and the Caribbean.

World Health Organization data as of 2018 show that almost half (45%) of mortality among children under the age of 5 is due to malnutrition. Some 3.1 million children die of hunger annually. Malnutrition leads to child growth failure (CGF), which is expressed as stunting, wasting, and underweight.

In addition to risks of literally dying of starvation, CGF causes cognitive and physical developmental impairments that can lead to later cardiovascular disease, reduced intellectual ability and school attainment, as well as reduced economic productivity in adulthood.

'Childhood malnutrition is an essential reason for children's vulnerability to infections and, accordingly, their high mortality,' said Vasily Vlassov, Professor in the HSE Department of Health Care Administration and Economics and one of the study's authors. 'This is not a temporary suffering in childhood, but a tragedy for the whole future life. Malnutrition decreases an individual's ability to learn.'

CGF is spread unevenly, with 99% of hungry children living in 105 low- and middle-income countries, most of which are located in Africa and Asia.

Russia, along with many other upper-middle-income countries, was not included in the study because, according to Prof. Vlassov, serious childhood hunger there is a rare phenomenon and not a threat to public health.

Severe malnutrition leads to stunting. Even though estimated childhood stunting prevalence decreased from 36% to 26% over 17 years in the countries analysed in the report, in 2017, more than 176 million children were shorter than medical standards specify for their age. Half of them lived in India, Pakistan, Nigeria, and China.

In the 21st century, countries of Central America and the Caribbean, North Africa, and East Asia achieved the most progress in fighting childhood stunting. In these regions, estimated stunting prevalence of at least 50% in 2000 had reduced to 30% or less by 2017. In sub-Saharan regions, Central and South Asia, as well as Oceania, up to 40% of children under five were affected by stunting in 2017.

Wasting, or a low body weight index, was diagnosed in 58.3 million children in the 105 countries in 2017. This is 2% less than in 2000. On average, about 6.5% of children in these countries suffered from wasting. Most of them live in India, Pakistan, Bangladesh, and Indonesia. The highest shares of children with wasting (up to 20%) are in Africa, in a band of countries stretching from Mauritania to Sudan, as well as in South Sudan, Ethiopia, Kenya, and Somalia.

According to data for 2017, 13% of children are underweight for their age. In 2000, their share reached almost 20%. Researchers observed the most significant improvements in this indicator in Central and South America, sub-Saharan Africa, North Africa and Southeast Asia. Central Asia and Central Africa remain troubled regions.

The World Health Organization aims to reduce childhood stunting by 40% by 2025. According to the researchers, this is quite achievable in Central America and the Caribbean, South America, North Africa, and East Asia, despite regions in some of these countries continuing to have high shares of children who suffer stunting and wasting.

Meanwhile, in many countries analysed in the study, the probability of achieving the WHO targets is low, especially as it relates to stunting and wasting. This primarily concerns sub-Saharan regions, South Asia, and Oceania.

The global community joins forces to fight malnutrition within the framework of international organizations. The World Food Programme (UN WFP), which distributes 12.6 billion meals in 80 countries every year, is considered one of the leaders. In addition to direct food aid, WFP carries out projects aimed at development and restoring living conditions in areas suffering from conflict and natural disasters.

The UN goals for 2030 include achieving a zero level of hunger. To do so, money is being invested in agricultural development and production. In particular, small farms capable of providing food for local markets are being created.

In addition, the UN is implementing technologies that allow crop yields to be increased by means of conserving soil and water resources, protecting plants from pests, and using new breeds of plants that are resistant to disease and are enriched with essential vitamins and minerals.

Credit: 
National Research University Higher School of Economics

How three genes rule plant symbioses

image: Mycorrhizal and plant symbiosis.

Image: 
Pierre-Marc Delaux

For billions of years, life on Earth was restricted to aquatic environments: the oceans, seas, rivers and lakes. Then, 450 million years ago, the first plants colonized land, evolving in the process multiple types of beneficial relationships with microbes in the soil.

These relationships, known as symbioses, allow plants to access additional nutrients. The most intimate among them are intracellular symbioses that result in the accommodation of microbes inside plant cells.

A study published in Nature Plants, led by scientists from the John Innes Centre in the UK and the University of Toulouse/CNRS in France, describes the discovery of a common genetic basis for all these symbioses.

It is hypothesised that the colonization of land by plants was made possible through a type of symbiosis that plants form with a group of fungi called mycorrhizal fungi. Even today 80% of plants we find on land can form this mycorrhizal symbiosis. Plants have also evolved the ability to engage in intracellular symbiosis with a large diversity of other microbes.

Over the past two decades, studies on mycorrhizal symbiosis and another type of symbiosis, formed by legumes such as peas and beans with soil bacteria, have allowed the identification of a dozen plant genes that are required for the recognition of beneficial microbes and their accommodation inside plant cells. By contrast, other types of intracellular symbioses have been poorly studied.

To address this, the team compared the genomes of nearly 400 plant species to understand what is unique to those that can form intracellular symbioses. Surprisingly, they discovered that three genes are shared exclusively by plants forming intracellular symbiosis and lost in plants unable to form this type of beneficial relationship.

"Our study demonstrates that diverse types of intracellular symbioses that plants form with different symbiotic partners are built on top of a conserved genetic program," said Dr Guru Radhakrishnan, lead author of the study and a BBSRC Discovery Fellow at the John Innes Centre.

The research, led by Dr Radhakrishnan in the UK and Dr Pierre-Marc Delaux in France, was conducted as part of the Engineering Nitrogen Symbiosis for Africa (ENSA) project sponsored by the Bill & Melinda Gates foundation.

ENSA is an international collaboration aiming at transferring naturally occurring symbioses to cereal crops to limit the use of chemical fertilizers and to improve yield in small-holder farms of sub-Saharan Africa where access to these fertilizers is limited.

"By demonstrating that different plant symbioses share a common genetic basis, our ambitious goal has become more realistic," says Dr Radhakrishnan.

The paper, "An ancestral signaling pathway is conserved in plant lineages forming intracellular symbioses", is published in Nature Plants.

Credit: 
John Innes Centre