
When pregnant moms are stressed out, babies' brains suffer

Image: Catherine Limperopoulos, Ph.D., director of the Center for the Developing Brain at Children's National and the study's corresponding author. (Credit: Children's National Hospital)

WASHINGTON (Jan. 13, 2020) -- Learning that their unborn baby has congenital heart disease causes such pronounced stress, anxiety and depression in pregnant women that their fetuses show impaired development in key brain regions before birth, according to research published online Jan. 13, 2020, in JAMA Pediatrics.

While additional research is needed, the Children's National Hospital study authors say their unprecedented findings underscore the need for universal screening for psychological distress as a routine part of prenatal care and taking other steps to support stressed-out pregnant women and safeguard their newborns' developing brains.

"We were alarmed by the high percentage of pregnant women with a diagnosis of a major fetal heart problem who tested positive for stress, anxiety and depression," says Catherine Limperopoulos, Ph.D., director of the Center for the Developing Brain at Children's National and the study's corresponding author. "Equally concerning is how prevalent psychological distress is among pregnant women generally. We report for the first time that this challenging prenatal environment impairs regions of the fetal brain that play a major role in learning, memory, coordination, and social and behavioral development, making it all the more important for us to identify these women early during pregnancy to intervene," Limperopoulos adds.

Congenital heart disease (CHD), a structural problem with the heart, is the most common birth defect.

Still, it remains unclear how exposure to maternal stress impacts brain development in fetuses with CHD.

The multidisciplinary study team enrolled 48 women whose unborn fetuses had been diagnosed with CHD and 92 healthy women with uncomplicated pregnancies. Using validated screening tools, they found:

65% of pregnant women expecting a baby with CHD tested positive for stress

27% of women with uncomplicated pregnancies tested positive for stress

44% of pregnant women expecting a baby with CHD tested positive for anxiety

26% of women with uncomplicated pregnancies tested positive for anxiety

29% of pregnant women expecting a baby with CHD tested positive for depression, and

9% of women with uncomplicated pregnancies tested positive for depression.

All told, they performed 223 fetal magnetic resonance imaging sessions for these 140 fetuses between 21 and 40 weeks of gestation. They measured brain volume in cubic centimeters for the total brain as well as volumetric measurements for key regions such as the cerebrum, cerebellum, brainstem, and left and right hippocampus.

Maternal stress and anxiety in the second trimester were associated with smaller left hippocampi and smaller cerebellums only in pregnancies affected by fetal CHD. What's more, specific regions -- the hippocampus head and body and the left cerebellar lobe -- were more susceptible to stunted growth. The hippocampus is key to memory and learning, while the cerebellum controls motor coordination and plays a role in social and behavioral development.

The hippocampus is a brain structure that is known to be very sensitive to stress. The timing of the CHD diagnosis may have occurred at a particularly vulnerable time for the developing fetal cerebellum, which grows faster than any other brain structure in the second half of gestation, particularly in the third trimester.

"None of these women had been screened for prenatal depression or anxiety. None of them were taking medications. And none of them had received mental health interventions. In the group of women contending with fetal CHD, 81% had attended college and 75% had professional educations, so this does not appear to be an issue of insufficient resources," Limperopoulos adds. "It's critical that we routinely do these screenings and provide pregnant women with access to interventions to lower their stress levels. Working with our community partners, Children's National is doing just that to help reduce toxic prenatal stress, for the health of both the mother and the future newborn. We hope this becomes standard practice elsewhere."

Adds Yao Wu, Ph.D., a research associate working with Limperopoulos at Children's National and the study's lead author: "Our next goal is exploring effective prenatal cognitive behavioral interventions to reduce psychological distress felt by pregnant women and improve neurodevelopment in babies with CHD."

Credit: 
Children's National Hospital

Study sheds light on link between cannabis, anxiety and stress

Image: Sachin Patel, M.D., Ph.D., the paper's corresponding author and director of the Division of General Psychiatry at Vanderbilt University Medical Center. (Credit: Vanderbilt University Medical Center)

A molecule produced by the brain that activates the same receptors as marijuana is protective against stress by reducing anxiety-causing connections between two brain regions, Vanderbilt University Medical Center researchers report.

This finding, published today in Neuron, could help explain why some people use marijuana when they're anxious or under stress. It could also mean that pharmacologic treatments that increase levels of this molecule, known as "2-AG," in the brain could regulate anxiety and depressive symptoms in people with stress-related anxiety disorders, potentially avoiding a reliance on medical marijuana or similar treatments.

When mice are exposed to acute stress, the suppression by 2-AG of an anxiety-producing connection between the amygdala and the frontal cortex temporarily disappears, causing the emergence of anxiety-related behaviors.

"The circuit between the amygdala and the frontal cortex has been shown to be stronger in individuals with certain types of anxiety disorders. As people or animals are exposed to stress and get more anxious, these two brain areas glue together, and their activity grows stronger together," said Sachin Patel, MD, PhD, the paper's corresponding author and director of the Division of General Psychiatry at Vanderbilt University Medical Center.

"We might predict there's a collapse in the endocannabinoid system, which includes 2-AG, in the patients that go on to develop a disorder. But, not everyone develops a psychiatric disorder after trauma exposure, so maybe the people who don't develop a disorder are able to maintain that system in some way. Those are the things we're interested in testing next."

The study also found that signaling between the amygdala and the frontal cortex can be strengthened through genetic manipulations that compromise endogenous cannabinoid signaling in this pathway, causing mice to become anxious even without exposure to stress in some cases. This finding demonstrates that the cannabinoid signaling system that suppresses information flow between these two brain regions is critical for setting the level of anxiety in animals.

"We don't know how or why this cannabinoid signaling system disappears or disintegrates in response to stress, but it results in the strengthening of the connection between these two regions and heightened anxiety behaviors in mice. Understanding what's causing that compromise, what causes the signaling system to return after a few days, and many other questions about the molecular mechanisms by which this is happening are things we're interested in following up on," said Patel, also the James G. Blakemore Professor of Psychiatry and Behavioral Sciences, Molecular Physiology and Biophysics and Pharmacology.

David Marcus, Neuroscience graduate student and first author on the paper, and Patel are also interested in how the system reacts to more chronic forms of stress and determining whether there are other environmental exposures that compromise or enhance this system to regulate behavior.

Credit: 
Vanderbilt University Medical Center

Nano-objects of desire: Assembling ordered nanostructures in 3D

Image: A schematic of the programmable assembly of 3-D ordered nanostructures from material voxels that can carry inorganic or organic nanoparticles with different functions, such as light emitters and absorbers, proteins, and enzymes with chemical activity. Material voxels are fabricated from DNA and nano-objects of different kinds, and their assembly is guided by the voxel design and DNA-programmable interactions. (Credit: Brookhaven National Laboratory)

UPTON, NY--Scientists have developed a platform for assembling nanosized material components, or "nano-objects," of very different types--inorganic or organic--into desired 3-D structures. Though self-assembly (SA) has successfully been used to organize nanomaterials of several kinds, the process has been extremely system-specific, generating different structures based on the intrinsic properties of the materials. As reported in a paper published today in Nature Materials, their new DNA-programmable nanofabrication platform can be applied to organize a variety of 3-D materials in the same prescribed ways at the nanoscale (billionths of a meter), where unique optical, chemical, and other properties emerge.

"One of the major reasons why SA is not a technique of choice for practical applications is that the same SA process cannot be applied across a broad range of materials to create identical 3-D ordered arrays from different nanocomponents," explained corresponding author Oleg Gang, leader of the Soft and Bio Nanomaterials Group at the Center for Functional Nanomaterials (CFN)--a U.S. Department of Energy (DOE) Office of Science User Facility at Brookhaven National Laboratory--and a professor of Chemical Engineering and of Applied Physics and Materials Science at Columbia Engineering. "Here, we decoupled the SA process from material properties by designing rigid polyhedral DNA frames that can encapsulate various inorganic or organic nano-objects, including metals, semiconductors, and even proteins and enzymes."

The scientists engineered synthetic DNA frames in the shape of a cube, octahedron, and tetrahedron. Inside the frames are DNA "arms" that only nano-objects with the complementary DNA sequence can bind to. These material voxels--the integration of the DNA frame and nano-object--are the building blocks from which macroscale 3-D structures can be made. The frames connect to each other regardless of what kind of nano-object is inside (or not) according to the complementary sequences they are encoded with at their vertices. Depending on their shape, frames have a different number of vertices and thus form entirely different structures. Any nano-objects hosted inside the frames take on that specific frame structure.

To demonstrate their assembly approach, the scientists selected metallic (gold) and semiconducting (cadmium selenide) nanoparticles and a bacterial protein (streptavidin) as the inorganic and organic nano-objects to be placed inside the DNA frames. First, they confirmed the integrity of the DNA frames and formation of material voxels by imaging with electron microscopes at the CFN Electron Microscopy Facility and the Van Andel Institute, which has a suite of instruments that operate at cryogenic temperatures for biological samples. They then probed the 3-D lattice structures at the Coherent Hard X-ray Scattering and Complex Materials Scattering beamlines of the National Synchrotron Light Source II (NSLS-II)--another DOE Office of Science User Facility at Brookhaven Lab. Columbia Engineering Bykhovsky Professor of Chemical Engineering Sanat Kumar and his group performed computational modeling revealing that the experimentally observed lattice structures (based on the x-ray scattering patterns) were the most thermodynamically stable ones that the material voxels could form.

"These material voxels allow us to begin to use ideas derived from atoms (and molecules) and the crystals that they form, and port this vast knowledge and database to systems of interest at the nanoscale," explained Kumar.

Gang's students at Columbia then demonstrated how the assembly platform could be used to drive the organization of two different kinds of materials with chemical and optical functions. In one case, they co-assembled two enzymes, creating 3-D arrays with a high packing density. Though the enzymes remained chemically unchanged, they showed about a fourfold increase in enzymatic activity. These "nanoreactors" could be used to manipulate cascade reactions and enable the fabrication of chemically active materials. For the optical material demonstration, they mixed two different colors of quantum dots--tiny nanocrystals that are being used to make television displays with high color saturation and brightness. Images captured with a fluorescence microscope showed that the formed lattice maintained color purity below the diffraction limit (wavelength) of light; this property could allow for significant resolution improvement in various display and optical communication technologies.

"We need to rethink how materials can be formed and how they function," said Gang. "Material redesign may not be necessary; simply packaging existing materials in new ways could enhance their properties. Potentially, our platform could be an enabling technology 'beyond 3-D printing manufacturing' to control materials at much smaller scales and with greater material variety and designed compositions. Using the same approach to form 3-D lattices from desired nano-objects of different material classes, integrating those that would otherwise be considered incompatible, could revolutionize nanomanufacturing."

Credit: 
DOE/Brookhaven National Laboratory

How the solar system got its 'Great Divide,' and why it matters for life on Earth

Scientists, including those from the University of Colorado Boulder, have finally scaled the solar system's equivalent of the Rocky Mountain range.

In a study published today in Nature Astronomy, researchers from the United States and Japan unveil the possible origins of our cosmic neighborhood's "Great Divide." This well-known schism may have separated the solar system just after the sun first formed.

The phenomenon is a bit like how the Rocky Mountains divide North America into east and west. On one side are the "terrestrial" planets, such as Earth and Mars. They are made up of fundamentally different types of materials than the more distant "jovians," such as Jupiter and Saturn.

"The question is: How do you create this compositional dichotomy?" said lead author Ramon Brasser, a researcher at the Earth-Life Science Institute (ELSI) at the Tokyo Institute of Technology in Japan. "How do you ensure that material from the inner and outer solar system didn't mix from very early on in its history?"

Brasser and coauthor Stephen Mojzsis, a professor in CU Boulder's Department of Geological Sciences, think they have the answer, and it may just shed new light on how life originated on Earth.

A sun disk holds vital clues

The duo suggests that the early solar system was partitioned into at least two regions by a ring-like structure that formed a disk around the young sun. This disk might have held major implications for the evolution of planets and asteroids, and even the history of life on Earth.

"The most likely explanation for that compositional difference is that it emerged from an intrinsic structure of this disk of gas and dust," Mojzsis said.

Mojzsis noted that the Great Divide, a term that he and Brasser coined, does not look like much today. It is a relatively empty stretch of space that sits near Jupiter, just beyond what astronomers call the asteroid belt.

But you can still detect its presence throughout the solar system. Move sunward from that line, and most planets and asteroids tend to carry relatively low abundances of organic molecules. Go the other direction toward Jupiter and beyond, however, and a different picture emerges: Almost everything in this distant part of the solar system is made up of materials that are rich in carbon.

This dichotomy "was really a surprise when it was first found," Mojzsis said.

Many scientists assumed that Jupiter was the agent responsible for that surprise. The thinking went that the planet is so massive that it may have acted as a gravitational barrier, preventing pebbles and dust from the outer solar system from spiraling toward the sun.

But Mojzsis and Brasser were not convinced. The scientists used a series of computer simulations to explore Jupiter's role in the evolving solar system. They found that while Jupiter is big, it was probably never big enough early in its formation to entirely block the flow of rocky material from moving sunward.

"We banged our head against the wall," Brasser said. "If Jupiter wasn't the agent responsible for creating and maintaining that compositional dichotomy, what else could be?"

A solution in plain sight

For years, scientists operating an observatory in Chile called the Atacama Large Millimeter/submillimeter Array (ALMA) had noticed something unusual around distant stars: Young stellar systems were often surrounded by disks of gas and dust that, in infrared light, looked a bit like a tiger's eye.

If a similar ring existed in our own solar system billions of years ago, Brasser and Mojzsis reasoned, it could theoretically be responsible for the Great Divide.

That's because such a ring would create alternating bands of high- and low-pressure gas and dust. Those bands, in turn, might pull the solar system's earliest building blocks into several distinct sinks--one that would have given rise to Jupiter and Saturn, and another to Earth and Mars.

In the mountains, "the Great Divide causes water to drain one way or another," Mojzsis said. "It's similar to how this pressure bump would have divided material" in the solar system.

But, he added, there's a caveat: That barrier in space likely was not perfect. Some outer solar system material may still have climbed across the divide. And those fugitives could have been important for the evolution of our own world.

"Those materials that might go to the Earth would be those volatile, carbon-rich materials," Mojzsis said. "And that gives you water. It gives you organics."

The rest is Earth history.

Credit: 
University of Colorado at Boulder

Rising temperatures may cause over 2,000 fatal injuries per year in the US, predict researchers

A 2 degrees Celsius rise in temperatures could result in around 2,100 additional deaths from injuries every year in the United States.

This is the finding of research from Imperial College London, Columbia University and Harvard University, published in the journal Nature Medicine.

In the study, funded by the US Environmental Protection Agency and the Wellcome Trust, the researchers calculated the number of additional fatal injuries that would occur in the US if temperatures rose by 1.5 and 2 degrees Celsius. The results revealed an additional 1,600 and 2,100 fatal injuries every year in these two scenarios.

Most of these deaths would be among young men aged 15 to 34. The three states with potentially the highest number of deaths would be California, Texas, and Florida.

The researchers studied the number of deaths from injuries a year in every state and county in the mainland United States (excluding Hawaii and Alaska) between 1980 and 2017. These injuries were classed as unintentional, which include those from transport, falls and drowning, and intentional, which include assault and suicide.

The team then tracked unusual, or anomalous, temperature changes in every month in every county in the mainland United States over this 38-year period. By comparing unusual temperatures with injury records, the team estimated the rise in deaths from injuries associated with rising global temperatures triggered by climate change.

"Most of the additional deaths seen during times of unusual temperature rises were among young men, and were caused by transport accidents, suicides, drownings and violence," explained Professor Majid Ezzati, senior author from J-IDEA, the Abdul Latif Jameel Institute for Disease and Emergency Analytics at Imperial College London. "These predictions suggest we should expect to see more deaths from transport accidents, suicides, drownings and violence as temperatures rise. These new results show how much climate change can affect young people. We need to respond to this threat with better preparedness in terms of emergency services, social support and health warnings."

In the research, the team used data from the National Center for Health Statistics to calculate the number of deaths from injuries between 1980 and 2017. This revealed 4.1 million boys and men and 1.8 million girls and women died from an injury during this 38-year period. Transport, falls, drownings, assault and suicide accounted for the majority of these deaths.

Using a statistical model, the researchers then calculated the number of additional deaths from injuries caused by unusual temperatures in different months of the year. The biggest effects of warm temperatures were on the risk of dying from drowning and transport accidents, which the researchers say is due to increased swimming, more driving and increased alcohol consumption in warm weather.

The researchers then used this model to predict the number of additional deaths for increases in average temperatures of 1.5 and 2 degrees Celsius. The research group chose these temperature rises because the Paris Climate Agreement pledged to keep global warming well below 2 degrees Celsius, with efforts to limit it to 1.5 degrees.

The results suggest more than 1,200 of the 1,600 excess deaths associated with a 1.5 degrees Celsius rise would be in males. However, among older men and women, warmer winter months were associated with a reduction in deaths from falls.
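The two published scenarios can be compared with a back-of-envelope check. This is an illustrative sketch only, not the authors' statistical model (which fits monthly temperature anomalies by county); it simply shows that the reported totals imply similar per-degree rates, consistent with a roughly linear dose-response:

```python
# Illustrative only: compare annual excess injury deaths per degree
# Celsius of warming across the two published scenarios.
def excess_deaths_per_degree(excess_deaths, warming_deg_c):
    """Annual excess deaths per degree Celsius of warming."""
    return excess_deaths / warming_deg_c

rate_at_1p5 = excess_deaths_per_degree(1600, 1.5)
rate_at_2p0 = excess_deaths_per_degree(2100, 2.0)
print(round(rate_at_1p5), round(rate_at_2p0))  # 1067 1050
```

The near-equal per-degree rates are what one would expect if excess deaths scaled approximately linearly with warming over this range.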

There were also rises in the risk of dying from suicide and assault in warmer temperatures, though not as large as those seen for drowning and transport accidents.

The researchers say the reasons for these increases are still not fully understood. One possible explanation could be that people spend more time outdoors in hot weather, with more chance of confrontation. People also tend to be more agitated in hot weather, and perhaps drink more alcohol - which could all lead to an increased number of assaults. In terms of suicides, previous research has suggested high temperatures are associated with higher levels of mental distress, especially in young people.

Dr Robbie Parks, lead author from Columbia University's Earth Institute, said: "Our work highlights how deaths from injuries including assaults, suicides, transport and drowning deaths currently rise with warm temperatures, and could worsen with rising temperatures resulting from climate change, unless countered by social and health system infrastructure that mitigates these impacts."

Credit: 
Imperial College London

Boost to lung immunity following infection

The strength of the immune system in response to respiratory infections is constantly changing, depending on the history of previous, unrelated infections, according to new research from the Crick.

There are two types of immunity to infections. Adaptive immunity provides immune "memory", allowing a fast and strong immunological response when the same disease is encountered more than once. In contrast, innate immunity provides a broad and less specific first line of defence against all pathogens, and is vital to controlling infections the body has not experienced previously.

The study, published in Nature Immunology, found that after recovery from a respiratory infection, particular cells in the innate immune system in the lung are more effective, offering extra protection against new infections in the following weeks. Specifically, they found that mice given the flu virus were significantly less likely to catch a completely different bacterial infection a month later.

This heightened immunity is the result of how, during the initial infection, a particular type of immune cell travels from the bone marrow to the lungs and turns into a lung macrophage, a type of white blood cell. Once in the lungs, these cells produce cytokines, hormone-like molecules which cause inflammation and help fight pathogens.

As these special macrophages remain in a more reactive state in the lungs after the infection is resolved, they offer an extra layer of protection from future infections over the following weeks. However, over time the ability of these macrophages to produce high levels of cytokines disappears. This means that, after a couple of months, protection against infection decreases and eventually returns to the same level as in animals which had not previously been infected. The researchers believe this mechanism could also be true for humans.

"Flu is a serious disease, especially for vulnerable groups, and we're not suggesting that a flu infection is desirable. Rather, this research provides valuable insights into how a viral infection that is cleared quickly can continue to affect immunity for weeks afterwards, through long-term changes in innate immune cells. This could partially explain why our response to diseases can vary - what you could fight off with no symptoms one week, could have nasty effects a couple of weeks later," says Helena Aegerter, lead author and PhD student in the Immunoregulation Laboratory at the Crick.

The researchers plan to look further into how a history of infection affects the immune system, including in conditions such as asthma or chronic obstructive pulmonary disease, where the heightened inflammatory response explained in this research could worsen symptoms.

"Our immune system is a mosaic of protective mechanisms and the level of protection is not constant. This means it's important people take long-term precautions against infection where appropriate, such as winter flu vaccinations," says Andreas Wack, author and group leader in the Immunoregulation Laboratory.

Credit: 
The Francis Crick Institute

TESS dates an ancient collision with our galaxy

A single bright star in the constellation of Indus, visible from the southern hemisphere, has revealed new insights into an ancient collision early in the history of our galaxy, the Milky Way, with a smaller galaxy called Gaia-Enceladus.

An international team of scientists led by the University of Birmingham adopted the novel approach of applying the forensic characterisation of a single ancient, bright star called ν Indi as a probe of the history of the Milky Way. Stars carry "fossilized records" of their histories and hence the environments in which they formed. The team used data from satellites and ground-based telescopes to unlock this information from ν Indi. Their results are published in the journal Nature Astronomy.

The star was aged using its natural oscillations (asteroseismology), detected in data collected by NASA's recently launched Transiting Exoplanet Survey Satellite (TESS). Launched in 2018, TESS is surveying stars across most of the sky to search for planets orbiting the stars and to study the stars themselves. When combined with data from the European Space Agency (ESA) Gaia Mission, the detective story revealed that this ancient star was born early in the life of the Milky Way, but the Gaia-Enceladus collision altered its motion through our Galaxy.

Bill Chaplin, Professor of Astrophysics at the University of Birmingham and lead author of the study said: "Since the motion of ν Indi was affected by the Gaia-Enceladus collision, the collision must have happened once the star had formed. That is how we have been able to use the asteroseismically-determined age to place new limits on when the Gaia-Enceladus event occurred."

Co-author Dr Ted Mackereth, also from Birmingham, said: "Because we see so many stars from Gaia-Enceladus, we think it must have had a large impact on the evolution of our Galaxy. Understanding that is now a very hot topic in astronomy, and this study is an important step in understanding when this collision occurred."

Bill Chaplin added: "This study demonstrates the potential of asteroseismology with TESS, and what is possible when one has a variety of cutting-edge data available on a single, bright star."

The research clearly shows the strong potential of the TESS programme to draw together rich new insights about the stars that are our closest neighbours in the Milky Way. The research was funded by the Science and Technology Facilities Council and the European Research Council through the Asterochronometry project.

Credit: 
University of Birmingham

Global diets are converging, with benefits and problems

Research carried out by the University of Kent has shown that diets are changing in complex ways worldwide. International food supply patterns are supporting healthier diets in parts of the world, but causing underweight and obesity elsewhere. They are also having important effects on environmental sustainability, with potentially worrying consequences.

Dr James Bentham, Lecturer in Statistics at Kent's School of Mathematics, Statistics and Actuarial Science, led the research alongside Professor Majid Ezzati from the School of Public Health at Imperial College London and other UK and international colleagues. The researchers carried out the study by analysing food supply data for 171 countries from the 1960s to the 2010s.

The team discovered that South Korea, China and Taiwan have experienced the largest changes in food supply over the past five decades, with animal source foods such as meat and eggs, sugar, vegetables, seafood and oilcrops all becoming a much larger proportion of diet. In contrast, in many Western countries the supply of animal source foods and sugar has declined, particularly in high-income English-speaking countries such as the UK, US, Canada and Australia. The researchers also found that many countries around the world have seen an increase in vegetable-based diets. The sub-Saharan Africa region showed the least change, with a lack of diverse food supply, and this could be an explanation for the region's malnutrition.

The declines in diets based on animal source foods and sugar, and the corresponding increases in vegetable availability, indicate a possible trend towards more balanced and healthier diets in some parts of the world. However, in South Korea, China and Taiwan in particular, the increase in animal source food and sugar availability has occurred at the same time as a dramatic rise in obesity. These shifts in diet may also be having a substantial negative effect on the environment.

Dr Bentham said: 'There are clear shifts in global food supply, and these trends may be responsible for strong improvements in nutrition in some parts of the world. However, obesity remains a long-term concern, and we hope that our research will open doors to analysis of the health impacts of global diet patterns. Equally, we must also consider carefully the environmental impacts of these trends.'

Professor Ezzati added: 'Advances in science and technology, together with growing incomes, have allowed many nations to have access to a diversity of foods. We must harness these advances and set in place policies that provide healthier foods for people everywhere, especially those who can currently least afford them.'

Credit: 
University of Kent

College students use more marijuana in states where it's legal, but they binge drink less

Image credit: Oregon State University

CORVALLIS, Ore. - Marijuana use among college students has been trending upward for years, but in states that have legalized recreational marijuana, use has jumped even higher.

An Oregon State University study published today in Addiction shows that in states where marijuana was legalized by 2018, both occasional and frequent use among college students has continued to rise beyond the first year of legalization, suggesting an ongoing trend rather than a brief period of experimentation.

Overall, students in states with legal marijuana were 18% more likely to have used marijuana in the past 30 days than students in states that had not legalized the drug. They were also 17% more likely to have engaged in frequent use, defined as using marijuana on at least 20 of the past 30 days.

The differences between states with and without legalization escalated over time: Six years after legalization in early-adopting states, students were 46% more likely to have used marijuana than their peers in non-legalized states.

Between 2012 and 2018, overall usage rates increased from 14% to 17% in non-legalized states, but shot up from 21% to 34% in the earliest states to legalize the drug. Similar trends appeared in states that legalized marijuana more recently.
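Those relative increases can be reproduced from the reported rates; a quick check in Python (the percentages come from the release, the helper function is ours):

```python
def relative_change(before, after):
    """Relative change between two prevalence rates, as a fraction."""
    return (after - before) / before

# 30-day use rates reported in the release (percent of students, 2012 -> 2018)
non_legal = relative_change(14, 17)    # non-legalized states: ~0.21 (+21%)
early_legal = relative_change(21, 34)  # earliest legalizing states: ~0.62 (+62%)

print(f"non-legalized states: +{non_legal:.0%}")
print(f"early-legalizing states: +{early_legal:.0%}")
```

The relative jump in early-legalizing states is roughly three times that in non-legalized states, which is the contrast the study is quantifying.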

Conducted by Harold Bae from OSU's College of Public Health and Human Sciences and David Kerr from OSU's College of Liberal Arts, this is the first study of college students to look broadly at multiple states that have legalized recreational marijuana and to go beyond the first year following legalization.

It includes data from seven states and 135 colleges where marijuana was legalized by 2018 and from 41 states and 454 colleges where recreational use was not legal.

That scope allowed Bae and Kerr to examine trends in the earliest-adopting states as well as more recent adopters, though the data for the study are stripped of state- and college-identifying information, so the findings do not speak specifically to any one state or institution.

The data comes from the National College Health Assessment survey from 2008 to 2018, which asks about a wide range of health behaviors including drug and alcohol use and is administered anonymously to encourage students to respond more honestly. More than 850,000 students participated.

Looking at specific demographics, researchers found that the effect was stronger among older students ages 21-26 than among underage students ages 18-20; older students were 23% more likely to report having used marijuana than their peers in non-legalized states. The effect was also stronger among female students and among students living in off-campus housing, possibly because universities adhere to federal drug laws that still classify marijuana as an illegal substance.

"It's easy to look at the findings and think, 'Yeah, of course rates would increase,'" Kerr said. "But we need to quantify the effects these policy changes are having."

Furthermore, he said, researchers are not finding increases in adolescents' marijuana use following legalization. "So it is surprising and important that these young adults are sensitive to this law. And it's not explained by legal age, because minors changed too."

A recent companion study published in Addictive Behaviors in November by OSU doctoral candidate Zoe Alley along with Kerr and Bae examined the relationship between recreational marijuana legalization and college students' use of other substances.

Using the same dataset, they found that after legalization, students ages 21 and older showed a greater drop in binge drinking than their peers in states where marijuana was not legal. Binge drinking was defined as having five or more drinks in a single sitting within the previous two weeks.

Researchers have not yet tested any hypotheses as to why binge drinking fell, but they have some ideas.

An outside study previously found that illegal marijuana use decreases sharply when people hit 21, the point at which there is a sharp increase in alcohol use.

"When you're under 21, all substances are equally illegal," Alley said. "In most states, once you reach 21, a barrier that was in the way of using alcohol is gone, while it's intact for marijuana use. But when marijuana is legal, this dynamic is changed."

Binge drinking has been on the decline among college students in recent years, but dropped more in states that legalized marijuana than in states that did not.

"So in these two studies we saw changes after legalization that really differed by substance," Kerr said. "For marijuana we saw state-specific increases that went beyond the nationwide increases, whereas binge drinking was the opposite: a greater decrease in the context of nationwide decreases."

The magnitude of effect was much larger with marijuana than with any of the other substances, Bae added. "So the changes following recreational marijuana legalization were quite specific to cannabis use."

Future research is needed to see how those trends hold up over time, as additional states legalize marijuana and existing states continue to tweak their current policies, the researchers said.

Credit: 
Oregon State University

Stars need a partner to spin universe's brightest explosions

image: This is an artist's impression of gamma-ray burst with orbiting binary star.

Image: 
University of Warwick/Mark Garlick

When it comes to the biggest and brightest explosions seen in the Universe, University of Warwick astronomers have found that it takes two stars to make a gamma-ray burst.

New research solves the mystery of how stars spin fast enough to create conditions to launch a jet of highly energetic material into space, and has found that tidal effects like those between the Moon and the Earth are the answer.

The discovery, reported in Monthly Notices of the Royal Astronomical Society, was made using simulated models of thousands of binary star systems, that is, systems in which two stars orbit one another.

More than half of all stars are located in binary star systems, and this new research has shown that stars need to be in binary systems for these massive explosions to be created.

A long gamma-ray burst (GRB), the type examined in this study, occurs when a massive star about ten times the size of our sun goes supernova, collapses into a neutron star or black hole and fires a relativistic jet of material into space. Instead of the star collapsing radially inwards, it flattens down into a disc to conserve angular momentum. As the material falls inwards, that angular momentum launches it in the form of a jet along the polar axis.

But in order to form that jet of material, the star has to be spinning fast enough to launch material along the axis. This presents a problem because stars usually lose any spin they acquire very quickly. By modelling the behaviour of these massive stars as they collapse, the researchers have been able to constrain the factors that cause a jet to be formed.

They found that the effects of tides from a close neighbour - the same effect that has the Moon and the Earth locked together in their spin - could be responsible for spinning these stars at the rate needed to create a gamma-ray burst.

Gamma-ray bursts are the most luminous events in the Universe and are observable from Earth when their jet of material is pointed directly at us. This means that we only see around 10-20% of the GRBs in our skies.
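The 10-20% figure follows from the solid-angle geometry of a two-sided jet: a randomly oriented bipolar jet with half-opening angle θ points at us with probability 1 − cos θ. A minimal sketch of that calculation (the jet-geometry assumption is standard, not stated in the release):

```python
import math

def observable_fraction(theta_deg):
    """Fraction of randomly oriented bipolar jets pointed at us.
    Two polar caps of half-opening angle theta cover 2 * 2*pi*(1 - cos theta)
    out of 4*pi steradians, i.e. 1 - cos(theta)."""
    return 1 - math.cos(math.radians(theta_deg))

def opening_angle(fraction):
    """Half-opening angle (degrees) implied by an observable fraction."""
    return math.degrees(math.acos(1 - fraction))

print(opening_angle(0.10))  # ~26 degrees
print(opening_angle(0.20))  # ~37 degrees
```

Reading the quoted 10-20% backwards this way gives effective half-opening angles of roughly 26-37 degrees.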

Lead author Ashley Chrimes, a PhD student in the University of Warwick Department of Physics, said: "We're predicting what kind of stars or systems produce gamma-ray bursts, which are the biggest explosions in the Universe. Until now it's been unclear what kind of stars or binary systems you need to produce that result.

"The question has been how a star starts spinning, or maintains its spin over time. We found that the effect of a star's tides on its partner is stopping them from slowing down and, in some cases, it is spinning them up. They are stealing rotational energy from their companion, a consequence of which is that they then drift further away.

"What we have determined is that the majority of stars are spinning fast precisely because they're in a binary system."

The study uses a collection of binary stellar evolution models created by researchers from the University of Warwick and Dr J J Eldridge from the University of Auckland. Using a technique called binary population synthesis, the scientists are able to simulate this mechanism in a population of thousands of star systems and so identify the rare examples where an explosion of this type can occur.
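Binary population synthesis of this kind can be caricatured as a Monte Carlo draw over a synthetic stellar population followed by simple selection cuts. The sketch below is purely illustrative: the mass distribution, separation range and thresholds are placeholder assumptions, not the values used in the Warwick/Auckland models:

```python
import random

random.seed(42)

# Toy sketch of binary population synthesis: draw a synthetic population,
# apply simple (hypothetical) cuts, and count the rare GRB-like outcomes.
N = 100_000
MIN_MASS = 10.0        # solar masses needed for core collapse (illustrative)
MAX_SEPARATION = 0.5   # AU; close enough for tidal spin-up (illustrative)
BINARY_FRACTION = 0.5  # "more than half of all stars" per the article

grb_candidates = 0
for _ in range(N):
    mass = random.lognormvariate(0.5, 1.0)   # crude stellar-mass draw
    if mass < MIN_MASS:
        continue                             # too light to collapse
    if random.random() > BINARY_FRACTION:
        continue                             # single star: spins down, no jet
    separation = random.uniform(0.05, 5.0)   # orbital separation in AU
    if separation <= MAX_SEPARATION:
        grb_candidates += 1                  # tidally spun up: GRB candidate

print(f"GRB-like systems: {grb_candidates} of {N} ({grb_candidates / N:.3%})")
```

Even with these crude cuts, only a fraction of a percent of systems survive, which illustrates why a simulated population of thousands of systems is needed to isolate the rare GRB progenitors.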

Dr Elizabeth Stanway, from the University of Warwick Department of Physics, said: "Scientists haven't modelled binary evolution in detail in the past because it's a very complex calculation to do. This work has considered a physical mechanism within those models that we haven't examined before, which suggests that binaries can produce enough GRBs using this method to explain the number that we are observing.

"There has also been a big dilemma over the metallicity of stars that produce gamma-ray bursts. As astronomers, we measure the composition of stars and the dominant pathway for gamma-ray bursts requires very few iron atoms or other heavy elements in the stellar atmosphere. There's been a puzzle over why we see a variety of compositions in the stars producing gamma-ray bursts, and this model offers an explanation."

Ashley added: "This model allows us to predict what these systems should look like observationally in terms of their temperature and luminosity, and what the properties of the companion are likely to be. We are now interested in applying this analysis to explore different astrophysical transients, such as fast radio bursts, and can potentially model rarer events such as black holes spiralling into stars."

Credit: 
University of Warwick

High temperatures due to global warming will be dramatic even for tardigrades

image: A research group from the Department of Biology, University of Copenhagen has just shown that tardigrades, animals best known in their desiccated state for their extraordinary tolerance of extreme environments, are very vulnerable to long-term high-temperature exposures.

Image: 
Ricardo Neves

Global warming, a major aspect of climate change, is already causing a wide range of negative impacts on many habitats of our planet. It is thus of the utmost importance to understand how rising temperatures may affect animal health and welfare. A research group from the Department of Biology, University of Copenhagen has just shown that tardigrades, animals best known in their desiccated state for their extraordinary tolerance of extreme environments, are very vulnerable to long-term high-temperature exposures.

In a study published recently in Scientific Reports (an open access journal published by Nature Publishing Group), Ricardo Neves, Nadja Møbjerg and colleagues at the Department of Biology, University of Copenhagen present results on the tolerance of a tardigrade species to high temperatures.

Tardigrades, commonly known as water bears or moss piglets, are microscopic invertebrates distributed worldwide in marine, freshwater and terrestrial microhabitats.

Ricardo Neves, Nadja Møbjerg and colleagues investigated the tolerance to high temperatures of Ramazzottius varieornatus, a tardigrade frequently found in transient freshwater habitats.

- "The specimens used in this study were obtained from roof gutters of a house located in Nivå, Denmark. We evaluated the effect of exposures to high temperature in active and desiccated tardigrades, and we also investigated the effect of a brief acclimation period on active animals", explains postdoc Ricardo Neves.

Rather surprisingly, the researchers estimated that for non-acclimated active tardigrades the median lethal temperature is 37.1°C, though a short acclimation period leads to a small but significant increase of the median lethal temperature to 37.6°C. Interestingly, this temperature is not far from the current record maximum temperature measured in Denmark, i.e. 36.4°C. As for the desiccated specimens, the authors observed that the estimated 50% mortality temperature is 82.7°C following 1-hour exposures, though a significant decrease to 63.1°C following 24-hour exposures was registered.

The research group used logistic models to estimate the median lethal temperature (at which 50% mortality is achieved) both for active and desiccated tardigrades.
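In outline, such a logistic fit amounts to assuming the log-odds of mortality are linear in temperature and solving for the temperature at which mortality reaches 50%. A minimal two-point sketch (the temperatures and mortality fractions below are hypothetical, not the study's data):

```python
import math

def logit(p):
    """Log-odds of a mortality fraction p."""
    return math.log(p / (1 - p))

def median_lethal_temperature(t1, p1, t2, p2):
    """Temperature at 50% mortality, assuming mortality follows a
    logistic curve in temperature (so logit is linear in T)."""
    slope = (logit(p2) - logit(p1)) / (t2 - t1)
    # Solve logit(p1) + slope * (T - t1) = logit(0.5) = 0 for T.
    return t1 - logit(p1) / slope

# Hypothetical observations: 20% mortality at 35 C, 80% mortality at 39 C
print(median_lethal_temperature(35, 0.2, 39, 0.8))  # 37.0
```

A real analysis would fit the logistic curve to many dose-response points by maximum likelihood rather than two, but the estimate read off at 50% mortality is the same quantity.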

Approximately 1300 tardigrade species have been described so far. The body of these minute animals is barrel-shaped (or dorsoventrally compressed) and divided into a head and a trunk with four pairs of legs. Their body length varies between 50 micrometers and 1.2 millimeters. Apart from their impressive ability to tolerate extreme environments, tardigrades are also very interesting because of their close evolutionary relationship with arthropods (e.g., insects, crustaceans, spiders).

As aquatic animals, tardigrades need to be surrounded by a film of water to be in their active state (i.e., feeding and reproducing). However, these critters are able to endure periods of desiccation (anhydrobiosis) by entering cryptobiosis, i.e., a reversible ametabolic state especially common among limno-terrestrial species. Succinctly, tardigrades enter the so-called "tun" state by contracting their anterior-posterior body axis, retracting their legs and rearranging their internal organs. This provides them with the capacity to tolerate severe environmental conditions, including oxygen depletion (anoxybiosis), high toxicant concentrations (chemobiosis), high solute concentrations (osmobiosis) and extremely low temperatures (cryobiosis).

The extraordinary tolerance of tardigrades to extreme environments also includes high-temperature endurance. Some tardigrade species have been reported to tolerate temperatures as high as 151°C, but only for an exposure time of 30 minutes. Other studies on the thermotolerance of desiccated (anhydrobiotic) tardigrades revealed that exposures above 80°C for 1 hour resulted in high mortality, with almost all specimens dying at temperatures above 103°C. It remained unknown, however, how anhydrobiotic tardigrades handle exposures to high temperatures for longer periods, i.e., exceeding 1 hour.

- "From this study, we can conclude that active tardigrades are vulnerable to high temperatures, though it seems that these critters would be able to acclimatize to increasing temperatures in their natural habitat. Desiccated tardigrades are much more resilient and can endure temperatures much higher than those endured by active tardigrades. However, exposure-time is clearly a limiting factor that constrains their tolerance to high temperatures.", says Ricardo Neves.

Indeed, although tardigrades are able to tolerate a diverse set of severe environmental conditions, their endurance to high temperatures is noticeably limited and this might actually be the Achilles heel of these otherwise super-resistant animals.

Credit: 
University of Copenhagen - Faculty of Science

First robust cell culture model for the hepatitis E virus

Even though hepatitis E causes over three million infections and about 70,000 deaths each year, the virus has been little studied as yet. This may be about to change, because a research team from Bochum and Hanover has developed a robust and improved cell model of the pathogen. It produces about 100 times more infectious virus particles than previous models. "As a result, we are finally able to study the virus in depth," says Professor Eike Steinmann, Head of the Department for Molecular & Medical Virology at Ruhr-Universität Bochum (RUB). The researchers published their results in the journal PNAS on 2 January 2020.

Mutation leads to increased proliferation

The lack of a robust cell culture model is one of the reasons why the hepatitis E virus (HEV) has been little investigated to date. "The number of infectious virus particles produced in previous models was simply too small to generate reproducible results," explains Dr. Daniel Todt, author from Bochum.

In previous studies, the research team analysed virus populations resulting from genetic mutations of the virus in patients and identified a specific genetic change that leads to a significantly higher proliferation of the pathogen. The scientists inserted this mutation into the previously used cell lines and were thus able to increase the production of new virus particles by a factor of five to ten.

In their current article, they optimised the cell culture conditions by adding special culture media and using different liver cell lines. These measures resulted in approximately 100 times more infectious virus particles than previously published.

Extensive tests show that the model works

In order to verify whether the new cell culture model can be used to study the virus, the researchers carried out several experiments. For example, they tested whether enveloped and naked viruses are produced in the same way. "Both variants of the virus occur in HEV patients," explains study author Martina Friesland from the Experimental Virology at Twincore Centre for Experimental and Clinical Infection Research in Hanover. "However, they are responsible for different routes of infection. While the enveloped virus is transmitted by blood-blood contact, such as transfusions, the naked virus is excreted via the stool and causes infection for example through contaminated drinking water." Both variants can now be studied with the new cell culture model.

In a previous study, the authors had shown that the mutation leads to increased proliferation in all hepatitis E viruses, and that this is also the case in various liver cell lines used in the research. In the current study, they optimised the model once again. "To this end, we have used insights gained in the clinic to improve a preclinical in-vitro model," elaborates Daniel Todt. The effect of increased proliferation is also evident in healthy human liver cells, as well as in the animal model. Here, virus particles could be detected in the blood and faeces of rodents for more than a month. "In previous models, detection was only ever possible in faeces, because the number of virus particles produced was too low," points out Daniel Todt. "Now, we can produce infectious viruses in almost unlimited quantities for research purposes and do not have to resort to virus isolates from patients."

Full grasp of details

Since this was the first time that they were able to reproducibly infect cells isolated from healthy liver tissue with cell culture-derived HEV, the researchers performed deep sequencing: they analysed the entire genetic information of the virus at different points in time during infection, both with and without the influence of drugs. Moreover, they studied how affected liver cells altered the expression of different proteins in response to the infection, again with and without drugs. "We wanted to know how the cell reacts to the infection," says Daniel Todt. "Our aim was to gain a full understanding of the details," points out Eike Steinmann. "This is the only way to identify genes that are particularly important for the course of the infection and to consider them as possible targets for therapeutic approaches in future." The researchers are making the data set and the optimised protocol available to the public, in order to enable the entire scientific community to conduct follow-up research using the model and the previous findings.

Hepatitis E

The hepatitis E virus (HEV) is the main cause of acute viral hepatitis. After the first documented epidemic outbreak in 1955-1956, more than 50 years passed before researchers took up in-depth research into the disease. In patients with an intact immune system, acute infections usually heal on their own. However, HEV can become chronic in patients with reduced or suppressed immune systems, such as organ transplant recipients or HIV-infected patients. HEV is also particularly dangerous for pregnant women.

Credit: 
Ruhr-University Bochum

Cell growth: Intricate network of potential new regulatory mechanisms has been decoded

image: Structure and interactions of the EGF receptors. The research work is decoding a new network of factors that can regulate interactions with the juxtamembrane segment.

Image: 
HHU / Manuel Etzkorn

In the cover article of the Cell Press journal Structure, the authors - among them Dr. Manuel Etzkorn (HHU/FZJ) and Prof. Dr. Michael Famulok (Bonn) - now describe how the interface functions and what substances can interact with it.

Cell growth is controlled, among other things, by proteins in the cell membrane. In this regard, EGF receptors ('EGF' stands for Epidermal Growth Factor) form a central interface between the cell and its environment. This is why disruption to this system is a frequent cause of cancers, which arise from incorrectly controlled cell growth.

Many drugs have a direct impact on EGF receptors. These drugs work by focusing on two key areas: The first is the sensory domain, which reaches out of the cell and interacts directly with messenger substances that bind to the cell externally. The second is the kinase domain, which is located inside the cell and transmits the signal. In certain cancers, the cells have already developed a resistance to active ingredients that target these two domains.

The EGF receptor additionally comprises one other domain: the juxtamembrane (JM) segment between the external sensory domain and the kinase domain. We know that molecules also interact with this segment and can thereby influence the transmission of signals. But so far, very few interaction partners are known, and it is unclear exactly how these interactions take place.

Researchers from HHU and FZJ as well as the University of Bonn have now identified a network of interaction partners for the JM segment. They also obtained high-resolution insights into the molecular architecture underlying the interaction. This means that the receptor's third domain can now also become more significant for the development of new active ingredients. In particular, it offers a new therapeutic approach for cancers that have become resistant to current active ingredients.

"Our research findings are initially of fundamental nature; they show new possibilities for influencing the EGF receptor system under defined laboratory conditions. This means that we are opening the door to developing new drugs, but there is still a very long way to go to a new therapy", says Dr. Manuel Etzkorn from the Biomolecular NMR Center, which is jointly run by the Institute of Physical Biology at HHU and the Institute of Complex Systems at FZJ. In this newly published study, his team focused on shedding light on the aspects of biological structure. The colleagues in Bonn under the leadership of Prof. Dr. Michael Famulok from the LIMES Institute and the Center of Advanced European Study and Research initiated the project and carried out the biochemical and molecular biological characterisation of the systems studied.

Credit: 
Heinrich-Heine University Duesseldorf

Directed evolution of endogenous genes opens door to rapid agronomic trait improvement

image: (a) STEME-mediated C >T and A >G base-editing strategy. (b) Distribution of edited DNA sequencing reads in rice protoplasts for STEME-1. (c) Procedure for mutating the OsACC CT domain via STEME using groups of individual sgRNAs. (d) Effects of STEME-induced mutations on herbicide resistance

Image: 
IGDB

A research team led by Profs. GAO Caixia and LI Jiayang from the Institute of Genetics and Developmental Biology of the Chinese Academy of Sciences has engineered five saturated targeted endogenous mutagenesis editors (STEMEs) and generated de novo mutations to facilitate the directed evolution of plant genes. Their study was published in Nature Biotechnology on Jan. 13.

Heredity and variation are the basis of organismic evolution. Random mutagenesis by physical or chemical methods has long been applied to improve traits in plants, but it is labor-intensive and time-consuming.

In higher organisms, especially in plants, a target gene is usually transferred into a bacterial or yeast cell to generate the required diversity for selection, but once a target gene is no longer in situ, the functional consequences of such a change may not be the same as in the native context. Moreover, most important agronomic traits cannot be selected in bacteria or yeast.

"To establish powerful tools for directly inducing saturated targeted mutations and selection in plants will accelerate the development of agronomic traits and important functional genes," said Prof. GAO Caixia.

The researchers fused cytidine deaminase with adenosine deaminase to obtain four STEMEs. All four STEMEs efficiently produced simultaneous C>T and A>G conversions using a single sgRNA.

They also produced the fifth dual cytosine and adenine base editor - STEME-NG - to expand the targeting scope. With only 20 sgRNAs in rice protoplasts, STEME-NG can produce near-saturated mutagenesis for a 56-amino-acid portion of the rice acetyl-coenzyme A carboxylase gene (OsACC).

In a proof-of-concept experiment, the researchers used STEMEs to direct the evolution of the OsACC gene in rice plants. They sprayed the regenerated rice seedlings with haloxyfop as the selection pressure. The scientists then identified three novel (P1927F, W2125C, and S1866F) and one known (W2125C) amino acid substitutions for herbicide resistance. These mutations were found to affect the haloxyfop-binding pocket directly or indirectly, based on the homology model of the CT domain of yeast ACC.

The development of STEME paves the way for directed evolution of endogenous plant genes in situ, which is important for breeding via molecular design.

Moreover, this STEME process might also be applicable beyond plants. For example, it may be useful for screening drug resistance mutations, altering cis elements on noncoding regions and correcting pathogenic SNVs in cell lines, yeast or animals.

Credit: 
Chinese Academy of Sciences Headquarters

Risk of lead exposure linked to decreased brain volume in adolescents

image: The cortex, visible here as folds, forms the outer layer of the brain and is important for information processing. The study led by Dr. Elizabeth Sowell of Children's Hospital Los Angeles shows that the cortex is adversely affected by high risk of lead exposure in children from lower income families. Image courtesy of Eric Kan of CHLA.

Image: 
Eric Kan of Children's Hospital Los Angeles

Though leaded gas and lead-based paint were banned decades ago, the risk of lead exposure is far from gone. A new study led by Elizabeth Sowell, PhD, shows that living in neighborhoods with high risk of lead exposure is associated with differences in brain structure and cognitive performance in some children. Her findings, published in Nature Medicine, also show a deeper trend: children in lower income families may be at increased risk.

Dr. Sowell and her team at The Saban Research Institute of Children's Hospital Los Angeles hypothesized that children in lower income families could be particularly vulnerable to the effects of living in high lead-risk environments. Their previous findings show that the socioeconomic status of families affects brain development. Here, they examined the association of lead exposure risk with cognitive scores and brain structure in more than 9,500 children.

Dr. Sowell's laboratory is part of the Adolescent Brain Cognitive Development (ABCD) Study, which has enrolled nearly 12,000 children from 21 sites across the United States. ABCD follows participants from the age of 9-10 into adulthood, collecting health and brain development information. It is the largest and most comprehensive study of its kind. The wealth of data collected through ABCD allows investigators like Dr. Sowell to ask questions about factors that affect adolescent brains.

Their results showed that an increased risk of lead exposure was associated with decreases in cognitive performance and in the surface area and volume of the cortex - the surface of the brain, responsible for initiating conscious thought and action. But this was not true for children from mid- or high-income families.

No amount of lead is safe. Even at very low levels, cognitive deficits have been attributed to lead exposure. More than 72,000 neighborhoods in the United States have been assigned risk estimates for lead exposure, based on the age of homes and poverty rates. Though lead-based paint hasn't been used in new houses since 1978, many older homes still contain lead hazards.

"Professional lead remediation of a home can cost $10,000," says Dr. Sowell, who is also a Professor of Pediatrics at the Keck School of Medicine of USC. "So, family income becomes a factor in lead exposure." Indeed, as her study reveals, the associations between lead risk and decreases in cognitive performance and brain structure are more pronounced in lower income families.

"We were interested in how lead exposure influences brain anatomy and function," says Andrew Marshall, PhD, a postdoctoral research fellow in Dr. Sowell's lab and first author of the publication. "Cognition is affected by low-level lead exposure, but there weren't any published studies about brain structure in these children."

Decreased cognitive scores and structural brain differences were only observed in lower-income families. "What we're seeing here," says Dr. Marshall, "is that there are more pronounced relationships between brain structure and cognition when individuals are exposed to challenges like low income or risk of lead exposure." The ABCD study has not yet examined blood lead levels in these children, but the authors of this publication showed that risk of lead exposure is predictive of blood lead levels. Further studies are needed to determine the precise cause for these differences, such as whether lead exposure itself or other factors associated with living in a high lead-risk environment is contributing to this association, but the study unveils a clear correlation between family income and the effects of living in high lead-risk census tracts.

However, Dr. Sowell emphasizes that income and risk of lead exposure do not define a child. "It's absolutely not a foregone conclusion that these risks make you less intellectually capable," she says. "Many children who live in low-income, high-risk areas will be successful." Her goal is to promote awareness of how environmental toxins affect children. Understanding what our children face is the first step in helping them.

"Even though lead levels are reduced from three decades ago in the environment, it's still a highly significant public health issue," says Dr. Sowell. Despite this, there are kids in high-risk environments that do not show these deficits, indicating that it is possible to mitigate lead effects.

"The take home point is that this can be fixed," she says. "Lead does not have to be in the environment. We can remove it and really help kids get healthier."

Credit: 
Children's Hospital Los Angeles