
Instant death from heart attack more common in people who do not exercise

Sophia Antipolis, 12 February 2021: An active lifestyle is linked with a lower chance of dying immediately from a heart attack, according to a study published today in the European Journal of Preventive Cardiology, a journal of the European Society of Cardiology (ESC).1

Heart disease is the leading cause of death globally, and prevention is a major public health priority. The beneficial impact of physical activity in preventing heart disease and sudden death at the population level is well documented. This study focused on the effect of an active versus sedentary lifestyle on the immediate course of a heart attack, an area about which little is known.

The researchers used data from 10 European observational cohorts including healthy participants with a baseline assessment of physical activity who had a heart attack during follow-up - a total of 28,140 individuals. Participants were categorised according to their weekly level of leisure-time physical activity as sedentary, low, moderate, or high.

The association between activity level and the risk of death due to a heart attack (instantly and within 28 days) was analysed in each cohort separately and then the results were pooled. The analyses were adjusted for age, sex, diabetes, blood pressure, family history of heart disease, smoking, body mass index, blood cholesterol, alcohol consumption, and socioeconomic status.
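
The per-cohort estimates were pooled; the paper's exact pooling method is not described here, but a common approach for combining cohort results is inverse-variance (fixed-effect) pooling of log hazard ratios. The sketch below is a hypothetical illustration with invented numbers, not the study's data:

```python
import math

# Hypothetical sketch of inverse-variance (fixed-effect) pooling, one common
# way to combine per-cohort estimates. The hazard ratios and standard errors
# below are invented for illustration only.
def pool_fixed_effect(log_hrs, std_errs):
    """Pool per-cohort log hazard ratios, weighting by inverse variance."""
    weights = [1.0 / se**2 for se in std_errs]
    pooled = sum(w * b for w, b in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three invented cohorts, each estimating the hazard ratio for instant death:
log_hrs = [math.log(0.70), math.log(0.60), math.log(0.75)]
std_errs = [0.20, 0.25, 0.30]
b, se = pool_fixed_effect(log_hrs, std_errs)
print(f"pooled HR = {math.exp(b):.2f} (SE of log HR {se:.2f})")
```

Cohorts with smaller standard errors (typically larger cohorts) get more weight, which is why the pooled estimate sits closest to the most precise cohort.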

A total of 4,976 (17.7%) participants died within 28 days of their heart attack - of these, 3,101 (62.3%) died instantly. Overall, a higher level of physical activity was associated with a lower risk of instant and 28-day fatal heart attack, seemingly in a dose-response-like manner. Patients who had engaged in moderate and high levels of leisure-time physical activity had a 33% and 45% lower risk of instant death compared to sedentary individuals. At 28 days these numbers were 36% and 28%, respectively. The relationship with low activity did not reach statistical significance.
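
The reported proportions can be verified with a couple of lines of arithmetic:

```python
# Quick arithmetic check of the fatality proportions reported above.
total = 28140          # participants who had a heart attack during follow-up
died_28d = 4976        # died within 28 days of the heart attack
died_instantly = 3101  # of those, died instantly

print(f"{died_28d / total:.1%} died within 28 days")              # 17.7%
print(f"{died_instantly / died_28d:.1%} of deaths were instant")  # 62.3%
```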

Study author Dr. Kim Wadt Hansen of Bispebjerg Hospital, Copenhagen, Denmark said: "Almost 18% of patients with a heart attack died within 28 days, substantiating the severity of this condition. We found an immediate survival benefit of prior physical activity in the setting of a heart attack, a benefit which seemed preserved at 28 days."

He noted: "Based on our analyses, even a low amount of leisure-time physical activity may in fact be beneficial against fatal heart attacks, but statistical uncertainty precludes us from drawing any firm conclusions on that point."

The authors said in the paper: "Our pooled analysis provides strong support for the recommendations on weekly physical activity in healthy adults stated in the 2016 European Guidelines on cardiovascular disease prevention in clinical practice;2 especially as we used cut-off values for physical activity comparable to those used in the guidelines."

The guidelines recommend that healthy adults of all ages perform at least 150 minutes a week of moderate intensity or 75 minutes a week of vigorous intensity aerobic physical activity or an equivalent combination thereof.
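
One common way to express the "equivalent combination" rule is to count each vigorous minute as worth two moderate minutes, so the weekly target becomes moderate + 2 × vigorous ≥ 150. The 2:1 weighting below is that common convention, not a quotation from the guidelines:

```python
# Sketch of the weekly activity target, assuming the common 2:1 convention
# that one vigorous minute counts as two moderate minutes.
def meets_weekly_target(moderate_min, vigorous_min):
    """True if moderate-equivalent minutes reach the weekly 150-minute mark."""
    return moderate_min + 2 * vigorous_min >= 150

print(meets_weekly_target(150, 0))   # True: 150 min moderate
print(meets_weekly_target(0, 75))    # True: 75 min vigorous
print(meets_weekly_target(90, 30))   # True: 90 + 2*30 = 150
print(meets_weekly_target(100, 20))  # False: 100 + 2*20 = 140
```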

Dr. Hansen concluded: "There are many ways to be physically active at little or no cost. Our study provides yet more evidence for the rewards of exercise."

Credit: 
European Society of Cardiology

Here comes the new generation of climate models: the future of rainfall in the Alps

Less intense mean daily precipitation, and more intense and localised extreme events: this is what future climate scenarios indicate for the Eastern Alps, according to the study "Evaluation and Expected Changes of Summer Precipitation at Convection Permitting Scale with COSMO-CLM over Alpine Space", published by the CMCC Foundation in the journal Atmosphere. The research was conducted within the European H2020 project EUCP (European Climate Prediction system) and contributes to the international scientific community's work on climate models that can support decision-makers in properly assessing extreme events and their evolution under climate change, with the ultimate goal of limiting its negative impacts on societies and economies.

Climate change adaptation plans and measures worldwide are based on future scenarios made available to decision-makers by the research community. These scenarios currently provide a good representation of extreme events at the daily scale, but still have limited predictive capability at the sub-daily scale. For some sectors, such as infrastructure, there is insufficient information with which to develop adequate climate change adaptation policies: very intense and rapid rainfall, concentrated in small areas and in a few hours, can have strong impacts on infrastructure, causing water bodies to overflow and flood, undermining systems and revealing the inability of sewerage to handle large flows of water. Some extreme events can last only a few hours and affect very small areas (on the order of a few kilometres). The need to understand such phenomena is even greater in certain geographical contexts, such as the Alpine area, where extreme rainfall events - typical of the summer season - can have serious consequences.

"In recent decades there has been an ongoing debate among climatologists about the added value of very high-resolution climate simulations, which represent the next generation of regional climate models," explains Paola Mercogliano, director of the REMHI (Regional Models and geo-Hydrological Impacts) division at the CMCC Foundation. "These climate simulations, which are run with regional models at very high spatial and temporal resolution, have a high computational cost and require significant investments of research time. Given the high costs, the scientific community has questioned whether this is the right way to go to better support climate change adaptation policies. Our study demonstrates the added value of this direction and confirms that it is worth investing in, especially in areas with complex orography or where uncertainty is still wide, such as the Alps. With these new-generation models, we can not only observe what happens at very high resolution in terms of mean daily precipitation, but also carry out statistical analyses on a sub-daily basis, looking at different hours of the same day. These models will also be able to provide information on the effects of climate change on hourly precipitation: results that would have been unthinkable just two or three years ago."

The study shows a better representation of precipitation frequency and intensity in very high-resolution simulations ('convection permitting') than in lower resolution simulations, especially at sub-daily scale.

"In agreement with existing literature, our preliminary results for the Alpine area in the summer season show a decrease in mean daily precipitation, especially at high altitudes, and localised intensifications of extreme events along the Eastern Alps. It will rain less frequently but more intensely, both on a daily and hourly time scale. Given the increased intensity of these events, it is clear that understanding the distribution of rainfall at hourly scale can bring great added value in our support for decision-makers," explains Marianna Adinolfi, CMCC researcher and lead author of the paper.

Next generation climate models are developed and applied by the CMCC Foundation in several international projects and contexts. Some examples include the study of urban heatwaves and the evolution of rainfall extremes in support of adaptation policies on an urban scale: all contexts that will benefit from having simulations on hourly scales.

Furthermore, to support adaptation policies, the CMCC has created products such as the Climate Scenarios for Italy, which use high-resolution climate models to visualise on maps the expected climate up to the end of the century, and climate services such as Dataclime, which provides customised climate analyses on multiple temporal and spatial scales.

This study was carried out within the Horizon 2020 research project EUCP - European Climate Prediction system, in which the CMCC Foundation participates. The project aims to support the scientific community in the development of high-quality climate data and projections on a European scale to be provided to policy makers, stakeholders and planners to address the challenges and opportunities brought by climate change.

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

Identifying risk factors for elevated anxiety in young adults during COVID-19 pandemic

A new study has identified early risk factors that predicted heightened anxiety in young adults during the coronavirus (COVID-19) pandemic. The findings from the study, supported by the National Institutes of Health and published in the Journal of the American Academy of Child and Adolescent Psychiatry, could help predict who is at greatest risk of developing anxiety during stressful life events in early adulthood and inform prevention and intervention efforts.

The investigators examined data from 291 participants who had been followed from toddlerhood to young adulthood as part of a larger study on temperament and socioemotional development. The researchers found that participants who continued to show a temperament characteristic called behavioral inhibition in childhood were more likely to experience worry dysregulation in adolescence (age 15), which in turn predicted elevated anxiety during the early months of the COVID-19 pandemic when the participants were in young adulthood (around age 18).

"People differ greatly in how they handle stress," said Daniel Pine, M.D., a study author and chief of the National Institute of Mental Health (NIMH) Section on Development and Affective Neuroscience. "This study shows that children's level of fearfulness predicts how much stress they experience later in life when they confront difficult circumstances, such as the pandemic."

Behavioral inhibition is a childhood temperament characterized by high levels of cautious, fearful, and avoidant responses to unfamiliar people, objects, and situations. Previous studies have established that children who display behavioral inhibition are at increased risk of developing anxiety disorders later. However, less research has investigated the specific mechanisms by which a stable pattern of behavioral inhibition in childhood is linked to anxiety in young adulthood.

The authors of this study hypothesized that children who demonstrate a stable pattern of behavioral inhibition may be at greater risk for worry dysregulation in adolescence--that is, difficulties managing worry and displaying inappropriate expressions of worry--and this would put them at greater risk for later heightened anxiety during stressful events like the pandemic.

In the larger study, behavioral inhibition was measured at ages 2 and 3 using observations of children's responses to novel toys and interaction with unfamiliar adults. When the children were 7 years old, they were observed for social wariness during an unstructured free play task with an unfamiliar peer. Worry dysregulation was assessed at age 15 through a self-report survey. For the current study, the participants, at an average age of 18, were assessed for anxiety twice during the early months of the COVID-19 pandemic after stay-at-home orders had been issued (first between April 20 and May 15 and approximately a month later).

At the first assessment, 20% of the participants reported moderate levels of anxiety symptoms considered to be in the clinical range. At the second assessment, 18.3% of participants reported clinical levels of anxiety. As expected, the researchers found that individuals with high behavioral inhibition in toddlerhood who continued to display high levels of social wariness in childhood reported experiencing dysregulated worry in adolescence, and this ultimately predicted increased anxiety in young adulthood during a critical stage of the pandemic. This developmental pathway was not significant for children who showed behavioral inhibition in toddlerhood but displayed low levels of social wariness later in childhood.

"This study provides further evidence of the continuing impact of early life temperament on the mental health of individuals," said Nathan A. Fox, Ph.D., Distinguished University Professor and director of the Child Development Lab at the University of Maryland, College Park, and an author of the study. "Young children with stable behavioral inhibition are at heightened risk for increased worry and anxiety, and the context of the pandemic only heightened these effects."

The findings suggest that targeting social wariness in childhood and worry dysregulation in adolescence may be a viable strategy for preventing anxiety disorders. In particular, screening for dysregulated worry in adolescence may help identify those at risk of heightened anxiety during stressful life events like the COVID-19 pandemic, so that it can be addressed before it escalates.

Credit: 
NIH/National Institute of Mental Health

Star-shaped brain cells may be linked to stuttering

Image: Photo shows Dr. Gerald Maguire. Credit: UCR School of Medicine.

RIVERSIDE, Calif. -- Astrocytes -- star-shaped cells in the brain that are actively involved in brain function -- may play an important role in stuttering, a study led by a University of California, Riverside, expert on stuttering has found.

"Our study suggests that treatment with the medication risperidone leads to increased activity of the striatum in persons who stutter," said Dr. Gerald A. Maguire, professor and chair of the Department of Psychiatry and Neuroscience at the UCR School of Medicine, who led the study. "The mechanism of risperidone's action in stuttering, in part, appears to involve increased metabolism -- or activity -- of astrocytes in the striatum."

Findings from the study, published today in Frontiers in Neuroscience, arose from a collaboration between Maguire and Shahriar SheikhBahaei, an independent research scholar at the National Institutes of Health's National Institute of Neurological Disorders and Stroke.

The striatum is a key component of the basal ganglia, a group of nuclei best known for facilitating voluntary movement. Located in the forebrain, the striatum supports neuronal activity related to cognition, reward, and coordinated movement.

Stuttering, a childhood-onset fluency disorder that leads to speech impairment, is associated with high levels of the neurotransmitter dopamine. Risperidone works by blocking the brain receptors that dopamine acts on, thus preventing excessive dopamine activity. Available by prescription almost anywhere in the world, risperidone has been in use for nearly 30 years and is generally prescribed for schizophrenia and bipolar disorder.

Maguire and SheikhBahaei have now found evidence that astrocytes in the striatum may be crucially involved in how risperidone is able to reduce stuttering.

"We do not know the exact mechanism for how risperidone activates astrocytes in the striatum," said coauthor SheikhBahaei, an expert on astrocytes, and a person who stutters. "What we know is that it activates astrocytes. The astrocytes then release a signaling molecule that affects neurons in the striatum by blocking their dopamine receptors. In our future work, we would like to find this signaling molecule and better understand the exact role astrocytes play in stuttering, which, in turn, could help us design drugs that target astrocytes."

Maguire and his team conducted a randomized, double-blind, placebo-controlled clinical trial with 10 adult subjects to observe risperidone's effects on brain metabolism. At the start of the study and again after six weeks of taking risperidone (0.5-2.0 mg/day) or a placebo pill, the 10 participants performed a solo reading-aloud task and then each underwent a positron emission tomography, or PET, scan. When the study was unblinded, it turned out that five subjects had received risperidone while the other five got a placebo. Those in the risperidone group showed higher glucose uptake - that is, higher metabolism - in specific regions of the brain on scans taken after active treatment.

"Naturally, and abnormally, glucose uptake is low in stuttering -- a feature common to many neurodevelopmental conditions," said Maguire, who also is a person who stutters. "But risperidone seems to compensate for the deficit by increasing the metabolism, specifically, in the left striatum. More research is needed to understand this better. Neuroimaging techniques we used to visualize changes in the brains of those who stutter can provide valuable insights into the pathophysiology of the disorder and guide the development of future interventions."

Next, Maguire and SheikhBahaei aim to further understand what causes stuttering, what the different types of stuttering are and what their etiologies may be, and to develop targeted, personalized treatments for those who stutter.

"The general goal of our research collaboration is to combine basic research in my lab with Dr. Maguire's clinical studies," SheikhBahaei said. "My lab is generating new animal models to study stuttering which will help us understand what causes different types of stuttering. Researchers have proposed other components are involved in stuttering's etiology. Our data, which suggests astrocytes in the striatum may be playing an important role in the development of stuttering, helps unify some of the findings the scientific literature has seen recently on astrocytes and could help connect the dots."

Personally speaking

The UCR School of Medicine has signed a research collaboration agreement with the National Institute of Neurological Disorders and Stroke to work together on research related to stuttering.

"I have been active in the stuttering community for decades," Maguire said. "This is a community that needs support, opportunities, and role models. Dr. SheikhBahaei and I encourage people who stutter to be more engaged in the scientific community. We both stutter and that has not stopped us from achieving our professional and personal goals. Young people who stutter and are thinking about careers in science and medicine should not let this speech disorder hold them back."

For SheikhBahaei, working with Maguire is an ideal collaboration to "bring bench to the bedside."

"We are working to reveal circuits in the brain that control the complex behavior of speaking," he said. "These circuits will shed more light on the mechanism involved in stuttering. Speaking may be the most complex human behavior. Consider that more than 100 muscles in the body must act in synchrony for us to speak."

Credit: 
University of California - Riverside

Study suggests sounds influence the developing brain earlier than previously thought

FOR IMMEDIATE RELEASE

Scientists have yet to answer the age-old question of whether or how sound shapes the minds of fetuses in the womb, and expectant mothers often wonder about the benefits of such activities as playing music during pregnancy. Now, in experiments in newborn mice, scientists at Johns Hopkins report that sounds appear to change "wiring" patterns in areas of the brain that process sound earlier than scientists assumed and even before the ear canal opens.

The current experiments involve newborn mice, which have ear canals that open 11 days after birth. In human fetuses, the ear canal opens prenatally, at about 20 weeks gestation.

The findings, published online Feb. 12 in Science Advances, may eventually help scientists identify ways to detect and intervene in abnormal wiring in the brain that may cause hearing or other sensory problems.

"As scientists, we are looking for answers to basic questions about how we become who we are," says Patrick Kanold, Ph.D., professor of biomedical engineering at The Johns Hopkins University and School of Medicine. "Specifically, I am looking at how our sensory environment shapes us and how early in fetal development this starts happening."

Kanold started his career in electrical engineering, working with microprocessors, a natural conduit for his shift to science and studying the circuitry of the brain.

His research focus is the outermost part of the brain, the cortex, which is responsible for many functions, including sensory perception. Below the cortex lies the brain's white matter, which in adults contains connections between neurons.

In development, the white matter also contains so-called subplate neurons, some of the first to develop in the brain - at about 12 weeks of gestation in humans and the second embryonic week in mice. Anatomist Mark Molliver of Johns Hopkins is credited with describing some of the first connections between neurons formed in white matter, and he coined the term subplate neurons in 1973.

These primordial subplate neurons eventually die off during development in mammals, including mice. In humans, this happens shortly before birth through the first few months of life. But before they die off, they make connections between a key gateway in the brain for all sensory information, the thalamus, and the middle layers of the cortex.

"The thalamus is the intermediary of information from the eyes, ears and skin into the cortex," says Kanold. "When things go wrong in the thalamus or its connections with the cortex, neurodevelopmental problems occur." In adults, the neurons in the thalamus stretch out and project long, armlike structures called axons to the middle layers of the cortex, but in fetal development, subplate neurons sit between the thalamus and cortex, acting as a bridge. At the ends of the axons are synapses, the junctions through which neurons communicate.

Working in ferrets and mice, Kanold previously mapped the circuitry of subplate neurons. He also found that subplate neurons receive electrical signals related to sound before any other cortical neurons do.

The current research, which Kanold began at his previous position at the University of Maryland, addresses two questions, he says: When sound signals get to the subplate neurons, does anything happen, and can a change in sound signals change the brain circuits at these young ages?

First, the scientists used genetically engineered mice that lack a protein on hair cells in the inner ear. The protein is integral for transforming sound into an electric pulse that goes to the brain; from there it is translated into our perception of sound. Without the protein, the brain does not get the signal.

In the deaf 1-week-old mice, the researchers saw about 25%-30% more connections between subplate neurons and other cortical neurons than in 1-week-old mice with normal hearing raised in a normal sound environment. This suggests that sounds can change brain circuits at a very young age, says Kanold.

In addition, say the researchers, these changes in neural connections were happening about a week earlier than typically seen. Scientists had previously assumed that sensory experience can only alter cortical circuits after neurons in the thalamus reach out to and activate the middle layers of the cortex, which in mice is around the time when their ear canals open (at around 11 days).

"When neurons are deprived of input, such as sound, the neurons reach out to find other neurons, possibly to compensate for the lack of sound," says Kanold. "This is happening a week earlier than we thought it would, and tells us that the lack of sound likely reorganizes connections in the immature cortex."

In the same way that lack of sound influences brain connections, the scientists thought it was possible that extra sounds could influence early neuron connections in normal hearing mice, as well.

To test this, the scientists placed normal-hearing, 2-day-old mouse pups either in a quiet enclosure with a speaker that emitted beeps or in a quiet enclosure without a speaker. They found that the pups in the enclosure without the beeping sound had stronger connections between subplate and cortical neurons than the pups in the enclosure with it. However, the difference between the mice housed in the beeping and quiet enclosures was not as large as the difference between the deaf mice and those raised in a normal sound environment.

The pups raised with the beeping sound also showed more diversity among the types of neural circuits that developed between subplate and cortical neurons, compared with normal-hearing pups raised in the quiet enclosure. The normal-hearing mice raised in the quiet enclosure, in turn, had subplate and cortex connectivity similar to that of the genetically engineered deaf mice.

"In these mice we see that the difference in early sound experience leaves a trace in the brain, and this exposure to sound may be important for neurodevelopment," says Kanold.

The research team is planning additional studies to determine how early exposure to sound impacts the brain later in development. Ultimately, they hope to understand how sound exposure in the womb may be important in human development and how to account for these circuit changes when fitting cochlear implants in children born deaf. They also plan to study brain signatures of premature infants and develop biomarkers for problems involving miswiring of subplate neurons.

Credit: 
Johns Hopkins Medicine

Pigs show potential for 'remarkable' level of behavioral, mental flexibility in new study

Image: Yorkshire pig operating the joystick. Credit: Eston Martz / Pennsylvania State University.

Pigs will probably never be able to fly, but new research is revealing that some species within the genus Sus may possess a remarkable level of behavioral and mental flexibility. A study published in Frontiers in Psychology tested the ability of four pigs to play a simple joystick-enabled video game. Each animal demonstrated some conceptual understanding despite limited dexterity on tasks normally given to non-human primates to analyze intelligence.

The study involved two Yorkshire pigs named Hamlet and Omelette, and two Panepinto micro pigs, Ebony and Ivory. All four animals were trained to approach and manipulate a joystick with their snouts in front of a computer monitor during the first phase of the experiment. They were then taught how to play a video game in which the goal was to move a cursor using the joystick toward up to four target walls on the screen.

Each pig performed the tasks well above chance, indicating that the animals understood that the movement of the joystick was connected to the cursor on the computer screen. That these far-sighted animals with no opposable thumbs could succeed at the task is "remarkable," according to the researchers.
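
"Above chance" can be made precise with a one-sided binomial test. The sketch below is a hedged illustration, not the study's analysis: it assumes a session of independent trials in which a correct response means hitting one of, say, four equally likely target walls (so chance is 25%), and the trial counts are invented.

```python
import math

# Hypothetical sketch of an above-chance test. Assumes independent trials with
# a known chance rate (25% if 1 of 4 target walls is correct); the counts
# below are invented for illustration, not the study's data.
def p_at_least(successes, trials, p_chance):
    """One-sided binomial tail: P(X >= successes) under chance responding."""
    return sum(
        math.comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(successes, trials + 1)
    )

p = p_at_least(30, 50, 0.25)  # 30 correct out of 50 trials at 25% chance
print(f"P(>=30 correct by chance) = {p:.2e}")
```

A tiny tail probability like this is what would justify calling the performance well above chance.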

"It is no small feat for an animal to grasp the concept that the behavior they are performing is having an effect elsewhere. That pigs can do this to any degree should give us pause as to what else they are capable of learning and how such learning may impact them," said lead author Dr. Candace Croney, a professor at Purdue University and director of the Purdue Center for Animal Welfare Science. Sarah T. Boysen, known for her work on chimpanzee cognition, co-authored the study.

Scientists already know that pigs are capable of various types of learning, from the basic obedience commands taught to dogs, such as "come" and "sit", to more complex tasks that require them to change their behavior when the rules of the game change. One study has even shown that pigs can use mirrors to find hidden food in an enclosure, Croney noted.

In the current study, the team used food to teach and reinforce behaviors, but also found that social contact could strongly influence the pigs' persistence. For instance, when the machine dispensing treats failed to work, the pigs continued to make correct responses using only verbal and tactile cues. During the most challenging tasks, only verbal encouragement seemed to help the animals.

"This sort of study is important because, as with any sentient beings, how we interact with pigs and what we do to them impacts and matters to them," Croney said. "We therefore have an ethical obligation to understand how pigs acquire information, and what they are capable of learning and remembering, because it ultimately has implications for how they perceive their interactions with us and their environments."

While the pigs could not match the skill level of non-human primates on the video task and failed to meet the criteria used for primates to demonstrate full mastery of the concept, the researchers said the shortcomings could partially be explained by the nature of the experiment, which was designed for dexterous, visually-oriented mammals.

The study ended before the researchers could investigate a more ambitious goal: whether such a computer interface using symbols could be employed to communicate with the pigs more directly, as has been done with non-human primates.

"Informing management practices and improving pig welfare was and still is a major goal, but really, that is secondary to better appreciating the uniqueness of pigs outside of any benefit we can derive from them," Croney said.

Credit: 
Frontiers

Biosensors monitor plant well-being in real time

Image: An implantable organic electrochemical transistor sensor. Credit: Thor Balkhed.

Researchers at Linköping University, Sweden, have developed biosensors that make it possible to monitor sugar levels in real time deep in the plant tissues - something that has previously been impossible. The information from the sensors may help agriculture to adapt production as the world faces climate change. The results have been published in the scientific journal iScience.

Plants are the primary source of nutrition for most of the Earth's population, and they are also the foundation of the entire ecosystem on which we all depend. The global population is rising, while rapid climate change is altering the conditions for crop cultivation and agriculture.

"We will have to secure our food supply in the coming decades. And we must do this using the same, or even fewer, resources as today. This is why it is important to understand how plants react to changes in the environment and how they adapt", says Eleni Stavrinidou, associate professor in the Laboratory of Organic Electronics, Department of Science and Technology at Linköping University.

The research group at Linköping University led by Eleni Stavrinidou, together with Totte Niittylä and his group from Umeå Plant Science Centre, has developed sugar sensors based on organic electrochemical transistors that can be implanted in plants. The biosensors can monitor the sugar levels of trees in real time, continuously for up to two days. The information from the sensors can be related to growth and other biological processes. Plants use sugars for energy, and sugars are also important signal substances that influence the development of the plant and its response to changes in the surrounding environment.

While biosensors for monitoring sugar levels in humans are widely available, in particular the glucometer used by people who have diabetes, this technology has not previously been applied to plants.

"The sensors now are used for basic plant science research but in the future they can be used in agriculture to optimise the conditions for growth or to monitor the quality of the product, for example. In the long term, the sensors can also be used to guide the production of new types of plant that can grow in non-optimal conditions", says Eleni Stavrinidou.

The mechanisms by which plant metabolism is regulated and how changes in sugar levels affect growth are still relatively unknown. Previous experiments have typically used methods that rely on detaching parts of the plant. However, the sensor developed by the research group gives information without damaging the plant and may provide further pieces of the puzzle of how plant metabolism works.

"We found a variation in sugar levels in the trees that had not been previously observed. Future studies will focus on understanding how plant sugar levels change when plants are under stress", says Eleni Stavrinidou.

Credit: 
Linköping University

New insights into past ecosystems are now available based on pollen and plant traits

EUGENE, Ore. -- Feb. 11, 2021 -- Researchers have mined and combined information from two databases to link pollen records with key plant traits, building confidence in scientists' ability to reconstruct past ecosystem services.

The approach provides a new tool that can be used to understand how plants have delivered benefits useful to humans over the past 21,000 years, and how these services responded to human and climate disturbances, including droughts and fires, said Thomas Brussel, a postdoctoral researcher in the University of Oregon's Department of Geography.

The approach is detailed in a paper published online Jan. 13 in the journal Frontiers in Ecology and Evolution.

Ultimately, Brussel said, the combined information could enhance conservation decisions, helping regional managers ensure that ecosystems continue to provide goods and services, such as plants that protect hillsides from erosion or help purify water, based on how those ecosystems responded to climate changes in the past.

For example, he said, an ecosystem's history may indicate that plants have previously withstood similar disruptions and could continue to thrive through preservation techniques.

Pollen cores have long helped scientists study environmental and ecological changes in a given location that have occurred because of climate changes and wildfires over recent geologic time. Linking pollen records to plant traits provides a picture of how well ecosystems have functioned under different scenarios, Brussel said.

"The biggest finding in this study is that researchers can now be confident that transforming pollen into the processes that ecosystems undergo works," he said. "With this information, we can now explore new questions that were previously unanswerable and provide positive guidance on how we can conserve and manage landscapes and biodiversity."

Brussel began pursuing the approach as a doctoral student at the University of Utah in the emerging field of functional paleoecology. Initial reception to the approach, when presented at conferences, drew interest but also calls for proof that the idea is possible, Brussel said. The paper, co-authored with his Utah mentor Simon Christopher Brewer, provides a proof-of-concept that his approach works.

For the study, Brussel and Brewer merged publicly available records for surface pollen samples found in the Neotoma Paleoecology Database and plant traits, specifically leaf area, plant height and seed mass, from the Botanical Information and Ecology Network.

They then restricted their results to only plants native to ecosystems from Mexico to Canada by combing through the U.S. Department of Agriculture's PLANTS Database and a compilation of all native plants in Mexico.

The resulting data for North America covers some 1,300 individual sites and includes 9.5 million plant height measurements for 2,146 species, 13,103 leaf area measurements from 1,016 species, and 16,621 seed mass records from 3,580 species.
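As a hypothetical miniature of this kind of database linkage (the taxon names, trait values and helper names below are invented for illustration and are not drawn from the Neotoma or BIEN databases), joining pollen taxa to trait records by taxon name and summarising one trait per taxon might look like:

```python
# Toy illustration of linking pollen taxa to trait measurements by name.
# All data here are invented; real analyses draw on Neotoma and BIEN.
pollen_counts = {"Pinus": 120, "Quercus": 80, "Poaceae": 40}
trait_records = [
    ("Pinus", "plant_height_m", 30.0),
    ("Pinus", "plant_height_m", 42.0),
    ("Quercus", "plant_height_m", 25.0),
]

# Mean trait value per taxon, kept only for taxa present in the pollen data.
means = {}
for taxon, trait, value in trait_records:
    if taxon in pollen_counts and trait == "plant_height_m":
        means.setdefault(taxon, []).append(value)
linked = {taxon: sum(vals) / len(vals) for taxon, vals in means.items()}
print(linked)  # {'Pinus': 36.0, 'Quercus': 25.0}
```

Taxa with pollen but no trait records (here, Poaceae) simply drop out of the linked table, which is one reason coverage statistics like those above matter.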

The information, Brussel said, provides extensive details on the fitness of ecosystems that should help researchers study the mechanisms of changes in carbon or water cycling related to climate change.

"Our work is extremely relevant to modern climate change," he said. "The past houses all these natural experiments. The data are there. We can use that data as parallels for what may happen in the future. Using trait-based information through this approach, we can gain new insight, with confidence, that we haven't been able to get at before now."

At the UO, Brussel is working with Melissa Lucash, a research assistant professor who studies large, forested landscapes with a focus on the impacts of climate change and wildfires. Brussel is part of Lucash's research on potential climate changes being faced by Siberia's boreal forests and tundra.

He also is applying his approach to potential conservation and management strategies for some of the world's biodiversity hotspots, which are seeing a decline in plant species and wildlife as a result of global change.

"Using the newly validated approach, my idea is to assess the severity of the biodiversity degradation that has been occurring in these regions over recent millennia," Brussel said. "My end goal is to create a list of regions that can be prioritized for hotspot conservation, based on how severe an ecosystem's services have declined over time."

Credit: 
University of Oregon

Climate research: rapid formation of iodine particles over the Arctic

FRANKFURT. More than two thirds of the Earth is covered by clouds. Depending on whether they float high or low, how large their water and ice content is, how thick they are or over which region of the Earth they form, it gets warmer or cooler underneath them. Due to human influence, there are most likely more cooling effects from clouds today than in pre-industrial times, but how clouds contribute to climate change is not yet well understood. Researchers currently believe that low clouds over the Arctic and Antarctic, for example, contribute to the warming of these regions by blocking the direct radiation of long-wave heat from the Earth's surface.

All clouds are formed by aerosols, suspended particles in the air to which water vapour attaches. Such suspended particles naturally consist of dust, salt crystals or molecules released by plants. Human activities release, above all, soot particles into the atmosphere, but also sulphuric acid and ammonia molecules, which can cluster and form new aerosol particles in the atmosphere. Model calculations show that more than half of all cloud droplets are formed from aerosol particles that formed in the atmosphere. For cloud formation, what the aerosol particles are made of is not decisive; what matters most is their size: only particles with a diameter of about 70 nanometres or more become condensation nuclei for cloud droplets.

In the atmosphere over the sea, aerosols released by humans play a much smaller role in the formation of low clouds than over land. Besides salt crystals originating from sea spray, aerosol particles over the sea mainly originate from certain sulphur compounds (dimethyl sulphide) that are released by phytoplankton and react to form sulphuric acid, for example. At least, that is what previous research concluded.

Scientists from the CLOUD consortium have now studied the formation of aerosol particles from iodine-containing vapours. The slightly pungent smell of iodine is part of the aroma of the sea air you breathe when walking along the North Sea. Every litre of seawater contains 0.05 milligrams of iodine, and when it enters the atmosphere, iodic acid or iodous acid is formed through sunlight and ozone. The scientists simulated atmospheric conditions in mid-latitudes and arctic regions in the CLOUD experimental chamber at the CERN particle accelerator centre in Geneva, including cosmic rays simulated by an elementary particle beam.

Their findings: aerosol particle formation by iodic acid takes place very rapidly, much more rapidly than the particle formation of sulphuric acid and ammonia under comparable conditions. Ions produced by cosmic rays further promote particle formation. The transformation of molecular iodine into the iodine-containing acids requires no UV radiation and only a little daylight. In this way, very large aerosol quantities can be formed very quickly.

Atmospheric researcher Prof. Joachim Curtius from Goethe University explains: "Iodine aerosols can form faster than almost all other aerosol types we know. If ions produced by cosmic rays are added, each collision leads to the growth of the molecular clusters." Curtius added that this is particularly important because global iodine emissions on Earth have already tripled over the past 70 years. "A vicious circle may have been set in motion here: The pack ice thaws, which increases the water surface area and more iodine enters the atmosphere. This leads to more aerosol particles, which form clouds that further warm the poles. The mechanism we found can now become part of climate models, because iodine may play a dominant role in aerosol formation, especially in the polar regions, and this could improve climate model predictions for these regions."

Credit: 
Goethe University Frankfurt

US cities segregated not just by where people live, but where they travel daily

PROVIDENCE, R.I. [Brown University] -- One thing that decades of social science research has made abundantly clear? Americans in urban areas live in neighborhoods deeply segregated by race -- and they always have.

Less clear, however, is whether city-dwellers stay segregated when they leave home and go about their daily routines. That's a question to which Jennifer Candipan, an assistant professor of sociology at Brown University, was determined to find an answer.

By analyzing geotagged locations for more than 133 million tweets by 375,000 Twitter users in the 50 largest U.S. cities, Candipan and a team of researchers found that in most urban areas, people of different races don't just live in different neighborhoods -- they also eat, drink, shop, socialize and travel in different neighborhoods.

"Most of us can sense that segregation is about more than where people live -- it's also about how they move," Candipan said. "With the recent availability of data from global positioning systems, satellite imaging and social media, we've been able to start quantifying that segregated movement in cities. In combination with existing measures, we've been able to provide a fuller picture of racial inequality and segregation in America's cities."

Candipan, who is affiliated with Brown's Population Studies and Training Center, teamed with Harvard University social scientists Nolan Edward Phillips, Robert J. Sampson and Mario Small on the study, and the results of their analysis were published on Wednesday, Feb. 10, in Urban Studies.

Candipan said that hundreds of studies, including two of her own published in Urban Affairs Review and Urban Studies, have demonstrated that for generations, racist hiring practices, housing policies and social settings have kept people of color -- particularly Black and Hispanic people -- residentially separated from whites. But before the proliferation of mobile devices, it was relatively unknown whether that separation extended to people's regular movements. Candipan is part of a new wave of social science researchers who are using data collected from millions of smartphones and wearable devices to uncover and solve social inequities.

"It's an exciting time in research: Not only do we have location data we didn't have before, but we also have computing capabilities to process that data and perform analyses," Candipan said. "We can now answer questions about segregation and mobility in a systematic way and offer new metrics that can be used in future research."

Using data collected between 2013 and 2015 from Twitter -- where millions of urban Americans leave behind valuable clues about where they eat lunch, work out and socialize each time they post a tweet -- Candipan and her colleagues developed what they called a Segregated Mobility Index, or SMI, for each of 50 cities in the U.S. Candipan explained that each city scored somewhere between 0 and 1 on the SMI. If a city were to score 0, it would indicate total interconnectedness, with residents regularly visiting neighborhoods that don't resemble the racial and ethnic composition of their own with a frequency that corresponds with the diversity of the city. If a city were to score 1, it would indicate total racial segregation, with residents failing to visit any neighborhood that doesn't resemble the racial makeup of their own.
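As a purely illustrative sketch (this is not the published SMI formula, whose exact definition appears in the Urban Studies paper; the function name and numbers below are invented), an index with these endpoints can be built by comparing observed cross-group visits with the number expected under proportional mixing:

```python
# Hypothetical toy index in the spirit of the SMI described above:
# 0 = residents' visits match the city's diversity, 1 = no visits to
# neighborhoods unlike one's own. Not the authors' actual formula.
def toy_smi(observed_cross_visits, total_visits, expected_cross_share):
    """expected_cross_share: fraction of visits expected to land in
    demographically different neighborhoods under proportional mixing."""
    expected = total_visits * expected_cross_share
    if expected == 0:
        return 0.0
    # Shortfall of cross-group visits relative to expectation, floored at 0.
    return max(0.0, 1.0 - observed_cross_visits / expected)

# A city where residents make only 30 of an expected 80 cross-group visits:
print(round(toy_smi(30, 100, 0.8), 2))  # 0.62
```

Anchoring the index to the city's own diversity is what makes scores comparable across cities with very different demographic compositions.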

The team found that cities with the highest SMIs -- in other words, the highest levels of segregated movement -- were those with large populations of Black residents and troubled legacies of racial conflict, including Cleveland, Philadelphia and Atlanta. Detroit's SMI was the highest at 0.5. By contrast, the cities with the lowest SMIs tended to have proportionally smaller Black and Hispanic populations and proportionally larger white populations: Denver, Minneapolis, Seattle. Portland's was the lowest at 0.11. SMIs of the largest, most racially diverse American cities, including New York, Los Angeles and Chicago, fell somewhere in the middle.

Candipan says it's not a coincidence that each city's SMI correlated directly with the size of its non-white population, and particularly its Black population. She and her colleagues believe that one likely explanation for the pattern is "minority group threat" -- a phenomenon in which an area's dominant racial group segregates itself, and excludes other groups, out of fear of one or more non-dominant groups. Scholars have previously cited minority group threat as a major reason for residential segregation.

"Some cities have long histories of preventing people of color from living in majority-white neighborhoods through racist housing policies and racial restrictive covenants," she said. "It makes sense that we see that segregation borne out in people's movement as well. If you live in a city with segregated neighborhoods, you're more likely to move in segregated social circles and spend time in neighborhoods full of people who look like you, and avoid places where you've been excluded."

The bad news, Candipan said, is that the Segregated Mobility Index shows U.S. cities are even more deeply segregated than previously understood. The good news is that the patterns the researchers illuminated in the study could help provide city policymakers with a roadmap toward more integrated, equitable futures.

For example, the fact that residential and mobility segregation seem to go hand-in-hand -- the more segregated a city's housing, researchers found, the higher its SMI -- indicates that providing more affordable housing options would go a long way toward diversifying neighborhoods and movement, since Black and Hispanic Americans are disproportionately likely to live at or below the poverty line.

That connection between residential segregation and high SMI also suggests that increasing public transportation options could help. Candipan said that in some cities, people of color live on the margins of the city limits, miles away from affluent neighborhoods and the city center -- and without good transit options, they can be cut off from job opportunities and cultural experiences in those areas.

"This country has a legacy of racial discrimination at a structural, endemic level, and it is clear that to this day, racial hierarchies remain and white Americans remain on top," Candipan said. "It's time for us to recognize the extent of that segregation and to try to fix it -- because at the very least, everyone should be able to go where they want, when they want."

Credit: 
Brown University

The songs of fin whales offer new avenue for seismic studies of the oceanic crust

CORVALLIS, Ore. - The songs of fin whales can be used for seismic imaging of the oceanic crust, providing scientists a novel alternative to conventional surveying, a new study published this week in Science shows.

Fin whale songs contain signals that are reflected and refracted within the crust, including the sediment and the solid rock layers beneath. These signals, recorded on seismometers on the ocean bottom, can be used to determine the thickness of the layers as well as other information relevant to seismic research, said John Nabelek, a professor in Oregon State University's College of Earth, Ocean, and Atmospheric Sciences and a co-author of the paper.

"People in the past have used whale calls to track whales and study whale behavior. We thought maybe we can study the Earth using those calls," Nabelek said. "What we discovered is that whale calls may serve as a complement to traditional passive seismic research methods."

The paper serves as a proof of concept that could provide new avenues for using data from whale calls in research, Nabelek said.

"This expands the use of data that is already being collected," he said. "It shows these animal vocalizations are useful not just for understanding the animals, but also understanding their environment."

The study's lead author is Vaclav M. Kuna, who worked on the project as a doctoral student at Oregon State and has since completed his Ph.D.

Kuna and Nabelek were studying earthquakes from a network of 54 ocean-bottom seismometers placed along the Blanco transform fault, which at its closest is about 100 miles off Cape Blanco on the Oregon Coast.

They noted strong signals on the seismometers that correlated with whales' presence in the area.

"After each whale call, if you look closely at the seismometer data, there is a response from the Earth," Nabelek said.

Whale calls bounce between the ocean surface and the ocean bottom. Part of the energy from the calls transmits through the ground as a seismic wave. The wave travels through the oceanic crust, where it is reflected and refracted by the ocean sediment, the basalt layer underneath it and the gabbroic lower crust below that.

When the waves are recorded at the seismometer, they can provide information that allows researchers to estimate and map the structure of the crust.

Using a series of whale songs that were recorded by three seismometers, the researchers were able to pinpoint the whale's location and use the vibrations from the songs to create images of the Earth's crust layers.
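A minimal sketch of this kind of source location, assuming a uniform sound speed in seawater of about 1,500 m/s and invented station coordinates (the study's actual processing is far more sophisticated), is a grid search over candidate positions that best explain the arrival-time differences between stations:

```python
import itertools
import math

# Hypothetical illustration of locating a call source from arrival times
# at three stations. Coordinates and geometry are invented for the sketch.
SPEED = 1500.0  # assumed sound speed in seawater, m/s

stations = [(0.0, 0.0), (10000.0, 0.0), (0.0, 10000.0)]  # metres
true_src = (4000.0, 3000.0)
arrivals = [math.dist(true_src, s) / SPEED for s in stations]

def misfit(x, y):
    # Compare predicted arrival-time *differences* (relative to station 0)
    # with the observed ones, so the unknown emission time cancels out.
    pred = [math.dist((x, y), s) / SPEED for s in stations]
    return sum((pred[i] - pred[0] - (arrivals[i] - arrivals[0])) ** 2
               for i in range(1, len(stations)))

# Exhaustive search on a 100 m grid for the best-fitting source position.
best = min(itertools.product(range(0, 10001, 100), repeat=2),
           key=lambda p: misfit(*p))
print(best)  # best grid point, near the true source (4000, 3000)
```

With the source located, the later-arriving crustal reflections and refractions recorded at each station can then be interpreted in terms of layer depths, which is the imaging step the researchers describe.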

Researchers use information from these layers to learn more about the physics of earthquakes in the region, including how sediment behaves and the relationship between its thickness and velocity. Earthquakes shake up the sediment, expelling water and speeding up its settlement.

The traditional method for imaging the crust can be expensive, and permits can be difficult to obtain because the work involves deploying air guns, Nabelek said. The imaging created using the whale songs is less invasive, though overall it is of lower resolution.

Future research could include using machine learning to automate the process of identifying whale songs and developing images of their surroundings, Nabelek said.

"The data from the whale songs is useful but it doesn't completely replace the standard methods," he said. "This method is useful for investigating the Earth's oceanic crust where standard science survey methods are not available."

Credit: 
Oregon State University

Biochemical rules linking RNA-protein interactions and cancer

CLEVELAND--A team of Case Western Reserve University researchers has found a way to measure key characteristics of proteins that bind to RNA in cells--a discovery that could improve our understanding of how gene function is disturbed in cancer, neurodegenerative disorders or infections.

RNA--short for ribonucleic acid--carries genetic instructions within the body. RNA-binding proteins play an important role in the regulation of gene expression. Scientists already knew that the way these proteins function depends on their "binding kinetics," a term that describes how frequently they latch on to a site in an RNA, and how long they stay there.

Until now, researchers could not measure the kinetics of RNA-binding proteins in cells. But the Case Western Reserve researchers answered this longstanding question in RNA biology. The findings open the door to a biochemical understanding of RNA protein interactions in cells.

By understanding the kinetics, researchers can quantitatively predict how an RNA binding protein regulates the expression of thousands of genes, which is critical for developing strategies that target RNA protein interactions for therapeutic purposes.

"The study marks a major step toward understanding how gene function is regulated and how to devise ways to correct errors in this regulation in diseases such as cancer, neurodegenerative disorders or infections," said Eckhard Jankowsky, the study's principal author and a professor of biochemistry at the university's School of Medicine and director of the school's Center for RNA Science and Therapeutics.

Their study, "The kinetic landscape of an RNA binding protein in cells," was published Feb. 10 in Nature. Funding from the National Institute of General Medical Sciences and the National Science Foundation supported the research.

The co-authors, all from Case Western Reserve, are: research associate Deepak Sharma; graduate students Leah Zagore, Matthew Brister and Xuan Ye; Carlos Crespo-Hernández, a chemistry professor; and Donny Licatalosi, an associate professor of biochemistry and member of the Center for RNA Science and Therapeutics.

To measure the kinetics of RNA binding proteins, the researchers used a laser that sends out extremely short (femtosecond) pulses of ultraviolet light to cross-link the RNA-binding protein known as DAZL to its several thousand binding sites in RNAs. (DAZL, short for Deleted in Azoospermia-Like, is involved in germ cell development.) They then used high throughput sequencing to measure the change of the crosslinked RNA over time and determined the binding kinetics of DAZL at thousands of binding sites.

The resulting "kinetic landscape" allowed the researchers to decode the link between DAZL binding and its effects on RNAs.

Credit: 
Case Western Reserve University

Scientists create liquid crystals that look a lot like their solid counterparts

image: Graphic showing the arrangement of the disk-shaped molecules in a monoclinic liquid crystal with two symmetries.

Image: 
Smalyukh Lab

A team at the University of Colorado Boulder has designed new kinds of liquid crystals that mirror the complex structures of some solid crystals--a major step forward in building flowing materials that can match the colorful diversity of forms seen in minerals and gems, from lazulite to topaz.

The group's findings, published today in the journal Nature, may one day lead to new types of smart windows and television or computer displays that can bend and control light like never before.

The results come down to a property of solid crystals that will be familiar to many chemists and gemologists: Symmetry.

Ivan Smalyukh, a professor in the Department of Physics at CU Boulder, explained that scientists categorize all known crystals into seven main classes, plus many more sub-classes--in part based on the "symmetry operations" of their internal atoms. In other words, how many ways can you stick an imaginary mirror inside of a crystal or rotate it and still see the same structure? Think of this classification system as Baskin-Robbins' 31 flavors but for minerals.

To date, however, scientists haven't been able to create liquid crystals--flowing materials that are found in most modern display technologies--that come in those same many flavors.

"We know everything about all the possible symmetries of solid crystals that we can make. There are 230 of them," said Smalyukh, senior author of the new study who is also a fellow of the Renewable and Sustainable Energy Institute (RASEI) at CU Boulder. "When it comes to nematic liquid crystals, the kind in most displays, we only have a few that have been demonstrated so far."

That is, until now.

In their latest findings, Smalyukh and his colleagues came up with a way to design the first liquid crystals that resemble monoclinic and orthorhombic crystals--two of those seven main classes of solid crystals. The findings, he said, bring a bit more order to the chaotic world of fluids.

"There are a lot of possible types of liquid crystals, but, so far, very few have been discovered," Smalyukh said. "That is great news for students because there's a lot more to find."

Symmetry in action

To understand symmetry in crystals, first picture your body. If you place a giant mirror running down the middle of your face, you'll see a reflection that looks (more or less) like the same person.

Solid crystals have similar properties. Cubic crystals, which include diamonds and pyrite, for example, are made up of atoms arranged in the shape of a perfect cube. They have a lot of symmetry operations.

"If you rotate those crystals by 90 or 180 degrees around many special axes, for example, all of the atoms stay in the right places," Smalyukh said.

But there are other types of crystals, too. The atoms inside monoclinic crystals, which include gypsum or lazulite, are arranged in a shape that looks like a slanted column. Flip or rotate these crystals all you want, and they still have only two distinct symmetries--one mirror plane and one axis of 180-degree rotation, or the symmetry that you can see by spinning a crystal around an axis and noticing that it looks the same every 180 degrees. Scientists call that a "low-symmetry" state.

Traditional liquid crystals, however, don't display those kinds of complex structures. The most common liquid crystals, for example, are made up of tiny rod-shaped molecules. Under the microscope, they tend to line up like dry pasta noodles tossed into a pot, Smalyukh said.

"When things can flow they don't usually exhibit such low symmetries," Smalyukh said.

Order in liquids

He and his colleagues wanted to see if they could change that. To begin, the team mixed together two different kinds of liquid crystals. The first was the common class made up of rod-shaped molecules. The second was made up of particles shaped like ultra-thin disks.

When the researchers brought them together, they noticed something strange: Under the right conditions in the lab, those two types of crystals pushed and squeezed each other, changing their orientation and arrangement. The end result was a nematic liquid crystal fluid with symmetry that looks a lot like that of a solid monoclinic crystal. The molecules inside displayed some symmetry, but only one mirror plane and one axis of 180-degree rotation.

The group had created, in other words, a material with the mathematical properties of a lazulite or gypsum crystal--but theirs could flow like a fluid.

"We're asking a very fundamental question: What are the ways that you can combine order and fluidity in a single material?" Smalyukh said.

And the team's creations are dynamic: If you heat the liquid crystals up or cool them down, for example, you can morph them into a rainbow of different structures, each with its own properties, said Haridas Mundoor, lead author of the new paper. That's pretty handy for engineers.

"This offers different avenues for modifying display technologies, which may enhance the energy efficiency and performance of devices like smartphones," said Mundoor, a postdoctoral research associate at CU Boulder.

He and his colleagues are still nowhere near making liquid crystals that can replicate the full spectrum of solid crystals. But the new paper gets them closer than ever before--good news for fans of shiny things everywhere.

Credit: 
University of Colorado at Boulder

Sawfish face global extinction unless overfishing is curbed

Sawfish have disappeared from half of the world's coastal waters and the distinctive shark-like rays face complete extinction due to overfishing, according to a new study by Simon Fraser University researchers, published in Science Advances.

Sawfish, named for their unique long, narrow snouts lined with teeth, called rostra, that resemble sawblades, were once found along the coastlines of 90 countries, but they are now among the world's most threatened families of marine fishes, presumed extinct in 46 of those nations. There are 18 countries where at least one species of sawfish is missing, and 28 more where two species have disappeared.

According to SFU researchers Helen Yan and Nick Dulvy, three of the five species of sawfish are critically endangered, according to the International Union for Conservation of Nature (IUCN) Red List of Threatened Species, and the other two are endangered.

The teeth on their rostra are easily caught in fishing nets. Sawfish fins are among the most valuable in the global shark fin trade, and rostra are also sold as novelties, for medicine and as spurs for cockfighting.

Where sawfishes still occur worldwide is not fully known, but Dulvy warns complete extinction is possible if nothing is done to curb overfishing and to protect threatened habitats, such as mangroves, where sawfish can thrive.

"Through the plight of sawfish, we are documenting the first cases of a wide-ranging marine fish being driven to local extinction by overfishing," Dulvy says. "We've known for a while that the dramatic expansion of fishing is the primary threat to ocean biodiversity, but robust population assessment is difficult for low priority fishes whose catches have been poorly monitored over time. With this study, we tackle a fundamental challenge for tracking biodiversity change: discerning severe population declines from local extinction."

The study recommends that international conservation efforts focus on eight countries (Cuba, Tanzania, Colombia, Madagascar, Panama, Brazil, Mexico and Sri Lanka) where conservation efforts and adequate fishing protections could save the species. It also found Australia and the United States, where adequate protections already exist and some sawfish are still present, should be considered as "lifeboat" nations.

"While the situation is dire, we hope to offset the bad news by highlighting our informed identification of these priority nations with hope for saving sawfish in their waters," says Yan. "We also underscore our finding that it's actually still possible to restore sawfish to more than 70 per cent of their historical range, if we act now."

Credit: 
Simon Fraser University

How messenger substances influence individual decision-making

image: Longitudinal section of the brain: GABA/glutamate concentrations were measured at the locations marked (top: dorsal anterior cingulate cortex; further forward/bottom: ventromedial prefrontal cortex).

Image: 
HHU / Luca Franziska Kaiser

10 February 2021 – A research team of psychologists and physicists from Heinrich Heine University Düsseldorf (HHU) and Otto von Guericke University Magdeburg investigated the neurobiological processes behind different types of decision-making. In the journal Nature Communications, they report that variations in the ratio of two messenger substances affect short-term and long-term strategic decisions in different ways.

As indicated by other studies, different parts of the brain play a key role in different types of decisions. A research team led by Luca Franziska Kaiser and Prof. Dr. Gerhard Jocham from the HHU research group ‘Biological Psychology of Decision Making’, and Dr. Theo Gruendler together with colleagues in Magdeburg analysed the balance of the neuronal messenger substances GABA and glutamate in two types of decision-making. The primary research question was to find out how different concentrations of these substances influence the way in which humans make these decisions.

First, the researchers looked at so-called ‘reward-based decisions’, which involve maximising reward by selecting the better of two options currently available. Luca Kaiser gives a simple example: “Where do I buy coffee on my way to work, depending on the price, quality and whether or not the café is on my way?” Previous results suggest that such decision-making processes are mainly handled in the brain's ventromedial prefrontal cortex (vmPFC).

Unlike these reward-based decisions, ‘patch-leaving decisions’ are about long-term strategic considerations that include a careful balancing of immediate cost against (long-term) gain. An example of such a decision would be whether to move from Düsseldorf to Munich for a job offer. Prof. Jocham explains: “The job in Munich may offer a higher salary and a more interesting role, but may also involve stress and the effort involved in finding a place to live and moving to Munich – as well as higher rents and the loss of social contacts in Düsseldorf.” Thus, there are many factors that influence this type of decision. According to the literature, such decisions are made in the brain’s dorsal anterior cingulate cortex (dACC).

The two messenger substances glutamate and GABA may play a key role. The ratio between them, the so-called E/I balance, indexes the balance between excitatory and inhibitory neural transmission. The researchers used magnetic resonance spectroscopy to measure the concentrations of GABA and glutamate in different cortical areas of human volunteers.

The team then used the data to correlate the ratio of the two messenger substances with participants’ individual decision-making behaviour. In the patch-leaving scenario, subjects with a higher ratio of GABA to glutamate in dACC were quicker to leave a depleting resource. By contrast, people with higher concentrations of glutamate required a greater expected advantage before deciding to abandon their current patch.

In the other scenario, subjects with higher concentrations of GABA relative to glutamate in vmPFC exhibited significantly higher decision accuracy. They more reliably selected the higher-value option.

Luca Kaiser says: “Our results show a correlation between decision-making behaviour and the balance of two messenger substances in the brain. People with a higher ratio of excitation to inhibition in dACC need a bigger incentive to move away from their status quo. By contrast, people with more GABA in vmPFC exhibit greater accuracy for short-term decisions.”

Credit: 
Heinrich-Heine University Duesseldorf