
Understanding Asteraceae: Validation of a Hyb-Seq probe set for evolutionary studies

IMAGE: Diversity of Asteraceae shown by representative species from the genera sampled in this study from six tribes across the family.

Image: 
Jones, K. E., T. Fér, R. E. Schmickl, R. B. Dikow, V. A. Funk, S. Herrando-Moraira, P. R. Johnston, N. Kilian, C. M. Siniscalchi, A. Susanna, M. Slovák, R. Thapa,...

Accurately reconstructing the relationships between different species requires analyzing the sequences of a judiciously selected, and preferably large, sample of different genes. Hybrid capture with high-throughput sequencing, or Hyb-Seq, is a powerful tool for obtaining those gene sequences, but it must be calibrated for each group analyzed to ensure an informative sample of genes is sequenced. Researchers must take a variety of considerations into account when selecting which genes to sequence, and the choices made in gene sampling can affect the outcome of the analysis. In work presented in a recent issue of Applications in Plant Sciences, Dr. Katy Jones and colleagues evaluated the performance of a Hyb-Seq probe set designed for the large and diverse sunflower family, Asteraceae, and found it to be effective in reconstructing relationships at multiple taxonomic levels, from subspecies to tribe.

Genes that would be informative in one taxonomic group may not be in another, for a variety of reasons: the gene may not be present in all species, may evolve too slowly in that group to add meaningful information to a phylogenetic analysis, or may have duplicated to create multiple paralogs. The diverse evolutionary histories within a large group like the Asteraceae make selecting which genes to sequence a challenge. "Asteraceae is the largest angiosperm family and the Asteraceae COS probe set contains 1061 loci, some of which may be informative for some tribes/genera but not to others, for example due to potential paralogy in some groups but not in others," said Dr. Jones, corresponding author of the manuscript, who did this work during her postdoctoral research at Botanischer Garten und Botanisches Museum Berlin.

Dr. Jones and colleagues were interested in how the genes sampled in the 1,061-locus Asteraceae Hyb-Seq probe set would perform in phylogenetic analyses at different taxonomic levels. The researchers tested the probe set on a tribe within the Asteraceae, the Cichorieae. "We were interested to know how analyzing a dataset containing many species across a large tribe compared to a dataset just containing a small species complex may influence phylogenetic inference within that small species complex," said Dr. Jones. "It was quite explorative at the start and over time the questions, ideas, and number of different taxonomic groups grew!"

The researchers found that the Hyb-Seq probe set yielded sequence data that could accurately reconstruct relationships between species at multiple levels, but that how the data were subsampled and analyzed influenced the results. For example, coalescent species-tree approaches produced different results than maximum likelihood methods when long branches (loci that have undergone considerable evolution) were not removed.
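For readers who want to try this kind of screening themselves, the sketch below shows one minimal way to flag loci whose gene trees contain unusually long branches before building a species tree. It is an illustration only, not the authors' pipeline; the directory name, file extension, and branch-length cutoff are hypothetical and would need to be tuned to the dataset.

```python
# Illustrative sketch (not the authors' pipeline): screen per-locus gene trees
# for unusually long branches before species-tree estimation, since coalescent
# and maximum-likelihood results diverged when long branches were left in.
# The directory name, file extension, and cutoff are hypothetical.
import re
from pathlib import Path

BRANCH_LENGTH = re.compile(r":([0-9]*\.?[0-9]+(?:[eE][+-]?[0-9]+)?)")

def max_branch_length(newick: str) -> float:
    """Return the longest branch length found in a Newick string."""
    lengths = [float(m) for m in BRANCH_LENGTH.findall(newick)]
    return max(lengths) if lengths else 0.0

def filter_gene_trees(tree_dir: str, cutoff: float = 1.0) -> list[str]:
    """Keep loci whose gene trees have no branch longer than `cutoff`
    substitutions per site; report the rest for inspection or removal."""
    kept = []
    for path in sorted(Path(tree_dir).glob("*.treefile")):
        newick = path.read_text().strip()
        if max_branch_length(newick) <= cutoff:
            kept.append(path.name)
        else:
            print(f"excluding {path.name}: long branch detected")
    return kept

if __name__ == "__main__":
    retained = filter_gene_trees("gene_trees/", cutoff=1.0)
    print(f"{len(retained)} loci retained for species-tree analysis")
```

Loci flagged this way could then be inspected, trimmed, or excluded before running a coalescent species-tree method.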

As part of this work, Dr. Jones and colleagues present an optimized pipeline for the preparation and analysis of Hyb-Seq data and discuss different wet lab approaches that could influence results, streamlining the process for other research groups. This was a direct response to their own experience with Hyb-Seq. "We were often sending emails back and forth about different things, for example if someone would find that they got poor capture or more of the off-target plastome compared to previous runs," said Dr. Jones. "We'd chat about our wet lab steps or analysis pipelines." Dr. Jones noted the support of the Asteraceae community in this work, and particularly that of the late Vicki Funk.

Dr. Jones hopes that working out the nuances in these sorts of analyses will mean that, in the future, powerful tools like Hyb-Seq will be put to greater use. "I hope this paper encourages more people to use Hyb-Seq data for their research questions," said Dr. Jones, "because the phylogenetic methods are becoming even more accessible."

Credit: 
Botanical Society of America

MASS: An integrative software program for streamlined morphometric analyses of leaves

IMAGE: Typical landmark processing and analysis stages utilized by MASS for Acer saccharum leaves. (A) Stages of image processing for maple tree leaf, displayed from the raw image of a leaf...

Image: 
Chuanromanee, T. S., J. I. Cohen, and G. L. Ryan. 2019. Morphological Analysis of Size and Shape (MASS): An integrative software program for morphometric analyses of leaves. Applications in Plant...

Analysis of leaf shape and size is a cornerstone of botany, and is crucial in answering a variety of ecological, evolutionary, genetic, and agricultural questions. However, the software packages used to conduct these morphometric analyses can be cumbersome, and sometimes require stringing multiple programs together. This slows the rate of progress in the field, creates higher barriers to entry for newcomers, and introduces unnecessary errors into these calculations. In research presented in a recent issue of Applications in Plant Sciences, Dr. Gillian Ryan and colleagues introduced a new software program, MASS (Morphological Analysis of Size and Shape), that they developed to streamline the process of running geometric morphometric analyses on leaf shape.

"We wanted to automate as much of the morphometric analyses as possible," explains Dr. Ryan, Associate Professor of Physics at Kettering University and corresponding author of the paper. "Both to minimize the number of manual measurements, which can be tedious, and to improve the reproducibility of geometric measurements." To that end, MASS includes functions to scale and measure the features of a leaf based on an image, as well as a suite of functions for interrogating that data, including Fourier, Procrustes, and principal component analyses.

The integration of multiple functions into one program will be welcomed by researchers who currently edit files manually to meet formatting requirements between these programs, a process that introduces more opportunity for human error. "I think this is a common issue in many STEM fields where individual groups and teams are developing analysis tools and pipelines for their own work, sometimes in parallel," says Dr. Ryan. "As new users adopt these analysis methods it can be challenging to combine multiple tools that were not necessarily designed to be used together."

Reducing the complexity of running these morphometric analyses has benefits to the field beyond just reducing errors. "One of the primary goals of the MASS project is to streamline these analyses and to make them more accessible to a broader pool of potential users -- including novice researchers and trainees," says Dr. Ryan. "This has the added benefit of making it easier to collect and analyze larger groups of data, which we hope will aid in comprehensive future studies."

The timing is right to scale up morphological studies, too, as large datasets of digitized herbarium specimens come online. "It was important that we were able to demonstrate that MASS could use these [digitized herbarium] specimens so researchers could take advantage of all of those already online. It really opens up the ability to use these digitized specimens to answer interesting research questions," says Dr. Ryan. "With so many large-scale digitization projects having occurred (and currently taking place), the number of specimens available online is incredible."

Credit: 
Botanical Society of America

Apparent rise in loneliness may be due to increasing aging population

Despite some claims that Americans are in the midst of a "loneliness epidemic," older people today may not be any lonelier than their counterparts from previous generations - there just might be more of them, according to a pair of studies published by the American Psychological Association.

"We found no evidence that older adults have become any lonelier than those of a similar age were a decade before," said Louise C. Hawkley, PhD, of NORC at the University of Chicago, lead author of one of the studies. "However, average reported loneliness begins to increase beyond age 75, and therefore, the total number of older adults who are lonely may increase once the baby boomers reach their late 70s and 80s."

The studies were published in the journal Psychology and Aging.

Hawkley and her colleagues used data from the National Social Life, Health and Aging Project and the Health and Retirement Study, two national surveys of older adults that together compared three groups of U.S. adults born in different periods of the 20th century. The first survey, in 2005 to 2006, covered 3,005 adults born between 1920 and 1947; the second, in 2010 to 2011, covered 3,377 people, including those from the previous survey who were still alive and their spouses or partners. The third survey, in 2015 to 2016, comprised 4,777 adults, adding a sample of adults born between 1948 and 1965 to the surviving respondents from the previous two surveys.

The authors examined participants' level of loneliness, educational attainment, overall health on a scale from poor to excellent, marital status and number of family members, relatives and friends they felt close to. They found that loneliness decreased between the ages of 50 and 74, but increased after age 75, yet there was no difference in loneliness between baby boomers and similar-aged adults of earlier generations.

"Loneliness levels may have decreased for adults between 50 and 74 because they had better educational opportunities, health care and social relationships than previous generations," said Hawkley.

Adults over 75 were more susceptible to becoming lonely, possibly due to life factors such as declining health or the loss of a spouse or significant other, according to Hawkley.

"Our research suggests that older adults who remain in good health and maintain social relationships with a spouse, family or friends tend to be less lonely," said Hawkley.

In a similar study, researchers in the Netherlands found that older adults were less lonely than their counterparts from previous generations.

These researchers used data from the Longitudinal Aging Study Amsterdam, a long-term study of the social, physical, cognitive and emotional functioning of older adults. A total of 4,880 people, born between 1908 and 1957, participated.

The study measured people's loneliness, sense of control over situations and life in general, and goal achievement. For example, participants rated loneliness on a scale from 0 (no loneliness) to 11 (severe loneliness) based on feelings such as, "I miss having people around."

Older adults born in later generations were actually less lonely, because they felt more in control and thus most likely managed their lives better, according to Bianca Suanet, PhD, of Vrije Universiteit Amsterdam and lead author of the study.

"In contrast to assuming a loneliness epidemic exists, we found that older adults who felt more in control and therefore managed certain aspects of their lives well, such as maintaining a positive attitude, and set goals, such as going to the gym, were less lonely," said Suanet. "Additionally, as is well-known in loneliness research, participants who had a significant other and/or larger and more diverse networks were also less lonely."

Suanet recommended that older adults take personal initiative to better nurture their social ties, such as making friends to help them overcome increasing loneliness as they age. Also, interventions to reduce loneliness should focus more on bolstering older adults' feelings of control, instead of only offering social activities.

"People must manage their social lives better today than ever before because traditional communities, which provided social outlets, such as neighborhoods, churches and extended families, have lost strength in recent decades," said Suanet. "Therefore, older adults today need to develop problem-solving and goal-setting skills to sustain satisfying relationships and to reduce loneliness."

Seniors may also want to make use of modern technology to maintain meaningful social connections, according to Hawkley.

"Video chatting platforms and the Internet may help preserve their social relationships," said Hawkley. "These tools can help older adults stay mobile and engaged in their communities."

Credit: 
American Psychological Association

Brain activity patterns can predict spoken words and syllables

Neurons in the brain's motor cortex that were previously thought to be active mainly during hand and arm movements also light up during speech, in a way that resembles the patterns of activity linked to those movements, suggest new findings published today in eLife.

By demonstrating that it is possible to identify different syllables or words from patterns of neural activity, the study provides insights that could potentially be used to restore the voice in people who have lost the ability to speak.

Speaking involves some of the most precise and coordinated movements humans make. Studying it is fascinating but challenging, because there are few opportunities to make measurements from inside someone's brain while they speak. This study took place as part of the BrainGate2 Brain-Computer Interface pilot clinical trial, which is testing a computer device that can 'communicate' with the brain, helping to restore communication and provide control of prosthetics such as robotic arms.

The researchers studied speech by recording brain activity from multi-electrode arrays previously placed in the motor cortex of two people taking part in the BrainGate2 study. This allowed them to study the timing and location of the firing of a large population of neurons activated during speech, rather than just a few at a time.

"We first asked if neurons in the so-called 'hand knob' area of the brain's motor cortex are active during speaking," explains lead author Sergey Stavisky, Postdoctoral Research Fellow in the Department of Neurosurgery and the Wu Tsai Neurosciences Institute at Stanford University, US. "This seemed unlikely because this is an area known to control hand and arm movements, not speech. But clues in the scientific literature suggested there might be an overlap."

To test this, the team recorded neural activity from participants during a speaking task in which they heard one of 10 different syllables, or one of 10 different short words, and then spoke the prompted sound after hearing a 'go' cue. Neurons' firing rates changed strongly while participants spoke the words and syllables, and the active neurons were spread throughout the part of the motor cortex the researchers were recording from. Moreover, the firing rates did not change as much when the participants heard the sound, only when they spoke it. This change in neuron activity also corresponded to groups of similar sounds, called phonemes, and to face and mouth movements. This suggests that although control of different body parts is separated at a high level across the motor cortex, the activity of individual neurons overlaps.

Next, the team performed a 'decoding' analysis to see whether the neuron-firing patterns could reveal information about the specific sound being spoken. They found that by analysing neural activity across nearly 200 electrodes, they could identify which of several syllables or words the participant was saying. In fact, certain patterns of neuron activity could correctly predict the sound, or lack of sound, in more than 80% of cases for one of the participants, and between 55% and 61% of cases for the other.
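The sketch below illustrates the general logic of such a decoding analysis on synthetic data; it is not the study's decoder, and the trial count, electrode count, and noise level are arbitrary stand-ins.

```python
# Illustrative decoding sketch with synthetic data (not the study's actual
# decoder or recordings): classify which of 10 prompted sounds was spoken
# from trial-averaged firing rates across ~200 electrodes, using
# cross-validated linear discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_electrodes, n_sounds = 400, 192, 10

labels = rng.integers(0, n_sounds, size=n_trials)           # which sound was spoken
tuning = rng.normal(size=(n_sounds, n_electrodes))           # per-sound firing pattern
rates = tuning[labels] + rng.normal(scale=2.0, size=(n_trials, n_electrodes))

decoder = LinearDiscriminantAnalysis()
accuracy = cross_val_score(decoder, rates, labels, cv=10)     # held-out folds
print(f"decoding accuracy: {accuracy.mean():.2f} (chance = {1 / n_sounds:.2f})")
```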

"This suggests it might be possible to use this brain activity to understand what words people who cannot speak are trying to say," says co-senior author Krishna Shenoy, Howard Hughes Medical Institute Investigator and Hong Seh and Vivian W. M. Lim Professor in the School of Engineering, and Co-Director of the Neural Prosthetics Translational Laboratory (NPTL), at Stanford University.

"With this study we have shown that we can identify syllables and words people say based on their brain activity, which lays the groundwork for synthesising, or producing, text and speech from the neural activity measured when a patient tries to speak," concludes co-senior author Jaimie Henderson, John and Jene Blume - Robert and Ruth Halperin Professor in the Department of Neurosurgery and Co-Director of NPTL, Stanford University. "Further work is now needed to synthesise continuous speech for people who cannot provide example data by actually speaking."

Credit: 
eLife

Could dark carbon be hiding the true scale of ocean 'dead zones'?

image: This is one of the sediment samples gathered from the floor of the Arabian Sea.

Image: 
Sabine Lengger, University of Plymouth

Dead zones within the world's oceans - where there is almost no oxygen to sustain life - could be expanding far quicker than currently thought, a new study suggests.

The regions are created when large amounts of organic material produced by algae sink towards the seafloor, using up the oxygen present in the deep water.

Computer models can predict the spread of these zones, with the aim being to provide an insight into the impact they might have on the wider marine environment.

However, a study published in Global Biogeochemical Cycles suggests that dark carbon fixation - caused by the presence of anaerobic bacteria in the deeper water column - needs to be incorporated into these models.

The research was led by Dr Sabine Lengger, a scientist at the University of Plymouth, and involved researchers from universities in the UK and the Netherlands.

They measured the stable isotopes of organic carbon in sediment cores taken from the floor of the Arabian Sea, one of the world's large natural dead zones, in order to get a clear understanding about what is contributing to the organic matter contained within them.

This value is a mixture of the distinct signatures of all the organisms that produced the carbon - thought to be mostly algae and bacteria living in the oxygen-rich, sunlit surface ocean from which the material sinks.

However, using a distinct biomarker produced by anaerobic bacteria, they suggest that around one fifth of the organic matter on the seafloor could in fact stem from bacteria living in or around these dead zones.
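The arithmetic behind an estimate like this is a two-endmember isotope mass balance; the sketch below works one through with hypothetical δ13C values (the real endmember values come from the biomarker and sediment measurements in the paper).

```python
# Two-endmember isotope mass balance, the kind of budgeting behind the
# "one fifth" estimate. The delta-13C values below are hypothetical
# placeholders, not the measured values from the Arabian Sea cores.
d13c_sediment = -21.0   # bulk organic carbon in the sediment (per mil, assumed)
d13c_algae    = -20.0   # photosynthetic endmember (assumed)
d13c_bacteria = -25.0   # dark-carbon-fixing bacterial endmember (assumed)

# d13c_sediment = f * d13c_bacteria + (1 - f) * d13c_algae  =>  solve for f
f_bacteria = (d13c_sediment - d13c_algae) / (d13c_bacteria - d13c_algae)
print(f"inferred bacterial fraction: {f_bacteria:.0%}")   # 20% with these numbers
```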

In the paper, the scientists say this casts doubt on current predictions around the impact of increasing atmospheric carbon dioxide concentrations, and consequent rising temperatures.

They in fact believe the dead zones could be expanding much faster than previously thought, and that future calculations must take the bacteria into account in order to accurately predict the full impacts of climate change and human activity on the marine environment.

The new study adds to warnings issued at COP25 by the International Union for the Conservation of Nature (IUCN), where it was reported that the number of known hypoxic dead zones has skyrocketed from 45 to 700 sites.

Dr Lengger, an organic and isotope biogeochemist at Plymouth, said: "With global warming, and increased nutrients from rivers, oceanic dead zones are forecast to expand. They can draw down carbon and store it in the deep ocean, but as they expand can have devastating effects on marine life, as well as people that are economically reliant on fisheries. Our study shows that organic matter that sinks to the seafloor is not just coming from the sea surface, but includes a major contribution from bacteria that live in the dark ocean and can fix carbon as well. Existing models could be missing out on a key contribution as a result of which people have underestimated the extent of the oxygen depletion we are to expect in a future, warming world.

"Our findings explain some of the mismatches in carbon budgets when experimental and modelling estimates are compared - and it should therefore be included in biogeochemical models predicting feedbacks to a warming world. It is imperative to refine predictions in biogeochemical models as if dead zones will intensify more than expected (something which has already been observed), this will have severe ecological, economic and climatic consequences."

Credit: 
University of Plymouth

Have you found meaning in life? Answer determines health and well-being

image: Dilip V. Jeste, MD, senior associate dean for the Center of Healthy Aging and Distinguished Professor of Psychiatry and Neurosciences at UC San Diego School of Medicine.

Image: 
Erik Jepson, UC San Diego Publications

Over the last three decades, meaning in life has emerged as an important question in medical research, especially in the context of an aging population. A recent study by researchers at University of California San Diego School of Medicine found that the presence of and search for meaning in life are important for health and well-being, though the relationships differ in adults younger and older than age 60.

"Many think about the meaning and purpose in life from a philosophical perspective, but meaning in life is associated with better health, wellness and perhaps longevity," said senior author Dilip V. Jeste, MD, senior associate dean for the Center of Healthy Aging and Distinguished Professor of Psychiatry and Neurosciences at UC San Diego School of Medicine. "Those with meaning in life are happier and healthier than those without it."

The study, publishing online in the December 10, 2019 edition of the Journal of Clinical Psychiatry, found the presence of meaning in life is associated with better physical and mental well-being, while the search for meaning in life may be associated with worse mental well-being and cognitive functioning. "When you find more meaning in life, you become more contented, whereas if you don't have purpose in life and are searching for it unsuccessfully, you will feel much more stressed out," said Jeste.

The results also showed that the presence of meaning in life exhibits an inverted U-shaped relationship with age, while the search for meaning in life shows a U-shaped relationship. The researchers found that the presence of meaning in life peaks around age 60, which is also when the search for meaning in life is at its lowest point.

"When you are young, like in your twenties, you are unsure about your career, a life partner and who you are as a person. You are searching for meaning in life," said Jeste. "As you start to get into your thirties, forties and fifties, you have more established relationships, maybe you are married and have a family and you're settled in a career. The search decreases and the meaning in life increases."

"After age 60, things begin to change. People retire from their job and start to lose their identity. They start to develop health issues and some of their friends and family begin to pass away. They start searching for the meaning in life again because the meaning they once had has changed."

The three-year, cross-sectional study examined data from 1,042 adults, ages 21 to 100-plus, who were part of the Successful Aging Evaluation (SAGE)--a multi-cohort study of senior residents living in San Diego County. The presence and search for meaning in life were assessed with interviews, including a meaning in life questionnaire where participants were asked to rate items, such as, "I am seeking a purpose or mission for my life" and "I have discovered a satisfying life purpose."

"The medical field is beginning to recognize that meaning in life is a clinically relevant and potentially modifiable factor, which can be targeted to enhance the well-being and functioning of patients," said Awais Aftab, MD, first author of the paper and a former fellow in the Department of Psychiatry at UC San Diego. "We anticipate that our findings will serve as building blocks for the development of new interventions for patients searching for purpose."

Jeste said next research steps include looking at other areas, such as wisdom, loneliness and compassion, and how these impact meaning in life. "We also want to examine if some biomarkers of stress and aging are associated with searching and finding the meaning in life. It's an exciting time in this field as we are seeking to discover evidence-based answers to some of life's most profound questions."

Credit: 
University of California - San Diego

Close friends help macaques survive

video: Grooming is a key social behavior among macaques.

Image: 
Lauren Brent

Close friendships improve the survival chances of rhesus macaques, new research shows.

University of Exeter scientists studied the social lives of female macaques on "Monkey Island" (Cayo Santiago, off Puerto Rico).

Data spanning seven years revealed that females with the strongest social connection to another macaque - measured by factors including time spent together and time grooming each other's fur - were 11% less likely to die in a given year.

"We can't say for certain why close social ties help macaques survive," said lead author Dr Sam Ellis, of Exeter's Centre for Research in Animal Behaviour.

"Having favoured partners could be beneficial in multiple ways, including more effective cooperation and 'exchange' activities such as grooming and forming coalitions.

"Many species - including humans - use social interactions to cope with challenges in their environment, and a growing number of studies show that well-connected individuals are healthier and safer than those who are isolated."

The study focussed on four measures of social connection:

Associating with many other macaques

Having strong connections to favoured partners

Connecting the broader group (being a link by associating with several sub-groups)

A high rate of cooperative activities such as grooming

Having strong connections to favoured partners provided the biggest boost to survival chances, while having many connections was also linked with better survival rates.

Connecting a broader group and engaging in high rates of grooming were found to bring no survival benefits.
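For illustration, the sketch below shows how a survival analysis of this general kind can be set up, with the four social-connection measures above as covariates in a Cox proportional-hazards model. The data are simulated and the model is generic; it is not the study's dataset or exact statistical approach.

```python
# Sketch of a survival analysis of this general kind (synthetic data, not the
# study's dataset or exact model): a Cox proportional-hazards fit of yearly
# survival against the four social-connection measures listed above.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "n_partners":    rng.poisson(5, n),        # associating with many others
    "strength_top":  rng.gamma(2.0, 1.0, n),   # strength of favoured ties
    "betweenness":   rng.uniform(0, 1, n),     # connecting the broader group
    "grooming_rate": rng.gamma(1.5, 1.0, n),   # rate of cooperative grooming
})
# Simulate survival times that improve with stronger favoured ties.
hazard = np.exp(-0.3 * df["strength_top"])
df["years_survived"] = rng.exponential(scale=8 / hazard)
df["died"] = rng.integers(0, 2, n)             # 1 = death observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="years_survived", event_col="died")
cph.print_summary()                            # hazard ratios per covariate
```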

The macaques on Monkey Island have been studied for decades, and the researchers combined existing data with a study of social connections.

"Having many social connections might mean a macaque is widely tolerated - not chased away from food, for example," said Dr Lauren Brent, also of the University of Exeter and the senior author of the study.

"But it seems having 'close friends' brings more important benefits than simply being tolerated.

"These favoured partners have the opportunity to provide each other with mutual support and cooperation, making both parties more likely to survive."

Credit: 
University of Exeter

High above the storm clouds, lightning powers gamma-ray flashes and ultraviolet 'elves'

Using instruments onboard the International Space Station, researchers have observed millisecond pulses of gamma-rays produced by thunderstorms, clarifying the process by which these flashes are made, and discovering that they can produce an ultraviolet emission known as an "Elve." The results help reveal the long-debated process by which terrestrial gamma-ray flashes (TGFs) are generated from thunderstorms. While many are familiar with the brilliantly electric bolts of lightning that crack the sky below thunderstorm clouds, other types of luminous phenomena are known to occur above storm clouds, high in Earth's upper atmosphere. Elves, one type of this phenomenon, are expanding waves of ultraviolet and optical emission in the ionosphere above the thunderstorm. They are triggered by the electromagnetic pulse radiating from lightning discharges in the storm below, but questions remain about this process, as does the question of how thunderstorms generate TGFs.

Torsten Neubert and colleagues observed a TGF and an associated Elve with the Atmosphere-Space Interactions Monitor (ASIM) instruments mounted on the exterior of the International Space Station. The ASIM data captured high-speed observations of the event in optical, ultraviolet, x-ray and gamma-ray bands, which allowed Neubert et al. to identify the sequence of events that generated the TGF. Their results show the TGF was produced by the high electric fields generated just prior to a lightning bolt within the thunderstorm cloud, occurring milliseconds after the onset of the lightning leader, which was key to the TGF's formation. The subsequent lightning flash released an electromagnetic pulse, which induced the Elve visible above the thunderstorm. The scenario may represent prerequisite conditions for generating Elves, the authors say.

Credit: 
American Association for the Advancement of Science (AAAS)

All Bitcoin mining should be environmentally friendly

image: This is the data structure of a block in a blockchain with the proposed protocol.

Image: 
Naoki Shibata

The rise in popularity of cryptocurrencies such as Bitcoin has the potential to change how we view money. At the same time, governments and societies are worried that the anonymity of these cashless transactions could allow criminal activities to flourish. Another less remarked issue is the energy demands needed to mint new coins for these cryptocurrencies. A new report by Associate Professor Naoki Shibata of Nara Institute of Science and Technology presents a blockchain algorithm, which he calls "proof-of-search" (PoS), that retains the attractive features of most cryptocurrencies at a lower cost to the environment.

While the economics of cryptocurrencies gets most of the attention, it is becoming readily apparent that cryptocurrencies have a massive environmental cost. The energy used worldwide to mine Bitcoin alone nearly equals the energy consumption of all of Ireland, while in Iceland, Bitcoin mining consumes more energy than the country's households. In the end, it could be environmental implications, not economic ones, that halt the mainstream adoption of cryptocurrencies.

The basis of all major cryptocurrencies is the blockchain. Ironically, while the blockchain provides pure anonymity to the human user, it is remarkably transparent in all its transactions, meaning the digital owner of the digital coins is clear even if the actual person represented by that digital owner is not.

"Bitcoin uses a proof-of-work [PoW] system to decide the chronological order of transactions. PoW works anonymously because the order is identified by IP addresses," explains Shibata.

When a transaction in the Bitcoin blockchain is made, a user makes a request. PoW performs a series of calculations to confirm the validity of the transaction, calculations that consume energy. In PoS, users of the blockchain are invited to submit a job - an optimization problem - so that this energy also goes toward finding a solution to it.

"There are three kinds of users in the PoS blockchain. The first two are those who want to use the blockchain as a payment system or mine for e-coins, which is the same as PoW. The third group wants to use the PoS blockchain as grid computing infrastructure," says Shibata.

The energy lost in the PoW is redirected to finding an approximate solution to the submitted problem. Thus, energy can be devoted to adding new blocks to the blockchain or to another problem, namely, the optimization proposed by a user, so that the amount of energy used is not reduced, but neither is it wasted.
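The toy sketch below illustrates only the general idea of making the same work loop do double duty: while grinding through nonces in search of a valid block hash, the miner also scores each nonce against a user-submitted objective and keeps the best candidate. It is not Shibata's actual proof-of-search protocol, block format, or incentive mechanism.

```python
# Toy illustration of the general idea only -- not the actual proof-of-search
# protocol. While searching for a nonce that satisfies the ordinary
# proof-of-work condition, the miner also evaluates each nonce as a candidate
# solution to a user-submitted optimization problem and records the best one.
import hashlib

def objective(x: int) -> float:
    """Hypothetical user-submitted problem: minimise this function."""
    return abs((x % 10_007) - 4_321)

def mine(previous_hash: str, transactions: str, difficulty: int = 4):
    best_candidate, best_score = None, float("inf")
    nonce = 0
    target = "0" * difficulty
    while True:
        header = f"{previous_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(header).hexdigest()
        score = objective(nonce)                 # reuse the same work loop
        if score < best_score:
            best_candidate, best_score = nonce, score
        if digest.startswith(target):            # ordinary proof-of-work test
            return {
                "nonce": nonce,
                "hash": digest,
                "search_result": (best_candidate, best_score),
            }
        nonce += 1

block = mine("placeholder_previous_hash", "alice->bob:1.2")
print(block["hash"], block["search_result"])
```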

PoS is the newest of more than a dozen alternative algorithms to PoW that all aim to reduce energy cost. PoW has remained the standard algorithm through which cryptocurrencies operate, because it is extremely decentralized and democratic, which prevents any one user from having an outstanding influence on the currency value.

"The problem with the alternatives is that they lose their democracy or are more vulnerable to outside attacks," notes Shibata.

By adding the feature of an approximate solution, PoS also invites possible corruption to which PoW is immune. Therefore, as a deterrent, PoS demands that the user who submits the problem be the one who pays the user who proposes the solution. This prevents users from colluding together to submit problems for which they already know the solution.

Another appeal of PoW is its robustness. PoS preserves this robustness by introducing miniblocks each time an optimization problem is submitted.

Shibata envisions that the optimization problems solved by redirecting the otherwise wasted energy with PoS will span diverse fields, from medicine to the beginnings of the universe.

"PoS could help solve problems in protein folding, the dynamics of interstellar formations and finance," he says.

Credit: 
Nara Institute of Science and Technology

Justified and unjustified movie violence evokes different brain responses, study finds

image: The figure shows intersubject correlation (ISC) for both character and action segments. Significant ISC maps for (A) justified and (B) unjustified movie violence for each 30-min period. Both conditions showed significant ISCs in occipital and temporal cortex associated with visual and auditory processing. The justified condition specifically elicited significant ISC in the ventromedial prefrontal cortex (vmPFC), while the unjustified condition specifically elicited significant ISC in frontal regions including lateral orbital frontal cortex (lOFC).

Image: 
Frontiers in Behavioral Neuroscience/Annenberg Public Policy Center

The gun violence seen in popular PG-13 movies aimed at children and teenagers has more than doubled since the rating was introduced in 1984. The increasing on-screen gun violence has raised concerns that it will encourage imitation, especially when it is portrayed as "justified."

What was not clear until now is whether justified and unjustified violence produce different brain responses.

In a new study, researchers at the University of Pennsylvania find that scenes of unjustified and justified violence in movies activate different parts of the adolescent brain. This research is the first to show that when movie characters engage in violence that is seen as justified, there is a synchronized response among viewers in a part of the brain involved in moral evaluation, the ventromedial prefrontal cortex (vmPFC), suggesting that viewers see the violent behavior as acceptable for self- or family protection.

Performing fMRI scans of more than two dozen late adolescents who watched scenes of movie violence, the researchers also found that scenes of unjustified violence evoked a synchronized response in a different part of the brain. Activating that area of the brain, the lateral orbital frontal cortex (lOFC), is consistent with a disapproving response to the violence.

The research, led by a team at the Annenberg Public Policy Center (APPC) of the University of Pennsylvania, was published in the journal Frontiers in Behavioral Neuroscience as "Intersubject Synchronization of Late Adolescent Brain Responses to Violent Movies: A Virtue-Ethics Approach."

"What this response suggests is that not all movie violence produces the same response," said senior author Dan Romer, APPC's research director. "Adolescents disapprove of movie violence that is seen as unjustified, which is consistent with what parents have reported in past studies. But when the violence seems justified, adolescents' brains appear to find it much more acceptable than when it is not."

Romer said the growth of movie violence and particularly the depiction of justified gun violence in movies raises concerns. "By popularizing the use of guns in a justified manner, Hollywood may be cultivating approval of this kind of entertainment," he said.

Viewing movies in an MRI scanner

For the study, researchers recruited a group of 26 college students ages 18 to 22, divided between men and women. All regularly watched violent movies and 70 percent played active shooter video games.

The researchers performed functional magnetic resonance imaging (fMRI) scans on the participants as they viewed movie clips. Each participant was shown eight pairs of 90-second movie clips from PG-13 or R-rated films. The clips featured a scene of characters talking followed by scenes of the characters engaged in violence. Half of the clips showed scenes of justified violence, the other unjustified violence. The order of scenes varied. The scenes of justified violence showed major characters engaging in the defense of friends, family or themselves, while the unjustified violence showed characters harming others out of cruelty or ill will. Prior evaluation of the scenes by parents and young adults confirmed that the scenes differed in justification for violence.

The researchers edited the scenes from the R-rated films to remove the graphic effects of the violence such as blood and suffering so that the scenes were more directly comparable to the violence portrayed in PG-13 movies.

The scenes of justified violence came from the PG-13 movies "Live Free or Die Hard" (2007), "White House Down" (2013), "Terminator Salvation" (2009), and "Taken" (2008). The clips of unjustified violence came from the PG-13 movies "Skyfall" (2012) and "Jack Reacher" (2012) and the R-rated films "Sicario" (2015) and "Training Day" (2001).

A synchronized brain response

The researchers found that watching the movie clips produced a synchronous response in brain activity among the study participants at the same points during the movie clips. But the brain activity differed when the participants were watching scenes of justified or unjustified violence.

"It was exciting to observe a synchronized reaction to these movie clips," said the study's lead author, Azeez Adebimpe, a former postdoctoral fellow at the Annenberg Public Policy Center. "Our findings clearly show that violent movies have similar effects on viewers."

The researchers found that scenes of unjustified violence evoked greater synchrony in a region of the brain that responds to aversive events (lOFC). They also observed synchrony in a region that responds to the experience of pain in either oneself or others, the insular cortex. That finding was consistent with an empathetic response to the pain experienced by the victims of this kind of violence, again suggesting that the violence was seen as unacceptable.
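The statistic behind these results, intersubject correlation, is simple to state: for a given brain region, correlate each pair of viewers' time courses and average. The sketch below computes it on synthetic data; the subject count and signals are placeholders, not the study's recordings.

```python
# Minimal sketch of intersubject correlation (ISC) on synthetic data, the
# statistic behind the "synchronized response" reported here (subject count
# and signals are placeholders, not the study's data).
import numpy as np

def isc(timeseries: np.ndarray) -> float:
    """Mean pairwise Pearson correlation of one region's time course
    across subjects. `timeseries` has shape (n_subjects, n_timepoints)."""
    corr = np.corrcoef(timeseries)               # subjects x subjects matrix
    upper = corr[np.triu_indices_from(corr, k=1)]
    return float(upper.mean())

rng = np.random.default_rng(3)
shared_signal = np.sin(np.linspace(0, 20, 300))          # stimulus-driven component
subjects = shared_signal + rng.normal(scale=1.0, size=(26, 300))

print(f"ISC in an engaged region: {isc(subjects):.2f}")
print(f"ISC with no shared signal: {isc(rng.normal(size=(26, 300))):.2f}")
```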

Justified violence and the trolley problem

The ventromedial prefrontal cortex is the part of the brain that is activated when an individual is presented with a moral dilemma such as the trolley problem. This problem poses an ethical dilemma in which a runaway train is headed toward five people who are on the tracks. You can pull a switch and divert the train, which will kill one person on the alternate track - or you can take no action as the train races ahead toward five people. Most people see it as appropriate to save the five people, and the ventromedial prefrontal cortex tends to respond when they make that judgment.

In a different version of the problem, you can only stop the train from killing others by pushing an innocent bystander onto the tracks, which most people are unwilling to do. Research has shown that people who lack a functioning ventromedial prefrontal cortex are more willing to push an innocent person to death to save lives. The present study's results are consistent with this research in showing that the same brain region responds when the violence appears justified.

The findings are also consistent with a form of ethics based on evaluating an actor's character and motives, known as virtue ethics. Virtue ethics proposes that people judge behavior as acceptable - even when it might otherwise be seen as prohibited, as in harming others - when an actor has virtuous motives for the behavior. In the movie scenes with justified violence, the young viewers in the study rated the use of guns by the main character as more acceptable and their brains displayed a similar response.

The current research is consistent with previous APPC research that found that parents were more willing to let their children see the same movie clips when the violence appeared to be justified than when it had no socially redeeming purpose. This research also found that parents became more accepting of justified movie violence as they watched successive movie scenes that showed such violence.

Will justified screen violence encourage imitation?

In this MRI study, the researchers concluded: "The finding that brain synchrony discriminated between justified and unjustified violence suggests that even youth who are attracted to such content are sensitive to its moral implications. It remains for future research to determine whether the brain responses to justified film violence we have observed foster tendencies to imitate or consider the use of weapons for self-defense or other justified purposes. Laboratory research finds that justified film violence can encourage aggressive responses in response to provocation... What is less clear is whether the use of guns in movie portrayals of justified violence encourages their acquisition and use for purposes of self-defense."

In addition to Romer and Adebimpe, who is currently a postdoctoral fellow in Psychiatry at Penn, the study was conducted by Penn Bioengineering Professor Danielle S. Bassett, who is also an APPC distinguished research fellow, and Patrick E. Jamieson, director of APPC's Annenberg Health and Risk Communication Institute.

Credit: 
Annenberg Public Policy Center of the University of Pennsylvania

June rainfall in the lower Yangtze River Basin can be predicted four months ahead

image: Regression of June winds and rainfall from (a) observations, and (b) ensemble seasonal predictions initialized in March, onto observed preceding winter Niño3.4 SST variations.

Image: 
Gill Martin

Millions of people in China depend on the rainfall brought by the monsoon during summer for their livelihoods and water supplies. Although there have been recent studies demonstrating that monsoon rainfall over the summer as a whole can be predicted, skilful predictions on shorter time scales have not yet been demonstrated. A recent study, published in Advances in Atmospheric Sciences, suggests that such predictions may now be possible.

"Our analysis shows that we can use our seasonal forecasting system to predict whether the rainfall in June over the middle and lower Yangtze River Basin as a whole is likely to be more, or less, than average," said Gill Martin, a senior researcher at the Met Office, UK, and the study's lead author, "and we may be able to provide reliable predictions up to four months ahead, i.e. from February onwards."

Much of the rainfall in this region during June is contributed by the mei-yu rain band, a "stationary phase" of the seasonal progression of the East Asian Summer Monsoon (EASM) that is present in the middle and lower Yangtze River Basin between the second week of June and early July. Variations in mei-yu rainfall are linked to a large-scale atmospheric circulation pattern over the Western North Pacific, whose position and strength determine the south-westerly monsoon flow over southern China in early summer.

Previous studies have shown that El Niño-Southern Oscillation (ENSO), the dominant mode of variability in the tropical Pacific, is one of the most important factors affecting the EASM. Sea surface temperature (SST) changes associated with El Niño in the tropical Pacific during the previous winter contribute to altering the atmospheric circulation over Eurasia and the Western Pacific in such a way as to increase the EASM rainfall in early summer. The new study shows that the Met Office's seasonal forecasting system represents this relationship, and that this is the main source of skilful rainfall prediction for June in the middle/lower Yangtze River Basin.
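The statistical core of such a forecast is a regression of June rainfall on the preceding winter Niño3.4 index. The sketch below illustrates that relationship with synthetic numbers; a real application would use observed SST indices, rain-gauge or reanalysis rainfall, and the ensemble forecast output rather than the made-up values here.

```python
# Illustrative regression of June Yangtze-basin rainfall on the preceding
# winter Nino3.4 SST index, the relationship the forecast skill rests on.
# All numbers are synthetic stand-ins, not observed or forecast data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
years = np.arange(1993, 2019)
nino34_djf = rng.normal(0.0, 1.0, size=years.size)                  # winter Nino3.4 anomaly (degC)
june_rain = 180 + 25 * nino34_djf + rng.normal(0, 30, years.size)   # mm; El Nino winters -> wetter June

fit = stats.linregress(nino34_djf, june_rain)
print(f"slope: {fit.slope:.1f} mm per degC, r = {fit.rvalue:.2f}")

# A February outlook for the coming June, given this winter's observed index:
this_winter = 1.2
print(f"predicted June rainfall anomaly: {fit.slope * this_winter:+.0f} mm vs. the mean")
```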

"The ability to predict the June rainfall in the middle and lower Yangtze River Basin up to four months in advance offers exciting possibilities for providing useful, early information to contingency planners on the availability of water during the summer season", says Dr Martin. "We would encourage other forecasting centers to investigate the skill for predicting June rainfall in their operational forecasting systems."

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Could we cool the Earth with an ice-free Arctic?

The Arctic region is heating up faster than any other place on Earth, and as more and more sea ice is lost every year, we are already feeling the impacts. IIASA researchers explored strategies for cooling down the oceans in a world without this important cooling mechanism.

Scientists estimate that summer sea ice in the Arctic Ocean will be largely gone within a generation. This is bad news for the world, as ice and snow reflect a high proportion of the sun's energy into space, thus keeping the planet cool. As the Arctic loses snow and ice, bare rock and water become exposed and absorb more and more of the sun's energy, making it warmer - a process known as the albedo effect.

Given that it would be very difficult to reverse this trend, even if we do manage to reach the 1.5°C target set out in the Paris Agreement, IIASA researchers explored what would happen if we were to reverse this logic and make the Arctic region a net contributor to cooling down the world's oceans and by extension the Earth. In their new paper published in the Springer journal SN Applied Sciences, the authors analyzed what the Arctic's contribution to global warming would be if there were no ice cover, even throughout the winter months. They also looked at ways the world could adapt to the resulting new climate conditions.

"The Arctic Ocean ice cover works as a strong insulator, impeding the heat from the ocean below to warm up the atmosphere above. If this ice layer were however removed, the atmosphere would increase in temperature by around 20°C during the winter. This increase in temperature would in turn increase the heat irradiated into space and, thus cooling down the oceans," explains study lead-author Julian Hunt, who currently holds a postdoc fellowship at IIASA.

According to the authors, the main factor that contributes to maintaining the Arctic sea ice cover is the fact that the superficial Arctic Ocean (the top 100 meters) has a salinity that is around 5 grams per liter (g/l) lower than that of the Atlantic Ocean. This stops the Atlantic Ocean from flowing above the cold Arctic waters. The authors argue that increasing the salinity of the Arctic Ocean surface would allow the warmer and less salty North Atlantic Ocean current to flow over the surface of the Arctic Ocean, thereby considerably increasing the temperature of the Arctic atmosphere, and releasing the ocean heat trapped under the ice. The researchers propose three strategies to achieve this:

The first strategy entails reducing the flow of water from major rivers from Russia and Canada into the Arctic, by pumping the water to regions in the USA and Central Asia where it could be used to increase agricultural production in regions with low water availability. As a second strategy, the researchers suggest creating submerged barriers in front of Greenland glaciers to reduce the melting of the Greenland ice sheets, while the third strategy would be to pump water from the superficial Arctic Ocean to the deep ocean so that it is mixed with the more salty water below. The pumps in such a project would run on electricity generated from intermittent solar and wind sources, allowing a smoother implementation of these technologies.

The researchers' analysis shows that with an average power input of 116 GW over 50 years of operation, these strategies could reduce the salinity difference between the superficial Arctic Ocean waters and the Atlantic to 2 g/l. This would increase the flow of the North Atlantic current into the Arctic and considerably reduce the ice cover over the Arctic during the winter.
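For scale, a rough back-of-envelope conversion of that power figure (using only the numbers quoted above, plus an approximate global electricity total) looks like this:

```python
# Back-of-envelope scale check for the proposed pumping schemes (power and
# duration from the paragraph above; the comparison figure is approximate).
average_power_gw = 116
years = 50

hours_per_year = 8_760
energy_twh_per_year = average_power_gw * hours_per_year / 1_000   # GW*h -> TWh
total_twh = energy_twh_per_year * years

print(f"about {energy_twh_per_year:,.0f} TWh per year, {total_twh:,.0f} TWh over {years} years")
# Roughly 1,000 TWh/yr, i.e. on the order of a few percent of current global
# electricity generation (~25,000-30,000 TWh/yr).
```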

Despite the concerns about the loss of sea ice in the Arctic, the authors point out that there are several advantages to an ice-free Arctic scenario: Ships would for example be able to navigate through the Arctic Ocean throughout the whole year, which would reduce the distance for shipping goods from Asia to Europe and North America. In addition, the temperature in the Arctic would increase during the winter months, which would reduce the demand for heating in Europe, North America, and Asia during the winter. The frequency and intensity of hurricanes in the Atlantic Ocean could also be reduced due to the reduction in temperature in Atlantic Ocean waters. On top of this, the ice-free waters could also help to absorb more CO2 from the atmosphere.

Hunt however cautions that while there are benefits to an ice-free Arctic, it is difficult to predict what the impact will be on global sea levels, as the higher Arctic temperatures would result in increased melting of the Greenland ice sheet. It is also difficult to predict the changes in the world climate, as the polar circulation will be considerably weakened during the winter.

"Although it is important to mitigate the impacts from climate change with the reduction in CO2 emissions, we should also think of ways to adapt the world to the new climate conditions to avoid uncontrollable, unpredictable and destructive climate change resulting in socioeconomic and environmental collapse. Climate change is a major issue and all options should be considered when dealing with it," Hunt concludes.

Credit: 
International Institute for Applied Systems Analysis

Improving the accuracy of climate model projections with emergent constraints

image: The concept of emergent constraints aims to find relationships between intermodel variations of some aspect of the recent observable climate and the uncertainties of particular future climate predictions. The idea is that the observations would then inform the least biased models and, by inference, the more likely climate change projections. The cover image is a graphical representation of this concept that questions which projections of the future climate simulated by Earth system models are more credible.

Image: 
Advances in Atmospheric Sciences

The increase in carbon dioxide concentration in the atmosphere has warmed the Earth since the beginning of the industrial era. Climate models try to project how much this warming trend will continue, but they differ in their global-mean temperature response to increasing concentrations of greenhouse gases. This is called climate sensitivity, according to Dr Florent Brient, a postdoctoral research scientist at the Centre National de Recherches Météorologiques (Météo-France/CNRS), France, and author of a recently published study that reviews a new method, known as emergent constraints, which attempts to use information about the current climate to constrain the evolution of climate in the future.

The study's findings were published on December 10th 2019 in Advances in Atmospheric Sciences.

In order to accurately predict how much the Earth will warm in the future, one needs to know how atmospheric carbon dioxide concentrations will evolve, together with an accurate assessment of climate sensitivity. Should carbon dioxide levels double, models predict the Earth would warm directly by about 1.2 degrees Celsius, a change that would induce further changes that may either temper or amplify the warming. For example, an increase in warming increases atmospheric moisture, which provides a positive feedback that drives further warming, since water vapor is a potent greenhouse gas. Warming also melts sea ice and snow, which reflect sunlight away from the Earth -- known as the albedo effect. On the other hand, an increase in cloud cover, for example, could provide negative feedback, as the lowest clouds also reflect sunlight away from the Earth. However, since clouds are tied to dynamics on a range of scales and their radiative effects vary considerably with height, their effects on surface warming are complex. Consequently, clouds remain a major source of model disagreement.

In this paper, Dr Brient reviews the concept of emergent constraints and describes published emergent constraints, which reduce uncertainties in various simulated climate changes. The author discusses potential connections between emergent constraints and the influence of statistical methodologies in the quantification of these more likely projections. Finally, the author tries to verify whether emergent constraints can collectively reduce the spread in climate sensitivity provided by climate models.
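The statistical recipe behind an emergent constraint can be sketched in a few lines: regress the uncertain future quantity on an observable present-day predictor across models, then condition on the observed value of that predictor. The example below uses synthetic numbers, not CMIP output, and ignores the regression's residual uncertainty, one of the methodological choices Brient notes can matter.

```python
# Schematic of the emergent-constraint recipe (all numbers are synthetic
# stand-ins, not CMIP output): regress a future quantity, here climate
# sensitivity, on an observable present-day predictor across models, then
# condition on the observed value of that predictor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_models = 30
observable = rng.normal(0.5, 0.15, n_models)                          # present-day metric per model
sensitivity = 2.0 + 2.5 * observable + rng.normal(0, 0.3, n_models)   # degC per CO2 doubling

fit = stats.linregress(observable, sensitivity)

obs_mean, obs_sigma = 0.55, 0.05                      # hypothetical observation and uncertainty
constrained = fit.intercept + fit.slope * obs_mean
constrained_sigma = abs(fit.slope) * obs_sigma        # ignores regression residual spread

print(f"unconstrained spread: {sensitivity.std():.2f} degC")
print(f"constrained estimate: {constrained:.2f} +/- {constrained_sigma:.2f} degC")
```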

"Emergent constraints are useful for narrowing the spread of climate projections and for guiding the development of more realistic climate models," said Brient. "However, they are sensitive to various factors, such as the way statistical inference has been performed or how observational uncertainties have been obtained. Therefore, more consistency across emergent constraints are needed for better cross-validation of more likely projections."

"The upcoming sixth phase of the Coupled Model Intercomparison Project (CMIP6) will most likely boost the enthusiasm of emergent constraints, by allowing a better understanding of certain climate phenomena and a further narrowing of their uncertain projections," said Brient. "However, this calls for sharing statistical methods used for these quantifications, as we have done in this paper," he adds.

According to Brient, two questions remain to be solved. Firstly, "what are the connections between the different predictors used for narrowing projections of a given climate change? A better understanding of the links between circulation and clouds would help make progress in this regard," said Brient. And secondly, "how can the spread in climate projections be reliably narrowed if emergent constraints disagree with each other? This suggests that some emergent constraints are more trustworthy than others, but this remains to be investigated."

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Technologies and scientific advances needed to track methane levels in atmosphere

image: This is methane flaring in the Bakken oil field of North Dakota.

Image: 
Matt Rigby, University of Bristol

Understanding what influences the amount of methane in the atmosphere has been identified by the American Geophysical Union to be one of the foremost challenges in the earth sciences in the coming decades because of methane's hugely important role in meeting climate warming targets.

Methane is the second most important human-made greenhouse gas and is rising in the atmosphere more rapidly than predicted for reasons that are not well-understood. It is roughly 30 times more potent than carbon dioxide for warming the Earth over a century timescale.

Reductions in global methane emissions are needed to meet global climate warming targets. The goal of the 2015 Paris Agreement is to keep global average temperature increases well below 2°C above pre-industrial levels in the year 2100.

Success hinges on individual countries reducing their greenhouse gas emission through their Nationally Determined Contributions, which will be evaluated every five years in a global stock-take.

A new paper published today, led by climate scientists from the University of Bristol, explains the new technologies and scientific advances needed to track progress on these reductions.

Around half of the methane that is emitted to the atmosphere comes from natural sources, including wetlands and geological seeps.

The remainder is emitted from agriculture, fossil fuel use, and other human activities. Because methane is such a potent absorber of radiation in the atmosphere, and because it decays in the atmosphere faster than carbon dioxide, planned atmospheric concentration pathways that meet the Paris Agreement seek to cut anthropogenic methane emissions by almost half of present-day levels.

The 'budget' of atmospheric methane is the sum of the different individual sources and 'sinks' (the removal of methane from the atmosphere) that alter the total amount of methane in the atmosphere.
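Conceptually, the budget can be written as a one-box model in which the atmospheric burden grows with total emissions and decays with a characteristic lifetime set by the sinks. The sketch below integrates such a model with round, order-of-magnitude numbers that are not the paper's budget terms.

```python
# One-box sketch of the atmospheric methane budget described above: the burden
# changes as emissions (sources) minus loss to sinks, with the loss treated as
# burden / lifetime. Values are round numbers of the right order of magnitude,
# not the paper's budget terms.
emissions_tg_per_yr = 560.0      # total sources, Tg CH4 per year (approximate)
lifetime_yr = 9.0                # atmospheric lifetime against all sinks (approximate)
burden_tg = 5000.0               # current atmospheric burden, Tg CH4 (approximate)

dt = 0.1                         # years
for step in range(int(30 / dt)):                     # integrate 30 years forward
    sink_tg_per_yr = burden_tg / lifetime_yr
    burden_tg += (emissions_tg_per_yr - sink_tg_per_yr) * dt

equilibrium = emissions_tg_per_yr * lifetime_yr
print(f"burden after 30 yr: {burden_tg:.0f} Tg; steady state: {equilibrium:.0f} Tg")
```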

Dr Anita Ganesan, from the University of Bristol's School of Geographical Sciences and lead author of the paper, said: "There are major challenges in our ability to quantify this budget, and these challenges make it difficult to assess whether the emission reductions pledged for the Paris Agreement are actually occurring."

The new study highlights exciting new technologies being used to measure methane in the environment, discusses the current limitations in the major areas of methane science and proposes advances that, over the next decade, would significantly improve our ability to understand the mechanisms causing changes in atmospheric methane.

Some of these new technologies include the ability to measure rarer isotopic variants of methane, which provides new capability to pinpoint the sources of emissions; satellites, which are mapping methane concentrations globally with unprecedented detail; and systems to monitor possible 'feedback' emissions from permafrost.

Interpreting these new measurements through state-of-the-art model simulations of the atmosphere will allow emissions to be more accurately quantified from measurements in the atmosphere. The study also highlights the key advances needed for countries to be able to better inventory their methane emissions, for example, by being able to track the composition of waste sent to landfills, or to monitor emissions from leaks in the oil and gas industry.

The three main aspects of methane science covered include atmospheric measurements of methane and its isotopic variations, models that simulate the processes behind methane emissions and the quantification of the various components of the methane budget from atmospheric measurements. Improvements in these three areas will together result in more accurate quantification of methane emissions, which is a vital step toward knowing if we are on track to meeting the Paris Agreement.

Dr Matt Rigby from the University of Bristol's School of Chemistry, is a co-author on the study. He added: "We can't explain with very much confidence the factors that have resulted in large variations in the atmosphere in the past few decades, and with that level of present uncertainty - knowing how to control these concentrations to be in line with climate targets is an even bigger challenge."

Dr Ganesan said: "Since the Paris Agreement, there has unfortunately been a large divergence between some of the planned concentration pathways that would meet Paris targets and actual methane concentrations in the atmosphere.

"The impact is that revised pathways now call for cuts in methane concentrations to occur later and by a much larger amount. Each year that reductions are delayed implies a greater reduction for the future. Until we understand what controls the variations in atmospheric concentrations of methane, we risk falling farther behind."

Credit: 
University of Bristol

Middle-income countries are hardest hit by cardiovascular disease in Europe

Sophia Antipolis, 10 December 2019: Middle-income countries shoulder the bulk of morbidity and mortality from cardiovascular disease (CVD) in Europe, according to a major report published today in European Heart Journal, the flagship journal of the European Society of Cardiology (ESC).1

The document details the burden of CVD in the 57 ESC member countries,2 the infrastructure and human resources available for treatment, and the vast differences between states in access to modern diagnostics and therapies.

CVD remains the most common cause of death in Europe and around the world, accounting for 47% of all deaths in women and 39% of all deaths in men in ESC member countries. During the past 27 years, there has been only a modest decline in CVD in Europe, and in 11 countries there has been no drop at all. Likewise, the incidence of CVD's major components, coronary heart disease (narrowed arteries supplying the heart with blood) and stroke, has shown only minor reductions.

Compared to high-income countries, middle-income countries have:

More premature death (before 70 years) due to CVD.

A greater proportion of potential years of life lost due to CVD.

Higher age-standardised incidence and prevalence of coronary heart disease and stroke.

Three times more years lost due to CVD ill-health, disability, or early death.

"The statistics emphasise the need for concerted application of CVD prevention policies, particularly in middle-income countries where the need is greatest," said Professor Panos Vardas, a past ESC president and current chief strategy officer of the ESC's European Heart Agency in Brussels.

"Middle-income countries are less able to meet the costs of contemporary healthcare than high-income countries leaving patients with no access to modern cardiovascular facilities," he added. "The availability of transcatheter valve implantation, complex techniques for treating atherosclerotic coronary heart disease, and heart transplantation varies hugely."

Analyses according to sex show that compared to women, men have:

Higher age-standardised CVD mortality rates per 100,000 people in both high-income (283 for women vs. 410 for men) and middle-income countries (790 for women vs. 1,022 for men).

Higher age-standardised incidence per 100,000 inhabitants (132.0 for women vs. 235.9 for men) and prevalence per 100,000 people (1,895 for women vs. 2,665 for men) of coronary heart disease.

Higher age-standardised incidence per 100,000 people (130.3 for women vs. 159.9 for men) and prevalence per 100,000 people (1,272 for women vs. 1,322 for men) of stroke.

Almost twice as many years lost due to CVD ill-health, disability, or early death (3,219 vs. 5,925 per 100,000 people in women and men, respectively).

"CVD is the most common cause of premature death (before 70) in men, whereas in women the most common cause is cancer," noted Professor Adam Timmis, head of the report writing team.

Other notable statistics:

Coronary heart disease and stroke accounted for 82% of years lost due to CVD ill-health, disability, or early death.

Age-standardised years lost due to CVD ill-health, disability, or early death have been in steep decline over the past 27 years, with just two middle-income countries recording an increase.

Professor Timmis said: "The potential reversibility of risk factors, including high blood pressure and elevated cholesterol, and unhealthy behaviours such as sedentary lifestyles and poor diets provide a huge opportunity to address the health inequalities documented in this report."

But he added: "The World Health Organization's target3 for a 25% relative reduction in mortality from CVD, cancer, diabetes, and chronic respiratory disease by 2025 is unlikely to be achieved, with the modest downward CVD trends documented in this report concealing alarming increases in mortality in some member countries."

Credit: 
European Society of Cardiology