Working-age Americans dying at higher rates, especially in economically hard-hit states

Mortality rates among working-age Americans continue to climb, driving a decline in U.S. life expectancy that is hitting some regions of the country especially hard, according to a Virginia Commonwealth University study set to publish Tuesday in JAMA. The report, "Life Expectancy and Mortality Rates in the United States, 1959-2017," is one of the most comprehensive 50-state analyses of U.S. mortality.

Deaths among Americans ages 25 to 64 are increasing, particularly in Rust Belt states and Appalachia. These deaths, which have fueled a decline in U.S. life expectancy since 2014, are linked to several major causes of death. Compared to the 1990s, working-age adults are now more likely to die before age 65 from drug overdoses, alcohol abuse and suicides -- sometimes referred to as "deaths of despair"-- but also from an array of organ system diseases. Mortality rates have increased for 35 causes of death, said lead author Steven Woolf, M.D., director emeritus of the VCU Center on Society and Health.

"Working-age Americans are more likely to die in the prime of their lives," Woolf said. "For employers, this means that their workforce is dying prematurely, impacting the U.S. economy. More importantly, this trend means that children are losing their parents and our children are destined to live shorter lives than us."

The paper calls for a better understanding of the root causes of these deaths, including the role of drugs, obesity, the health care system, stress and the economy. The impact of rising death rates is far-reaching from a public health perspective, as well as for the nation's future, said Woolf, a professor in the Department of Family Medicine and Population Health in the VCU School of Medicine.

The study shows that some of the largest increases in working-age mortality since 2010 occurred among women and adults without a high school diploma. The industrial Midwest and other regions that have been hard-hit by changes in the economy since the 1980s, such as job losses in manufacturing and other sectors, are experiencing the largest increases in mortality. The study lists socioeconomic pressures and unstable employment among possible explanations for increased working-age mortality in these areas.

Woolf and his VCU team published a paper in 2018 that showed that the increase in working-age mortality was affecting all racial groups. The new JAMA report uses data from the U.S. Mortality Database and the Centers for Disease Control and Prevention's Wide-ranging Online Data for Epidemiologic Research, going back to 1959, to examine which regions and states have been most affected.

Woolf and co-author Heidi Schoomaker, now at Eastern Virginia Medical School, found that life expectancy decreased in some regions, such as northern New England (Maine, New Hampshire and Vermont) and the Ohio Valley (Indiana, Kentucky, Ohio and Pennsylvania), but increased along the Pacific coast. The four Ohio Valley states are responsible for one-third of excess deaths -- deaths above the number that national U.S. mortality rates would predict -- since 2010.

Eight of the 10 states with the largest number of excess deaths for ages 25 to 64 were in the Rust Belt or Appalachia, Woolf said, and the 13 Appalachian states accounted for half of excess deaths.
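The excess-deaths measure described above can be sketched in a few lines. This is an illustrative calculation, not the study's method; the function name, the example state population and the baseline rate are all hypothetical.

```python
# Hypothetical illustration of an excess-deaths calculation:
# observed deaths minus the deaths expected under a baseline
# national mortality rate (per 100,000 people).
def excess_deaths(observed, population, baseline_rate_per_100k):
    expected = population * baseline_rate_per_100k / 100_000
    return observed - expected

# e.g., a state of 5 million with 2,100 observed working-age deaths
# against a baseline of 40 deaths per 100,000 would have:
print(excess_deaths(2_100, 5_000_000, 40))  # 100.0
```

Summing such per-state surpluses over a period is what yields figures like the Ohio Valley's one-third share.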

A chart book published in JAMA alongside the paper contains 120 graphs of life expectancy trends for the nine census divisions and each state in the United States.

Woolf found that the trend of increasing mortality goes back decades. U.S. life expectancy lost pace with other wealthy countries in the 1980s, stopped increasing in 2011 and has been falling since 2014.

"The notion that U.S. death rates are increasing for working-age adults is particularly disturbing because it is not happening like this in other countries," Woolf said. "This is a distinctly American phenomenon."

Credit: 
Virginia Commonwealth University

Research: Alcohol and tobacco policies can reduce cancer deaths

image: Smoking

Image: 
Pixabay

Policies aimed at cutting alcohol and tobacco consumption, including the introduction of random breath testing programs and bans on cigarette advertising, have resulted in a significant reduction in Australian cancer death rates, new research shows.

The La Trobe Centre for Alcohol Policy Research (CAPR) has led the first study into how public health policies on alcohol and tobacco implemented from the 1960s affected cancer deaths in Australia.

Researchers led by La Trobe epidemiologist Dr Jason (Heng) Jiang compared cancer mortality data available from the 1950s with historical alcohol and tobacco control policies and 100 years of consumption data.

Dr Jiang said the results are a century in the making.

"Our research provides new evidence that key public health policies on alcohol and tobacco introduced in Australia from the 1960s to 2013 are related to reductions in mortality rates for various cancers," Dr Jiang said.

"The changes in mortality rates are measured over 20-year periods and emphasise that the effects of alcohol and tobacco policies cannot be fully evaluated in the short-term.

"It's clear from our findings that the full effect of more recent policies, such as plain cigarette packaging and alcohol content labelling of beverages, may not be known for decades."

In the study, published today in BMC Medicine, the researchers found:

A series of key health policies on alcohol and tobacco have prevented more than 5 per cent (36,000) of total cancer deaths in Australia between the 1960s and 2013

The introduction of random breath testing programs in Australia in 1976 was associated with a reduction in population drinking and cancer death rates for both men and women. The policy prevented 1 per cent of male deaths (4,880) and 0.8 per cent of female deaths (1,680) overall between the 1980s and 2013

The release of UK and US public health reports in 1962 and 1964 on the health effects of tobacco was associated with a reduction in Australian tobacco consumption and cancer death rates - excluding liver cancer - preventing 3 per cent of male (13,400) and 4 per cent of female (11,600) cancer deaths in Australia in the last 30 years

The ban on cigarette advertising on Australian TV and radio in 1976 was associated with reductions in total cancer death rates of 1.9 per cent (4,520) for men and 2.2 per cent (2,430) for women, excluding liver cancer, between the 1980s and 2013

Liquor license liberalisation introduced in the 1960s was linked to an increase of 0.6 per cent (2,680) of total male cancer deaths in the last 30 years
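As a rough consistency check on the paired percentages and counts above (my arithmetic, not the paper's), a prevented count and its percentage share together imply the size of the total-deaths base they were computed against:

```python
# Back-of-envelope check: if `prevented` deaths are `share_percent`
# of some total, recover that implied total.
def implied_total(prevented, share_percent):
    return prevented / (share_percent / 100)

# 36,000 prevented deaths at roughly 5 per cent implies a base of
# roughly 720,000 total cancer deaths over the study period.
print(round(implied_total(36_000, 5)))  # 720000
```

Because the release says "more than 5 per cent," the implied base is an upper bound rather than an exact figure.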

Dr Jiang said the study should help inform future government campaigns or policies on alcohol and tobacco.

"It's important to evaluate what works, what doesn't, and where to invest future funding," Dr Jiang said.

"We hope these findings will also help Australians make more informed decisions on their alcohol and tobacco consumption," Dr Jiang said.

Background

The researchers used annual population-based time series data from 1911 to 2013 which reported per capita alcohol and tobacco consumption. They also accessed mortality data from the 1950s to 2013 for cancers of the head and neck (lip, oral cavity, pharynx, larynx and oesophagus), lung, breast, colorectum, anus and liver collected by the Australian Bureau of Statistics, Cancer Council Victoria, the WHO Cancer Mortality Database and the Australian Institute of Health and Welfare.

Credit: 
La Trobe University

Did human hunting activities alone drive great auks' extinction?

image: A mounted great auk skin, The Brussels Auk (RBINS 5355), from the collections at the Royal Belgian Institute of Natural Sciences (RBINS)

Image: 
Thierry Hubin, RBINS (CC BY 4.0)

New insight on the extinction history of a flightless seabird that vanished from the shores of the North Atlantic during the 19th century has been published today in eLife.

The findings suggest that intense hunting by humans could have caused the rapid extinction of the great auk, showing how even species that exist in large and widespread populations can be vulnerable to exploitation.

Great auks were large, flightless diving birds thought to have existed in the millions. They were distributed around the North Atlantic, with breeding colonies along the east coast of North America and especially on the islands off Newfoundland. They could also be found on islands off the coasts of Iceland and Scotland, as well as throughout Scandinavia.

But these birds had a long history of being hunted by humans. They were poached for their meat and eggs during prehistoric times, and this activity was further intensified in 1500 AD by European seamen visiting the fishing grounds of Newfoundland. Their feathers later became highly sought after in the 1700s, contributing further to their demise.

"Despite the well-documented history of exploitation since the 16th century, it is unclear whether hunting alone could have been responsible for the species' extinction, or whether the birds were already in decline due to natural environmental changes," says lead author Jessica Thomas, who completed the work as part of her PhD studies at Bangor University, UK, and the University of Copenhagen, Denmark, and is now a postdoctoral researcher at Swansea University, Wales, UK.

To investigate this further, Thomas and her collaborators carried out combined analyses of ancient genetic data, GPS-based ocean current data, and population viability analysis - a process that estimates the probability of a population going extinct within a given number of years. They sequenced complete mitochondrial genomes of 41 individuals from across the species' geographic range and used their analyses to reconstruct the birds' population structure and dynamics throughout the Holocene period, the last 11,700 years of Earth's history.
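A population viability analysis can be sketched as a stochastic simulation. The toy model below is not the study's model; the starting population, growth range and harvest levels are invented purely to show the shape of the technique.

```python
import random

# Minimal population-viability sketch: simulate a population with random
# annual growth and a fixed annual harvest, and estimate the probability
# of extinction within a time horizon across many replicate runs.
def extinction_probability(n0, years, harvest, runs=2000, seed=1):
    rng = random.Random(seed)
    extinct = 0
    for _ in range(runs):
        n = n0
        for _ in range(years):
            n = n * rng.uniform(0.95, 1.05) - harvest  # noisy growth, fixed take
            if n <= 0:
                extinct += 1
                break
    return extinct / runs

# A heavier harvest should raise the estimated extinction risk.
low = extinction_probability(10_000, 200, harvest=20)
high = extinction_probability(10_000, 200, harvest=120)
```

Varying the harvest parameter in such a model is how one asks whether hunting pressure alone could plausibly drive an otherwise stable population to extinction.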

"Taken together, our data don't suggest that great auks were at risk of extinction prior to intensive human hunting behaviour in the early 16th century," explains co-senior author Thomas Gilbert, Professor of Evolutionary Genomics at the University of Copenhagen. "But critically, this doesn't mean that we've provided solid evidence that humans alone were the cause of great auk extinction. What we have demonstrated is that human hunting pressure was likely to have caused extinction even if the birds weren't already under threat from environmental changes."

Gilbert adds that their conclusions are limited by a couple of factors. The mitochondrial genome represents only a single genetic marker and, due to limited sample preservation and availability, the study sample size of 41 is relatively small for population genetic analyses.

"Despite these limitations, the findings help reveal how industrial-scale commercial exploitation of natural resources has the potential to drive an abundant, wide-ranging and genetically diverse species to extinction within a short period of time," says collaborator Gary Carvalho, Professor in Zoology (Molecular Ecology) at Bangor University. This echoes the conclusions of a previous study* on the passenger pigeon, a bird that existed in significant numbers before going extinct in the early 20th century.

"Our work also emphasises the need to thoroughly monitor commercially harvested species, particularly in poorly researched environments such as our oceans," concludes co-senior author Michael Knapp, Senior Lecturer in Biological Anthropology and Rutherford Discovery Fellow at the University of Otago, New Zealand. "This will help lay the platform for sustainable ecosystems and ensure more effective conservation efforts."

Credit: 
eLife

Theorem explains why quantities such as heat and power can fluctuate in microscopic systems

The second law of thermodynamics states that the total entropy of an isolated system always tends to increase over time until it reaches a maximum. In other words, disorganization increases without outside intervention.

Even the best electrical equipment inevitably heats up, as part of the energy that should be converted into mechanical work is dissipated in the form of heat, and supposedly inanimate objects deteriorate as time progresses but do not spontaneously regenerate.

However, this "truth" taught by everyday experience does not necessarily apply to the microscopic world. Physicists have therefore reinterpreted the second law by giving it a statistical twist: Entropy indeed increases, but there is a nonzero probability that it may sometimes decrease.

For example, instead of heat flowing from a hot body to a cold one, as usual, it may flow from a cold body to a hot one in certain situations. Fluctuation theorems (FTs) quantified this probability with precision, and the issue has practical interest when we think about the operation of nanoscale machines.
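The release does not spell out the formula, but a standard fluctuation relation expresses this exponential suppression of entropy-decreasing events (this is one common form, not necessarily the exact statement of the cited papers):

```latex
\frac{P(\Delta S = +\sigma)}{P(\Delta S = -\sigma)} = e^{\sigma / k_B}
```

where $\Delta S$ is the entropy produced over an observation interval and $k_B$ is Boltzmann's constant. Negative-entropy events are thus possible but exponentially rare, which is why they only matter at microscopic scales.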

FTs were proposed for the first time in an article published in 1993 in Physical Review Letters. The article was authored by Australians Denis Evans and Gary Morriss and Dutch scientist Ezechiel Cohen. They tested one of these theorems using computer simulations.

An article published recently in the same journal shows that one consequence of FTs is thermodynamic uncertainty relations, which involve fluctuations in the values of thermodynamic quantities such as heat, work and power. The title of the new article is "Thermodynamic uncertainty relations from exchange fluctuation theorems."

The first author was André Timpanaro, a professor at the Federal University of the ABC (UFABC), São Paulo State, Brazil. The principal investigator for the study was Gabriel Landi, a professor at the University of São Paulo's Physics Institute (IF-USP). Giacomo Guarnieri and John Goold, affiliated with Trinity College Dublin's Physics Department (Ireland), also participated. The study was supported by São Paulo Research Foundation - FAPESP via two regular research grants awarded to Landi: "Entropy production in non-equilibrium quantum processes: from foundations to quantum technologies" and "Thermodynamics and information technologies with continuous variable quantum systems".

Uncertainty relations

"The physical origins of thermodynamic uncertainty relations were obscure until now. Our study shows they can be derived from FTs," Landi said.

"When we began studying thermodynamics, we had to deal with such quantities as heat, work and power, to which we always assigned fixed values. We never imagined they could fluctuate, but they do. In the microscopic world, these fluctuations are relevant. They may influence the operations of a nanoscale machine, for example. Thermodynamic uncertainty relations establish a floor for these fluctuations, linking them to other quantities such as system size."

Thermodynamic uncertainty relations were discovered in 2015 by a group of researchers led by Udo Seifert at Stuttgart University in Germany. André Cardoso Barato, a former student at IF-USP and currently a professor at the University of Houston (USA), participated in the discovery.
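The release does not reproduce the relation, but the 2015 result of the Seifert group takes roughly this form for a steady-state current $X$ (a sketch of the standard statement, not a quotation from the new paper):

```latex
\frac{\operatorname{Var}(X)}{\langle X \rangle^{2}} \;\geq\; \frac{2 k_B}{\Sigma}
```

where $\Sigma$ is the total entropy production. The relative fluctuation of heat, work or power thus has a floor set by dissipation: making a nanoscale machine's output less noisy necessarily costs more entropy.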

The mathematical structure of these relations resembles that of Heisenberg's uncertainty principle, but they have nothing to do with quantum physics. They are purely thermodynamic. "The nature of thermodynamic uncertainty relations has never been very clear," Landi said. "Our main contribution was to show that they derive from FTs. We believe that FTs describe the second law of thermodynamics more generally and that thermodynamic uncertainty relations are a consequence of FTs."

According to Landi, this generalization of the second law of thermodynamics "sees" thermodynamic quantities as entities that can fluctuate but not arbitrarily since they must obey certain symmetries. "There are several fluctuation theorems," he said. "We found a special class of FTs and focused on them as cases of mathematical symmetry. In this manner, we transformed our problem into a mathematical problem. Our main result was a theorem of probability theory."

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Ontario physicians do not need consent to withhold CPR that they feel will not benefit patients

In August, the Ontario Superior Court of Justice dismissed a malpractice lawsuit filed against two physicians who refused to provide cardiopulmonary resuscitation (CPR) to an 88-year-old man with multiple comorbidities and multiorgan failure. This ruling may have important implications for physicians in Ontario and elsewhere, according to a commentary published in this week's CMAJ (Canadian Medical Association Journal).

Notably, the court determined that under both Ontario's health care consent legislation and common law, physicians do not require consent to withhold CPR that they believe to be medically inappropriate, or to write an order that CPR should be withheld.

This ruling indicates that the decision not to offer or perform CPR can be at the discretion of the treating physician, based on their assessment that CPR would not benefit the patient. "Even if patients or substitute decision-makers had previously consented to a "full-code" order, this would not compel physicians to provide CPR if they determined, because of a change in circumstances or context, it would no longer be beneficial," writes Dr. James Downar, a critical care and palliative care specialist at The Ottawa Hospital and Bruyère Continuing Care, with coauthors.

However, communication with patients and families is paramount. "Physicians have a professional responsibility to communicate (or make reasonable efforts to communicate) their concerns about performing CPR in cases where they do not feel that it is medically appropriate and to be honest when they feel that CPR would be outside the standard of care," write the authors.

In light of the court's ruling, the College of Physicians and Surgeons of Ontario updated their end-of-life policy to no longer require consent to withhold CPR. However, the policy still requires physicians to obtain consent to write a "No CPR" order -- an apparent inconsistency that may be addressed when the policy is revised in 2020.

"Do physicians require consent to withhold CPR that they determine to be nonbeneficial?" is published November 25, 2019

Credit: 
Canadian Medical Association Journal

Screen time patterns of kids

What The Study Did: Screen time data for nearly 3,900 children were used to examine patterns of screen time use and the association with sociodemographic characteristics such as parental education levels and sex of the child.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

Author: Edwina H. Yeung, Ph.D., of the National Institutes of Health, Bethesda, Maryland, is the corresponding author.

(doi:10.1001/jamapediatrics.2019.4488)

Editor's Note: The article contains conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Global health viewpoint: Poor data prevent accurate measurement of UN goals

SEATTLE - The lack of data, particularly in low- and middle-income countries, combined with the absence of international standards for data management, is hindering efforts to measure progress toward meeting the United Nations Sustainable Development Goals (SDGs), according to a viewpoint published in the international medical journal The Lancet.

"We cannot make progress without being able to measure progress," said Dr. Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization (WHO). "Strong health data is essential for highlighting who is being left behind, and why. Together, WHO and IHME and other partners are working to strengthen country data systems for better decision-making, and better health."

The viewpoint is a collaboration by global health experts from several countries and organizations, including WHO and the Institute for Health Metrics and Evaluation (IHME) at the University of Washington's School of Medicine. Other authors include health experts from Brazil, Ethiopia, India, and Russia.

"Openly sharing data ultimately lays the foundation for the best health practices - those that are informed by the best possible science that draws from all available data and represents the best evidence base at a given point in time," according to the authors. "Good measurement itself is not political--rather, the actions that are based on good measurement are political, as societies must make their own decisions and agendas, informed by the available data, national values, and social priorities."

"Why is it that, in this day and age, we as a global community of nations do not have the courage and commitment to share data that would improve people's lives to the fullest?" said IHME's Director Dr. Christopher Murray. "We are 10 years away from the 2030 deadline for the SDGs, and the whole world is watching."

"The collaborative scientific model espoused by the GBD [Global Burden of Disease study] is one example of how to establish highly standardised approaches to data processing and synthesis while also fostering broad ownership," according to the viewpoint, which was published November 22.

IHME is the convening body of the GBD study, the world's largest and most comprehensive effort to quantify health loss. It draws on the work of nearly 4,400 collaborators from 147 countries and territories. The GBD 2019 study will be released in May of 2020.

The viewpoint also notes that in 2016, WHO led the creation of guidelines for substantiating data collected on diseases, injuries, and deaths, a two-year collaboration among experts from IHME and several of the world's most prestigious health institutions.

The Guidelines for Accurate and Transparent Health Estimates Reporting, or GATHER, promote best practices in reporting health estimates. Work to formulate the guidelines was funded by the Bill & Melinda Gates Foundation.

The guidelines are intended to inform and direct the reporting of estimates of health indicators like causes of death, the incidence and prevalence of diseases and injuries, and indicators of some health determinants, such as people's behaviors. They are designed to apply to studies that calculate health outcomes for multiple populations by combining several sources of information.

The authors note that GATHER represents "a powerful way to improve trust across all segments in society" and go on to state that WHO estimates of child mortality, maternal mortality, and tuberculosis are fully compliant with GATHER, as are all outcomes produced by the GBD collaboration.

However, "many efforts that generate global health estimates are not compliant with GATHER," and "national governments report data to WHO or other agencies but do not clarify if the data can be publicly shared," according to the viewpoint.

The authors intend to measure the quality and progress of countries' disclosure of health data and to update their analysis annually.

Among the authors' recommendations are:

Led by WHO, the international community should provide resources and technical assistance to countries.

Standard approaches to data management reflecting current best practices should be developed and promulgated.

GATHER should be strengthened, and the scope of the guidelines should cover all health-related SDGs and inputs, including population estimates.

Countries with analytical capacity should apply the standards for data processing and management to their own data in a transparent manner consistent with GATHER.

Credit: 
Institute for Health Metrics and Evaluation

Is cyberbullying common among adults?

image: Cyberpsychology, Behavior, and Social Networking explores the psychological and social issues surrounding the Internet and interactive technologies.

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, November 25, 2019--A new nationwide study examined the prevalence of negative behaviors that occur via digital communication, encompassing a broad definition of cyberbullying that includes both cyber-aggression and cyberbullying. The study, which assessed a national sample of New Zealanders 18-97 years of age, is published in Cyberpsychology, Behavior, and Social Networking, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers. The full-text article is available free on the Cyberpsychology, Behavior, and Social Networking website through December 25, 2019.

In the article entitled "How Common Is Cyberbullying Among Adults? Exploring Gender, Ethnic, and Age Differences in the Prevalence of Cyberbullying" the researchers divided the national sample into age cohorts and compared whether the participants had ever been a target of cyberbullying and whether they had had such an experience within the past month.

The study, coauthored by Meng-Jie Wang, MA, Kumar Yogeeswaran, PhD, and Nadia Andrews, MS, of the University of Canterbury (New Zealand); Diala Hawi, PhD, of the Doha Institute for Graduate Studies (Qatar); and Chris Sibley, PhD, of the University of Auckland (New Zealand), showed that almost 15% of the participants had been a target of cyberbullying at some point. Young adults (18-25 years) experienced the highest levels of cyberbullying (during both the lifetime and past-month timeframes), but substantial lifetime cyberbullying was reported by older age groups as well, including those 26-35 years (24%) and 46-55 years (13%), up to the 66+ age group (6.5%).

"This study, which reported cyberbullying's prevalence based on various subgroups, provides important information that will allow for the development of more targeted prevention and treatment programs," says Editor-in-Chief Brenda K. Wiederhold, PhD, MBA, BCB, BCN, Interactive Media Institute, San Diego, California and Virtual Reality Medical Institute, Brussels, Belgium.

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

Self-assembling system uses magnets to mimic specific binding in DNA

ITHACA, N.Y. - Sometimes it's best to let the magnets do all the work.

A team led by Cornell University physics professors Itai Cohen and Paul McEuen is using the binding power of magnets to design self-assembling systems that potentially can be created in nanoscale form.

Their paper, "Magnetic Handshake Materials as a Scale-Invariant Platform for Programmed Self-Assembly," was published Nov. 21 in Proceedings of the National Academy of Sciences.

To make small systems - such as miniature machines, gels and metamaterials - that essentially build themselves, the researchers took inspiration from DNA origami, in which atomic-scale DNA strands can be folded into two- and three-dimensional structures through a process called complementary base pairing, where specific nucleotides bind to one another: A to T and G to C.

Rather than relying on atomic bonds, the team was drawn to another form of attraction: magnetics. Here, the attraction and repulsion between multiple magnets can serve as a kind of intelligent connection, like a handshake. Magnetic interactions also make for strong, versatile bonds that aren't easily disrupted by thermal effects. With a large enough array of magnets in a variety of orientations, thousands of different configurations would be possible.

The researchers tested their design theory by making centimeter-sized acrylic panels, each containing four tiny magnets in a square pattern. This arrangement allowed them to make four unique magnetic interactions.

"By controlling the pattern of magnetic dipoles on each panel, we can essentially get lock-and-key binding," said Cohen. "And by gluing these panels onto a flexible mylar strip in designed sequences, we created our basic building blocks."
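The lock-and-key idea can be illustrated with a few lines of code. This is a hypothetical sketch, not the authors' design software: each panel's four magnets are encoded as +1 (north face up) or -1 (south face up), and two facing panels bind only when every magnet meets an opposite pole.

```python
# Hypothetical sketch of "lock-and-key" magnetic binding: a panel's
# dipole pattern is a tuple of +1/-1; two facing panels bind only if
# each position pairs a north with a south, i.e. one pattern is the
# elementwise negation of the other.
def binds(panel_a, panel_b):
    return all(a == -b for a, b in zip(panel_a, panel_b))

key = (+1, -1, -1, +1)
lock = (-1, +1, +1, -1)    # exact complement of `key`
other = (+1, +1, -1, -1)   # mismatched pattern

print(binds(key, lock))   # True
print(binds(key, other))  # False
```

With longer patterns, the number of distinct complementary pairs grows exponentially, which is what allows thousands of selective "handshakes" from simple components.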

To activate the self-assembly, the separate strands were scattered on a shaker table, with the table's vibrations preventing the magnets from forming bonds. As the shaking amplitude was decreased, the magnets attached in their designated order and the strands formed the target structures.

The ultimate goal, says Cohen, is to produce nanoscale versions of these systems, with self-assembling units that are only a hundred nanometers in diameter -- about a thousandth of the width of a human hair.

"It is quite a broad platform with many applications that are very exciting and interesting," said postdoctoral researcher Ran Niu, the paper's lead author. "You can design a lot of structures. We can build optically active actuators. We can build functional machines that we can control."

The project was recently awarded a $1.1 million Designing Materials to Revolutionize and Engineer Our Future grant from the National Science Foundation, which will enable the team to further explore nanoscale incarnations.

"The part that really interests me is the idea of just how structure and information interact to make shape-changing machines," said senior co-author McEuen, the John A. Newman Professor of Physical Science and director of the Kavli Institute at Cornell for Nanoscale Science, where Cohen is a senior investigator. "So RNA, for example, is this incredibly amazing molecule in our bodies that has a lot of information in it, but also has all sorts of interesting functions. And so this is sort of an analog of that system, where we can begin to understand how you mix information and structure to get complex behavior."

While nanoscale machines and self-assembling systems are not new, this project marks the first time the two concepts have been combined with magnetic encoding.

"The vision is that one day I will simply hand you a magnetic disk, you put it into your hard drive, it writes all the magnetic codes that you designed, then you take it and put it in some acid to release the building blocks," Cohen said. "All the little strands with the magnetic patterns that we encoded would come together and self-assemble into some sort of machine that we could control using external magnetic fields."

"This work opens up the field of design," Cohen added. "We're now giving people who are interested in the mathematics of designing materials from scratch a tool set that is incredibly powerful. There's really no end to the creativity and potential for interesting designs to come out of that."

The potential learning opportunities can be found in the research team itself. The paper's co-authors include Edward Esposito, a former university staff member who audited Cohen's Electricity and Magnetism honors class and became a technician in Cohen's lab. He is now pursuing a Ph.D. at the University of Chicago. And co-author Jakin Ng is an Ithaca High School student who started working part time in Cohen's lab through the Learning Web youth experiential education program. Ng's knowledge of origami patterns helped the researchers design some of their structures.

Credit: 
Cornell University

Association between parents' education level and youth outcomes

What The Study Did: This observational study explored ethnic and racial differences in the associations between parents' educational attainment and behavioral, academic and health outcomes among young people. The study included 10,619 adolescents ages 12 to 17 who participated in a nationally representative survey.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

Authors: Shervin Assari, M.D., M.P.H., of Charles R. Drew University of Medicine and Science in Los Angeles, is the corresponding author.

(doi:10.1001/jamanetworkopen.2019.16018)

Editor's Note: The article includes funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Buy less, be happier and build a healthy planet

US President Donald Trump is busy dismantling climate policies in the largest economy in the world. Europeans, while recognizing their climate obligations, still have among the highest carbon footprints on the planet.

So what's a concerned global citizen to do? Can people's actions -- outside of large-scale government interventions -- make a difference?

A new study suggests the answer is yes.

When researchers looked at members of grassroots movements designed to cut climate impacts and compared them to non-activist peers, they found that the activists not only cut their carbon footprints but reported greater lifestyle satisfaction.

"Typically, as people grow wealthier, they tend to upscale their material living standards, thus consuming more and emitting more climate-damaging gases," said Gibran Vita, who was the co-first author of the study. Vita took his PhD at the Norwegian University of Science and Technology's Industrial Ecology Programme and is now an assistant professor at the Open University in the Netherlands.

"But members of climate initiatives keep their spending low key even if their incomes increase. And consuming less didn't seem to take a toll on their joy."

Instead, Vita, his co-first author Diana Ivanova, and their colleagues found that initiative members were 11-13 per cent more likely to look positively on their own lives compared with non-members, while still cutting their total emissions by 16 per cent.

The results of their study were just published in the academic journal Energy Research & Social Science.

The researchers undertook their study under the umbrella of the Green Lifestyles Alternative Models and Upscaling Regional Sustainability (GLAMURS) project, funded under the EU's 7th Framework Programme.

Among the different sub-projects, researchers conducted a comprehensive survey to compare the lifestyles of people involved in different kinds of local sustainability-oriented grassroots initiatives to people from the same geographical regions who weren't members.

The GLAMURS researchers used the survey information to look at the carbon footprints and well-being of members and non-members of initiatives in four countries: three food and sustainable consumption cooperatives in Spain, two eco-villages in Romania, five food cooperatives in Italy and members of the "Transition Town Network" in Germany. Transition Towns are part of a global social movement that offers ideas and support to cities and towns who want to build more sustainable communities.

In all, the researchers had information from 141 people from the 12 different grassroots initiatives. They also had information from 1476 non-members spread across the four countries.

They then used a standardized questionnaire to gather data on environmental behaviours, consumption, socio-economic and demographic status, life satisfaction and living standards, and designed a carbon footprint model to estimate greenhouse gas emissions for individuals in the survey.
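The consumption-based footprint model described above can be sketched roughly as follows: spending in each consumption category is multiplied by an emission intensity to estimate a person's total greenhouse gas footprint, and members and non-members are then compared. The spending amounts and emission intensities here are hypothetical illustrations, not values from the GLAMURS survey:

```python
# Minimal sketch of a consumption-based carbon footprint model.
# Emission intensities (kg CO2e per euro spent) are hypothetical.
EMISSION_INTENSITY = {"food": 0.8, "clothing": 0.5, "housing": 0.6, "transport": 1.2}

def footprint(spending):
    """Total footprint (kg CO2e) from annual spending per category (euros)."""
    return sum(EMISSION_INTENSITY[cat] * amount for cat, amount in spending.items())

# Illustrative annual spending profiles (euros).
non_member = {"food": 3000, "clothing": 800, "housing": 4000, "transport": 2500}
member = {"food": 1900, "clothing": 110, "housing": 3900, "transport": 2450}

reduction = 1 - footprint(member) / footprint(non_member)
print(f"Relative reduction: {reduction:.0%}")  # → Relative reduction: 16%
```

Note how, in this toy profile, most of the member's savings come from food and clothing while housing and transport spending barely move -- mirroring the study's observation that diet and clothing choices are easier to change individually than infrastructure-bound housing and mobility.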

When all the calculations were completed, they found that people involved in sustainability initiatives had carbon footprints that were 16 per cent lower, on average, than non-members.

"We found that initiative members eat more plant-based food and used more secondhand clothing," said Ivanova, who also took her PhD at NTNU's Industrial Ecology Programme and is now a research fellow at the University of Leeds. "Members were able to cut their carbon footprints by 43 per cent for food-related emissions and 86 per cent for clothing-related emissions."

The researchers also found that even though initiative members tended to ride their bicycles more and live with lower home temperatures in the winter as compared to non-members, their carbon footprints for housing and transport were very similar to people who weren't involved in initiatives.

An important factor for this, Ivanova said, is that choices and emissions from transport and housing are more strongly tied to existing unsustainable infrastructure.

In other words, you may want to travel more by public transport, but if there's no public transport system in your area, you can't. And you may want to use less energy to keep your house warm, but if your house isn't adequately insulated, you can only turn the heat down so much before your pipes freeze.

"While decisions around diets and clothing may reflect individual preferences, mobility and housing choices are often limited by long-lived infrastructure, urban design, public transport options and commuting distances," she said.

Nevertheless, initiative members weren't freezing in the dark.

Wearing used clothing and more sweaters in the winter didn't seem to dampen the happiness of people who chose to try to cut their carbon footprints.

When the researchers looked at survey questions concerning how initiative members compared to non-members in terms of life satisfaction, they found that members were 11-13 per cent more likely to think positively about their own life.

Although the researchers themselves didn't try to figure out why this was the case, they cited other research that suggests that being less driven by materialistic aspirations and more internally motivated is satisfying for people.

"In general, research shows that altruistic behavior, including volunteering, is positively associated with pro-environmental behavior, higher well-being and lower emissions," Vita said.

"It's also something to think about with the upcoming holiday season," Ivanova said. "The holiday season often encourages overconsumption, materialism and a work-and-spend cycle with negative consequences for environmental and human well-being. Our study adds to the evidence on the high price of materialism -- because even though we may believe otherwise, beyond a certain basic consumption level, filling our lives with stuff is generally corrosive for well-being."

In general, research has shown that individual pro-environmental attitudes do not lead to lower carbon footprints, said Edgar Hertwich, senior author of the article and international chair and professor at NTNU's Industrial Ecology Programme. That's part of what makes this particular study encouraging, he said.

"Finding measurable reductions in sustainability initiatives is hence a pleasant surprise. It seems like by acting in concert with other like-minded people, we can more easily fulfil our sustainability aspirations," he said.

Nevertheless, even though initiative members were able to cut their carbon footprints, the researchers found that the cuts weren't enough to limit global warming to an average of 2°C.

And the difference wasn't small, either. The average carbon footprint of initiative members was fully five times higher than the per capita amount needed to reach targets.

"There are many ways for people and the planet to coexist harmoniously," Vita and Ivanova said. "Grassroots initiatives are an overlooked, yet essential part of the solution. If we stunt their growth by limiting their access to resources or if they are hindered by bureaucracy, we will lose out on an important mechanism for social change -- experimentation. But pinning all our hopes on people's willingness to change voluntarily is unrealistic. We also need societies, cities and communities that offer low-carbon choices as the default option."

Credit: 
Norwegian University of Science and Technology

How does the prion protein clump? DNA-modulated liquid droplets may explain

image: The globular domain of the prion protein (PrP90-231), depicted in blue, binds different DNA aptamers that have different conformations (hairpin, red; extended, black). Upon binding to these DNA sequences, PrP phase-separates and, depending on the conformation of the nucleic acid, a solid-like structure is formed (green), similar to those found in the brains of carriers of neurodegenerative diseases related to protein misfolding, such as amyotrophic lateral sclerosis (ALS) and Parkinson's disease.

Image: 
Carolina Matos and Anderson Pinheiro

Researchers at the Federal University of Rio de Janeiro (UFRJ), in Brazil, have found that the interaction between prion proteins and DNA may be behind the formation of amyloid protein aggregates and the emergence of neurodegenerative diseases such as Creutzfeldt-Jakob disease and other spongiform encephalopathies. The study appears today in the FASEB Journal.

Led by UFRJ Professors Yraima Cordeiro and Anderson Pinheiro, the scientists have found that the prion protein (PrP) undergoes liquid-liquid phase separation, and that this mechanism is finely controlled by certain DNA sequences. In a process similar to oil droplets dispersed in a water emulsion, the DNA leads PrP to form liquid droplets, which can turn into a gel-like state or even into a solid. They have also observed that these properties depend on the conformation of the DNA aptamer (a hairpin or extended conformation) and on the stoichiometry of the protein-nucleic acid interaction.

The process of turning liquid droplets into a solid state could explain the formation of abnormal and irreversible clumping of the prion protein, known as amyloid aggregates. These structures are toxic to the brain and are related to the development of transmissible spongiform encephalopathies, such as the Creutzfeldt-Jakob disease and the bovine spongiform encephalopathy (BSE), commonly known as mad cow disease. The link between amyloid aggregates and the diseases has been known for years, but how these structures form remains unclear. The study brings insights that might help answer this question.

The main findings highlight the prion protein's ability to bind nucleic acids in a fashion similar to that of well-described proteins that cause other neurodegenerative diseases. This opens the possibility of targeting the disease by selecting specific DNA sequences to control or prevent these liquid organelles from turning into non-functional gels and solids.

Credit: 
Instituto Nacional de Ciência e Tecnologia de Biologia Estrutural e Bioimagem (INBEB)

Mental health program helps teens recognise and support peers at risk

A novel mental health program improves teenagers' ability to recognise and support friends who might be at risk of suicide, according to new research.

University of Melbourne researchers have evaluated the impact of teen Mental Health First Aid (tMHFA) - a universal mental health literacy program for high school students in Years 10-12 - as an intervention to improve peer support towards adolescents at risk of suicide.

Researchers analysed survey data from more than 800 Year 10-12 students from four government secondary schools who had participated in the 3 x 75-minute classroom-based training program. This was compared with students who completed a matched control physical first aid course.

Researchers found students who participated in the tMHFA training were 35 times more likely to report adequate suicide first aid than those in the control group. This includes noticing something is wrong, asking if their friend is OK and suggesting they tell an adult.

Results suggested that students' knowledge of the general warning signs of mental health problems and confidence to offer support were more important than having specific knowledge of suicide -- calling into question suicide-specific education programs in schools.

tMHFA students reported higher levels of distress following the training than the students who received physical first aid; however, this distress typically lasted from a few moments to a few hours. The 12-month follow-up confirmed that the experience was fleeting and not associated with long-term harm.

University of Melbourne Senior Research Fellow Laura Hart said the findings demonstrate the importance of embedding suicide-prevention information within general mental health programs in schools and increasing peer support and discussion opportunities.

"Young people account for nearly one third of the 800,000 people who die by suicide each year, with suicide a leading cause of death among 15 to 29-year-olds," Dr Hart said.

"Three in four young people report that they would first turn to a friend for help if they were considering suicide. We need to equip teenagers with the skills and knowledge to recognise warning signs and get appropriate help for their friends."

Credit: 
University of Melbourne

Transplanting human nerve cells into a mouse brain reveals how they wire into brain circuits

The cerebral cortex, the outer layer of our brain often referred to as grey matter, is one of the most complex structures found in living organisms. It gives us the advanced cognitive abilities that distinguish us from other animals.

Neuroscientist Prof. Pierre Vanderhaeghen (VIB-KU Leuven, Université libre de Bruxelles) explains what makes the human brain so unique: "One remarkable feature of human neurons is their unusually long development. Neural circuits take years to reach full maturity in humans, but only a few weeks in mice or some months in monkeys."

"This long period of maturation allows much more time for the modulation of brain cells and circuits, which allows us to learn efficiently for an extended period up until late adolescence. It's a very important and unique feature for our species, but what lies at its origin remains a mystery."

Understanding the mechanisms underlying brain circuit formation is important, for example if we want to treat brain disease, adds Prof. Vincent Bonin of Neuro-Electronics Research Flanders (NERF, empowered by imec, KU Leuven and VIB): "Disturbances of circuit development have been linked to intellectual disability, for example, and to psychiatric diseases such as schizophrenia. However, it has remained impossible to study human neural circuits in action in great detail - until now!"

Human brain cells in a mouse brain:

In a joint research effort, the teams of Vanderhaeghen and Bonin developed a novel strategy to transplant human neurons as individual cells into the mouse brain and to follow their development over time.

Dr. Daniele Linaro: "We differentiated human embryonic stem cells into neurons and injected them into the brains of young mouse pups. This allows us to investigate human neurons in a living brain over many months. We can also apply a whole range of biological tools in these cells to study human neural circuit formation and human brain diseases."

The researchers discovered that the transplanted human cells follow the same developmental plan as they would in a human brain, with a months-long period of maturation typical for human neurons. This means that our nerve cells may follow an 'internal clock' of development that is surprisingly independent of the surrounding environment.

Moreover, the human cells were able to function in the mouse neural circuits. "After months of maturation, the human neurons began to process information, for example responding to visual inputs from the environment," says Dr. Ben Vermaercke, who conducted the experiments together with Linaro. "The human cells even showed different responses depending on the type of stimulus, indicating a surprisingly high degree of precision in the connections between the transplanted cells and the host mouse's brain circuits."

A milestone with a lot of potential:

This study constitutes the first demonstration of genuine circuit integration of neurons derived from human pluripotent stem cells. According to Bonin, "it's a technological milestone that opens up exciting possibilities to study how genetic information, environmental cues and behavior together shape how the brain wires itself up".

On the one hand, this model could be applied to study a whole range of diseases that are thought to impact the development of human neurons into neural circuits. The researchers plan to use neurons with genetic mutations linked to diseases such as intellectual disability to try and understand what goes wrong during maturation and circuit formation.

"Our findings also imply that nerve cells retain their 'juvenile' properties even in an adult (mouse) brain. This could have potentially important implications for neural repair," adds Vanderhaeghen. "The fact that transplanted young human neurons can integrate into adult circuits is promising news in terms of treatment development for neurodegeneration or stroke, where lost neurons could potentially be replaced by transplanting new neurons."

Credit: 
Université libre de Bruxelles

Two million-year-old ice cores provide first direct observations of an ancient climate

image: Princeton University-led researchers have extracted 2 million-year-old ice cores from Antarctica -- the oldest yet recovered -- that provide the first direct observations of prehistoric atmospheric conditions and temperatures. They used data from the ice cores to answer long-held questions about how our current colder, longer glacial cycle emerged.

Image: 
(Photos by Sean Mackay, Boston University)

Princeton University-led researchers have extracted 2 million-year-old ice cores from Antarctica that provide the first direct observations of Earth's climate at a time when the furred early ancestors of modern humans still roamed.

Gas bubbles trapped in the cores -- which are the oldest yet recovered -- contain pristine samples of carbon dioxide, methane and other gases that serve as "snapshots" of prehistoric atmospheric conditions and temperatures, the researchers recently reported in the journal Nature. The cores were collected in the remote Allan Hills of Antarctica.

First author Yuzhen Yan, who received his Ph.D. in geosciences from Princeton in 2019, explained that because ice flows and compresses over time, continuous ice cores only extend back to 800,000 years ago. The cores he and his co-authors retrieved are like scenes collected from a very long movie: they do not show the whole film, but convey the overall plot.

"You don't get a sense of how things changed continually, but you get an idea of big changes over time," said Yan, whose graduate research on ice cores supported by a 2016 Walbridge Fund Graduate Award for Environmental Research from the Princeton Environmental Institute (PEI) was a basis for the current work.

The ice cores reported in Nature are the latest to come out of the research group of senior author John Higgins, a Princeton associate professor of geosciences, PEI associated faculty and Yan's doctoral co-adviser. A previous team led by Higgins recovered a 1 million-year-old ice core from the Allan Hills, which was the oldest ice core ever recorded by scientists when it was reported in the journal Proceedings of the National Academy of Sciences in 2015. The cores were dated by measuring isotopes of the gas argon trapped in bubbles in the ice, a technique developed by co-author Michael Bender, Princeton professor of geosciences, emeritus, and PEI associated faculty.

"The ability to measure atmospheric composition directly is one of the biggest advantages of ice cores," Yan said. "That's why people spend years and years in the most isolated places getting them."

In the latest publication, the researchers use data from the ice cores to answer long-held questions about how our current glacial cycle emerged. Up until roughly 1.2 million years ago, Earth's ice ages consisted of thinner, smaller glaciers that came and went every 40,000 years on average.

Then, after what is known as the Mid-Pleistocene Transition, there emerged our current world characterized by colder and longer glacial cycles of 100,000 years. The two periods are known as the 40k and 100k world, respectively.

Some existing theories have stated that the 100k world -- which includes the last ice age that ended 11,700 years ago -- came about because of a long-term decline in atmospheric carbon dioxide, Yan said. But the researchers found that this was not the case -- average carbon dioxide was relatively steady through the 40k and 100k worlds. While the lowest temperatures and carbon dioxide levels of the 40k world were greater than the low points of the 100k world, the highest levels of both ages were similar.

"It could be the case that after the Mid-Pleistocene Transition, something occurred that lowered global glacial temperatures and atmospheric carbon dioxide values," Yan said. "This is the first time we have direct access to these greenhouse gas measurements. The ice core also opens up an array of new measurement possibilities that can give us insights into the 40k world when glacial cycles were very different from what we have today."

Although a long-term decline in average atmospheric carbon dioxide may not have directly led to the 100k world, the researchers nonetheless observed a correlation between carbon dioxide and global temperature, Bender said.

"To say that carbon dioxide is not a factor would be completely wrong," Bender said. "During the 40,000- and 100,000-year glacial-interglacial cycles, temperature and global ice volume track carbon dioxide rather closely. Carbon dioxide changes are required to get from the cooler glacial temperatures to the warmer interglacial temperatures."

The amount of carbon dioxide now in the atmosphere tops 400 parts-per-million (ppm), which is nearly 100 ppm higher than the highest levels of the 40k world, Yan said.

"We're seeing carbon dioxide levels not seen in 2 million years," Yan said. "While our data suggest that long-term carbon dioxide decline was not the decisive factor in the Mid-Pleistocene Transition, it does not mean that carbon dioxide does not have the capability to bring about global-scale changes.

"We're in a different situation now -- carbon dioxide is the major player in our current world," he said. "If we want to look into the geologic past for an analogy of what's going on in our world today, we need to go beyond 2 million years to find it."

Credit: 
Princeton University