
Brain or muscles, what do we lose first?

Someone dies somewhere in the world every 10 seconds owing to physical inactivity - 3.2 million people a year according to the World Health Organisation (WHO). From the age of 50, there is a gradual decline not just in physical activity but also in cognitive abilities since the two are correlated. But which of them influences the other? Does physical activity impact on the brain or is it the other way around? To answer this question, researchers from the University of Geneva (UNIGE), Switzerland, and the NCCR Lives Swiss National Centre of Competence in Research used a database of over 100,000 people aged 50-90 whose physical and cognitive abilities were measured every two years for 12 years. The findings, which are published in the journal Health Psychology, show that - contrary to what was previously thought - cognitive abilities ward off inactivity much more than physical activity prevents the decline in cognitive abilities. All of which means we need to prioritise exercising our brains.

The literature in this area has been looking at the impact of physical activity on cognitive skills for a number of years. "Correlations have been established between these two factors, particularly in terms of memory, but also regarding the growth and survival of new neurons," begins Boris Cheval, a researcher at UNIGE's Swiss Centre for Affective Sciences (CISA). "But we have never yet formally tested which comes first: does physical activity prevent a decline in cognitive skills or vice versa? That's what we wanted to verify."

What came first: the chicken or the egg?

Earlier studies based on the correlation between physical activity and cognitive skills postulated that the former prevent the decline of the latter. "But what if this research only told half the story? That's what recent studies suggest, since they demonstrate that our brain is involved when it comes to engaging in physical activity," continues the Geneva-based researcher.

The UNIGE researchers formally tested the two possible options using data from the SHARE survey (Survey of Health, Ageing and Retirement in Europe), a Europe-wide socio-economic database covering over 25 countries. "The cognitive abilities and level of physical activity of 105,206 adults aged 50 to 90 were tested every two years over a 12-year period," explains Matthieu Boisgontier, a researcher at the Lives Swiss National Centre of Competence in Research (NCCR Lives). Cognitive abilities were measured using a verbal fluency test (naming as many animals as possible in 60 seconds) and a memory test (memorising 10 words and reciting them afterwards). Physical activity was measured on a scale of 1 ("Never") to 4 ("More than once a week").

The Geneva researchers fed these data into three separate statistical models. In the first, they looked at whether physical activity predicted the change in cognitive skills over time; in the second, whether cognitive skills predicted the change in physical activity; and in the third, they tested the two possibilities bidirectionally. "Thanks to a statistical index, we found that the second model fitted the participants' data most precisely," says Cheval. The study therefore demonstrates that cognitive capacities mainly influence physical activity, and not vice versa as the literature to date had postulated. "Obviously, it's a virtuous cycle, since physical activity also influences our cognitive capacities. But, in light of these new findings, it does so to a lesser extent," points out Boisgontier.
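The logic of that comparison can be made concrete with a minimal cross-lagged sketch in Python. This is not the authors' actual analysis (they compared more elaborate models using a fit index); the file and column names are hypothetical, and the point is simply to contrast the two standardized cross-lagged paths:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("share_waves.csv")  # hypothetical file: one row per person per pair of waves
    cols = ["cognition_t0", "cognition_t1", "activity_t0", "activity_t1"]
    df[cols] = (df[cols] - df[cols].mean()) / df[cols].std()  # standardise to z-scores

    # Path A: earlier physical activity -> later cognition (controlling for earlier cognition)
    path_a = smf.ols("cognition_t1 ~ cognition_t0 + activity_t0", data=df).fit()
    # Path B: earlier cognition -> later physical activity (controlling for earlier activity)
    path_b = smf.ols("activity_t1 ~ activity_t0 + cognition_t0", data=df).fit()

    print("activity -> later cognition:", round(path_a.params["activity_t0"], 3))
    print("cognition -> later activity:", round(path_b.params["cognition_t0"], 3))
    # A larger coefficient for the second path would mirror the study's conclusion
    # that cognitive capacities mainly drive later physical activity.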

Slowing down an inevitable decline

From the age of 50, the decline in physical and cognitive abilities is inevitable. However, these results indicate that, contrary to what was once thought, acting first on our cognitive skills can slow the decline through this virtuous cycle. "This study backs up our theory that the brain has to make a real effort to get out of a sedentary lifestyle and that by working on cognitive capacities, physical activity will follow," says Cheval by way of conclusion.

Credit: 
Université de Genève

First high-sensitivity dark matter axion hunting results from South Korea

image: The lower horizontal axis is the axion mass, the upper horizontal axis is the microwave frequency corresponding to that mass, and the vertical axis is the coupling constant of axion-to-photon conversion. Both axes are on logarithmic scales. CAPP-8TB indicates the mass range reported in this study. CAST indicates experimental results from CERN (Switzerland) published in 2017. RBF is the result from a collaboration of the University of Rochester, Brookhaven National Laboratory (BNL), and Fermi National Accelerator Laboratory (US), published in 1989. UF is the result from the University of Florida (US) published in 1990. ADMX is the range scanned at the University of Washington (US) from 1998 to 2018. HAYSTAC is the result scanned at Yale University (US) from 2017 to 2018. ORGAN and QUAX-aγ are the results from the University of Western Australia (Australia) and INFN (Italy) in 2017 and 2019, respectively. KSVZ and DFSZ are two models that can solve the strong CP problem.

Image: 
IBS

Researchers at the Center for Axion and Precision Physics Research (CAPP), within the Institute for Basic Science (IBS, South Korea), have reported the first results of their search for axions: elusive ultra-lightweight particles that could constitute the mysterious dark matter. IBS-CAPP is located at the Korea Advanced Institute of Science and Technology (KAIST). Published in Physical Review Letters, the analysis combines three months of data taken with a new axion-hunting apparatus developed over the last two years.

Proving the existence of axions could solve two of the biggest mysteries in modern physics at once: why galaxies orbiting within galaxy clusters move far faster than expected, and why two fundamental forces of nature follow different symmetry rules. The first conundrum was raised back in the 1930s and confirmed in the 1970s, when astronomers noticed that the observed mass of galaxies such as the Milky Way could not explain the strong gravitational pull experienced by their stars. The second enigma, known as the strong CP problem, was dubbed by Forbes magazine as "the most underrated puzzle in all of physics" in 2019.

Symmetry is an important element of particle physics, and CP refers to charge-parity symmetry: the laws of physics stay the same if particles are interchanged with their corresponding antiparticles (C) in their mirror images (P). In the case of the strong force, which is responsible for holding nuclei together, CP violation is allowed theoretically but has never been detected, even in the most sensitive experiments. By contrast, CP symmetry is violated both theoretically and experimentally in the weak force, which underlies some types of radioactive decay. In 1977, theoretical physicists Roberto Peccei and Helen Quinn proposed the Peccei-Quinn symmetry as a solution to this problem, and two Nobel laureates in physics, Frank Wilczek and Steven Weinberg, showed that the Peccei-Quinn symmetry results in a new particle: the axion. The particle was named after an American detergent, because it should clean up the mess of the strong interactions.

Currently, it is estimated that 85% of the matter in the Universe is dark, that is, imperceptible. Dark matter provides enough mass to keep our Sun from leaving the Milky Way, but it is not visible under ordinary conditions. In other words, axions are expected to be present in large amounts in the Universe, yet to barely interact with the particles that are familiar to us.

According to the predictions and Fermi's golden rule, an axion spontaneously transforms into two detectable particles (photons) at an extremely low rate, and this conversion can be faster in an environment where one of the photons is already present. In experiments, that role is played by a strong magnetic field, which provides virtual photons of all energy levels, speeding up the process tremendously.

To facilitate the axion-to-photon conversion, IBS researchers used their custom-made CAPP-8TB haloscope. This instrument has a cylindrical superconducting magnet with a clear bore of 165 mm and a central magnetic field of 8 Tesla. The signal of the axion-spawned photons is amplified in a resonant cavity: if the right frequency is chosen, the photons resonate in the cavity and mark their presence with a tiny flash. The team would need to detect about 100 microwave photons per second to make a confident statement.

"This experiment is not a 100-meter sprint, but the first goal in a marathon run. We learned by doing and we tested new concepts to be used at higher-level systems in the future," explains Yannis K. Semertzidis, the director of the Center and also a professor of KAIST.

In this experimental run, the team searched for axions with a mass between 6.62 and 6.82 μeV, corresponding to frequencies between 1.6 and 1.65 GHz, a range motivated by quantum chromodynamics. The researchers showed experimentally, at a 90% confidence level, that there is no axion dark matter or axion-like particle within that range -- the most sensitive result in this mass range to date. In this way, CAPP-8TB takes its place among other axion-hunting experiments that are probing various possible masses. Moreover, this is the only experiment at that mass range that comes close to the sensitivity required by the two most famous theoretical models of the axion: the KSVZ model and the DFSZ model, whose letters abbreviate the names of the scientists who proposed them.
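The quoted mass-to-frequency correspondence follows from the standard relation used by haloscope experiments, f = mc^2/h: an axion of rest energy mc^2 converts into a photon of that energy. A quick back-of-the-envelope check in Python, using only textbook constants rather than anything from the paper:

    PLANCK_H = 6.62607015e-34   # Planck constant, J*s
    EV_TO_J = 1.602176634e-19   # joules per electron-volt

    def axion_frequency_ghz(mass_microev):
        """Photon frequency (GHz) corresponding to an axion mass given in micro-eV."""
        energy_joules = mass_microev * 1e-6 * EV_TO_J
        return energy_joules / PLANCK_H / 1e9

    for mass in (6.62, 6.82):
        print(f"{mass} micro-eV -> {axion_frequency_ghz(mass):.2f} GHz")
    # Prints roughly 1.60 GHz and 1.65 GHz, matching the range scanned by CAPP-8TB.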

"We proved that we can reach much better sensitivity than all other experiments in that frequency range and that we are ready to scale up our research with larger systems. We aim to be at the top of our field for the next ten years! That's why it's so exciting!" enthuses research engineering fellow Soohyung Lee, the first author of the study.

The mass range is determined by the diameter of the cavity: a bigger diameter probes a lower mass region, and vice versa. Since CAPP-8TB's resonant cavity sits inside the clear bore of the superconducting magnet, IBS researchers designed a tunable cylindrical copper cavity as a resonator with the maximum available volume.

Beyond the cavity, the CAPP-8TB haloscope boasts a number of cutting-edge technologies, including a cryogenic dilution refrigerator that keeps the system only about 50 mK above absolute zero (roughly -273.1 degrees Celsius), a superconducting magnet with a strong magnetic field, low-noise microwave electronics and state-of-the-art amplifiers.

The Center plans to look for axions by tuning the haloscope across frequencies of 1-10 GHz, and later 10-25 GHz, using a more powerful, larger-volume magnet and implementing all of its inventions. The search for axions continues nonstop.

Credit: 
Institute for Basic Science

Study challenges common view of oxygen scarcity on Earth 2 billion years ago

image: Two-billion-year-old shungite, a type of sedimentary rock exposed in north-western Russia, records evidence for balmy, oxygen-rich conditions on the early Earth. Photo credits K. Paiste.

Image: 
K. Paiste.

Shungite, a unique carbon-rich sedimentary rock from Russia that was deposited 2 billion years ago, holds clues about oxygen concentrations on Earth's surface at that time. Led by Professor Kurt Konhauser at the University of Alberta and Professor Kalle Kirsimäe at the University of Tartu, an international research team involving colleagues from France, Norway, Russia, and the USA has found strikingly high molybdenum, uranium, and rhenium concentrations, as well as elevated uranium isotope ratios, in drill cores that intersect the shungite rocks. These trace metals are thought to become common in Earth's oceans and sediments only when there is abundant oxygen around. The researchers found that such trace metal concentrations are unrivaled in early Earth's history, suggesting elevated levels of oxygen at the time the shungite was deposited.

"What is puzzling is that the widely-accepted models of Earth's carbon and oxygen cycles predict that shungite should have been deposited at a time of rapid decrease in oxygen levels," says Mänd, a PhD candidate at the University of Alberta and lead author of the study.

Most scientists agree that atmospheric oxygen levels significantly increased about 2.4 billion years ago -- an episode known as the Great Oxidation Event (GOE) -- and reached about half of modern levels by about 2.1 billion years ago. The GOE was also accompanied by a shift in carbon isotope ratios in sedimentary rocks. To scientists, this fits the story: the anomalous carbon isotope ratios reflect the burial of massive amounts of plankton as organic matter in ocean sediments, which in turn led to the generation of excess oxygen. But the prevailing understanding is that immediately after this period of high concentrations, oxygen levels decreased again and remained low for almost a billion years, during Earth's so-called 'middle age'.

"Fresh drill cores that we obtained from the Lake Onega area with the support of University of Tartu and Tallinn University of Technology provide some of the best rock archives to decipher the environmental conditions immediately after the GOE" says Kirsimäe, coordinator of geological field work.

"What we found contradicts the prevailing view--essentially we have clear evidence that atmospheric oxygen levels rose even further after the carbon isotope anomaly ended," says Mänd. "This will force the Earth science community to rethink what drove the carbon and oxygen cycles on the early Earth."

These new findings are also crucial for understanding the evolution of complex life. Earth's 'middle age' represents the backdrop for the appearance of eukaryotes. Eukaryotes, the precursors to all complex life, including animals such as ourselves, generally require high oxygen levels in their environment to thrive. This work now strengthens the suggestion that suitable conditions for the evolution of complex life on early Earth existed for a much longer time than previously thought. As such, the findings indirectly support earlier studies where Prof. Konhauser was involved that revealed large, potentially eukaryotic trace fossils as old as 2.1 billion years.

Despite these new advances, the delay between the initial rise of oxygen and the appearance and radiation of eukaryotes remains an area of active research -- one that University of Tartu and University of Alberta researchers are well positioned to help address.

Credit: 
Estonian Research Council

Engineers model mutations causing drug resistance

image: (Left) A schematic of drug resistance across a population of patients. Patients with initially sensitive disease (blue) are treated with a drug, and resistance arises through different genetic mutations (red, yellow, and green). (Right) Tallying up the number of patients associated with each resistance allele shows that some alleles are more common in the clinical population than others.

Image: 
Scott Leighow, Penn State

Whether it is a drug-resistant strain of bacteria or cancer cells that no longer respond to the drugs intended to kill them, diverse mutations make cells resistant to these chemicals, and "second generation" approaches are needed. Now, a team of Penn State engineers may have a way to predict which mutations will occur in people, creating an easier path to developing effective pharmaceuticals.

"Structure-based drug design works very well," said Justin Pritchard, assistant professor of biomedical engineering and holder of the Dorothy Foehr Huck and J. Lloyd Huck Early Career Entrepreneurial Professorship. "It is an amazing ecosystem of technology, but you still have to point it at a set of resistance mutations."

Standard practice to develop drugs is to model the structure of chemicals and their cellular targets to kill specific pathogens or cancer cells. Once mutations begin to change the cells, treatment requires new drugs. However, a variety of mutations may occur and drug developers need to target the appropriate mutation to kill the pathogen or the cancer cells.

The researchers wanted to discover what drives particular mutations to grow out in the real world, so that they could choose the most effective mutations to target. Reporting today (Mar. 24) in Cell Reports, they found that the most drug-resistant mutation was not necessarily the mutation that dominated: "survival of the fittest" did not always hold, and targeting should aim at the most probable mutation rather than the most resistant, at least for some cancers.

"We need to not just understand the biophysics," said Pritchard. "We also need to understand the evolutionary dynamics."

Drug resistance is a problem when treating diseases caused by bacteria, viruses and cancers, but the researchers chose to investigate mutations in cancers because understanding mutations in cancer cells is simpler. Mutations in bacteria and viruses have two components -- what happens within the cells and what happens when the bacteria or viruses spread from host to host. Because cancer is not contagious in humans, working with cancer cells removes one potential source of mutations.

"If we take out the community aspect of transmission, we can study just the de novo, or 'from nothing,' generation of mutations," said Pritchard.

The researchers looked at existing data for leukemia and three other types of cancer; the leukemia database was the largest and most complete. They used algorithms similar to those used in chemical physics to simulate how chemical reactions take place -- in this case, to model how evolution works.

"We are trying to create a generalized approach to getting the numbers that we use in the models," said Pritchard. "To do this we did not 'fit' the model, but used data obtained from experiments and scaling."

Creating a way to obtain data for generalized cases rather than individuals would increase the possibility of using this method for a variety of pathogens.

"We ran the model and it matched clinical data to a degree much better than I ever expected," said Pritchard. "We did this from first principles (basic assumptions)."

As cancer cells divide, errors made in copying DNA result in mutations. One letter of DNA might be mistakenly replaced with another, but these mistakes are not completely random: some letters are more easily substituted for others, so those mutations happen more often. This creates a mutation bias -- some substitutions are more likely than others. Thus the likelihood of a copying mistake, rather than the reduction in drug sensitivity it confers, can predict the resistance mutations that real patients develop.
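A toy simulation makes the point: if one resistance mutation arises, say, 100 times more often per cell division than another, it will be the first to appear in most patients and will dominate the clinical tally even when the rarer mutation confers stronger resistance. The rates below are hypothetical and this is not the Penn State team's model, just a sketch of the principle:

    import random

    MUTATION_RATES = {"common_but_weaker": 1e-7, "rare_but_stronger": 1e-9}  # per division, hypothetical

    def first_resistance_mutation():
        """Competing-risks draw: which resistance mutation arises first in one tumor."""
        total = sum(MUTATION_RATES.values())
        r = random.random() * total
        for allele, rate in MUTATION_RATES.items():
            if r < rate:
                return allele
            r -= rate

    random.seed(1)
    tally = {allele: 0 for allele in MUTATION_RATES}
    for _ in range(10_000):  # 10,000 simulated patients relapsing on the drug
        tally[first_resistance_mutation()] += 1
    print(tally)  # ~99% of patients end up carrying the common-but-weaker allele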

"We shouldn't always focus on the strongest resistance mutation because there are other evolutionary forces that dictate what happens in the real world," said Pritchard. "Sometimes drug resistance relies on biased random events."

The researchers found that biased random mutations played a big part in the evolution of resistance in leukemia. They found similar results with breast, prostate and stomach cancers, although the effect size was not as large.

"The data are not quite as strong in the prostate and breast cancer setting," said Pritchard. "In non-small cell lung cancer we didn't see this effect at all."

According to the researchers, there are many settings in which evolutionary bias creates an abundance of mutations that are not the most resistant, but it is a spectrum, with leukemias on one end; breast, prostate and stomach cancers in the middle; and non-small cell lung cancer on the other end.

"Our analysis establishes a principle for rational drug design: When evolution favors the most probable mutant, so should drug design," the researchers said.

Credit: 
Penn State

Manipulating ligands

image: Taking advantage of the cleanliness of laser-produced metal nanoparticle precursors, surface-clean noble metal aerogels are created. The intrinsic electrocatalytic properties of NMAs are elucidated, and ligand-modulated electrocatalytic properties are demonstrated for the electro-oxidation of ethanol. The work thereby offers a new dimension for devising high-performance electrocatalysts from NMAs and other material systems.

Image: 
(c) Ran Du

Noble metal aerogels (NMAs) are an emerging class of porous materials that combine nano-sized, highly active noble metals with porous structures, displaying unprecedented performance in diverse electrocatalytic processes. However, various impurities, particularly organic ligands, are often involved in the synthesis and remain in the corresponding products, hindering investigation of the intrinsic electrocatalytic properties of NMAs. The presence of organic ligands is also generally regarded as detrimental to the catalytic process because they can block the active sites. Yet this assumption has never been verified in NMA systems, because there has been no way to attach ligands to clean NMAs in a controlled manner.

Ran Du from China is an Alexander von Humboldt research fellow who has worked as a postdoc in the physical chemistry group of Professor Alexander Eychmüller at TU Dresden since 2017. In collaboration with Prof. Stephan Barcikowski from the University of Duisburg-Essen, he and his colleagues recently created surface-clean noble metal aerogels from laser-produced nanoparticles, revealing a new dimension for enhancing electrocatalytic performance in the electro-oxidation of ethanol (the anode reaction of direct ethanol fuel cells) by modulating ligand chemistry.

Ran Du and his team prepared various inorganic-salt-stabilized metal nanoparticles by laser ablation, which served as organic-ligand-free precursors. In this way, they fabricated various impurity-free NMAs (gold (Au), palladium (Pd), and gold-palladium (Au-Pd) aerogels), allowing the intrinsic electrocatalytic properties of NMAs to be unveiled. In addition, these clean gels were used as a platform onto which specific ligands were deliberately grafted, by which the ligand-directed modulation of electrocatalytic properties was unambiguously demonstrated. The underlying mechanisms were attributed to electron density modulations imposed by the different ligands, with the electrocatalytic activity of the ethanol oxidation reaction (EOR) positively correlated with the oxidation state of the metals. In this regard, the polyvinylpyrrolidone (PVP)-modified Au-Pd bimetallic aerogel delivered a current density 5.3 times higher than commercial Pd/C (palladium on carbon) and 1.7 times higher than the pristine Au-Pd aerogel.

"With this work, we not only provide a strategy to fabricate impurity-free NMAs for probing their intrinsic properties, but also offer a new dimension for devising high-performance electrocatalysts by revisiting the effects of the ligands", assumes Ran Du.

Credit: 
Technische Universität Dresden

Crumpled graphene makes ultra-sensitive cancer DNA detector

image: Illinois researchers found that crumpling graphene in DNA sensors made it tens of thousands of times more sensitive, making it a feasible platform for liquid biopsy.

Image: 
Image courtesy of Mohammad Heiranian

CHAMPAIGN, Ill. -- Graphene-based biosensors could usher in an era of liquid biopsy, detecting DNA cancer markers circulating in a patient's blood or serum. But current designs need a lot of DNA. In a new study, researchers at the University of Illinois at Urbana-Champaign found that crumpling graphene makes it more than ten thousand times more sensitive to DNA by creating electrical "hot spots."

Crumpled graphene could be used in a wide array of biosensing applications for rapid diagnosis, the researchers said. They published their results in the journal Nature Communications.

"This sensor can detect ultra-low concentrations of molecules that are markers of disease, which is important for early diagnosis," said study leader Rashid Bashir, a professor of bioengineering and the dean of the Grainger College of Engineering at Illinois. "It's very sensitive, it's low-cost, it's easy to use, and it's using graphene in a new way."

While the idea of looking for telltale cancer sequences in nucleic acids, such as DNA or its cousin RNA, isn't new, this is the first electronic sensor to detect very small amounts, such as might be found in a patient's serum, without additional processing.

"When you have cancer, certain sequences are overexpressed. But rather than sequencing someone's DNA, which takes a lot of time and money, we can detect those specific segments that are cancer biomarkers in DNA and RNA that are secreted from the tumors into the blood," said Michael Hwang, the first author of the study and a postdoctoral researcher in the Holonyak Micro and Nanotechnology Lab at Illinois.

Graphene - a flat sheet of carbon one atom thick - is a popular, low-cost material for electronic sensors. However, nucleic-acid sensors developed so far require a process called amplification - isolating a DNA or RNA fragment and copying it many times in a test tube. This process is lengthy and can introduce errors. So Bashir's group set out to increase graphene's sensing power to the point of being able to test a sample without first amplifying the DNA.

Many other approaches to boosting graphene's electronic properties have involved carefully crafted nanoscale structures. Rather than fabricate special structures, the Illinois group simply stretched out a thin sheet of plastic, laid the graphene on top of it, then released the tension in the plastic, causing the graphene to scrunch up and form a crumpled surface.

They tested the crumpled graphene's ability to sense DNA and a cancer-related microRNA in both a buffer solution and in undiluted human serum, and saw the performance improve tens of thousands of times over flat graphene.

"This is the highest sensitivity ever reported for electrical detection of a biomolecule. Before, we would need tens of thousands of molecules in a sample to detect it. With this device, we could detect a signal with only a few molecules," Hwang said. "I expected to see some improvement in sensitivity, but not like this."

To determine the reason for this boost in sensing power, mechanical science and engineering professor Narayana Aluru and his research group used detailed computer simulations to study the crumpled graphene's electrical properties and how DNA physically interacted with the sensor's surface.

They found that the cavities served as electrical hotspots, acting as a trap to attract and hold the DNA and RNA molecules.

"When you crumple graphene and create these concave regions, the DNA molecule fits into the curves and cavities on the surface, so more of the molecule interacts with the graphene and we can detect it," said graduate student Mohammad Heiranian, a co-first author of the study. "But when you have a flat surface, other ions in the solution like the surface more than the DNA, so the DNA does not interact much with the graphene and we cannot detect it."

In addition, crumpling the graphene created a strain in the material that changed its electrical properties, inducing a bandgap - an energy barrier that electrons must overcome to flow through the material - that made it more sensitive to the electrical charges on the DNA and RNA molecules.

"This bandgap potential shows that crumpled graphene could be used for other applications as well, such as nano circuits, diodes or flexible electronics," said Amir Taqieddin, a graduate student and coauthor of the paper.

Even though DNA was used in the first demonstration of crumpled graphene's sensitivity for biological molecules, the new sensor could be tuned to detect a wide variety of target biomarkers. Bashir's group is testing crumpled graphene in sensors for proteins and small molecules as well.

"Eventually the goal would be to build cartridges for a handheld device that would detect target molecules in a few drops of blood, for example, in the way that blood sugar is monitored," Bashir said. "The vision is to have measurements quickly and in a portable format."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Counteracting a legacy of extinctions

image: Wild donkeys in Kachana, Kimberley, Western Australia. The giant wombat may no longer roam the wilds of Australia, but wild donkeys certainly do, and new research reveals that the two are ecologically similar.

Image: 
Chris Henggeler

The giant wombat may no longer roam the wilds of Australia, but wild donkeys certainly do, and new research reveals that the two are ecologically similar. Human hunting caused the extinction of many megafauna (large herbivorous mammals) over the last 100,000 years, but recently humans have introduced numerous herbivore species, rewilding many parts of the world, particularly Australia.

In most places, these introduced megafauna are viewed as invasive species.

Now a new study, comparing the traits of introduced herbivores to those of the past, reveals that introductions have restored many important ecological traits that have been lost for thousands of years.

The study, published in PNAS, involved an international team of ecologists from the University of Technology Sydney (UTS) in Australia; the University of Massachusetts Amherst, the University of Kansas, the University of California, Davis, and the Natural History Museum of Los Angeles County in the USA; the University of Sussex in the UK; the Universidad de Alcalá in Spain; and Aarhus University in Denmark.

The authors note that our modern 'natural' world is very different from how it was for the preceding 30-45 million years, during which many megafauna thrived, such as massive wombat relatives called diprotodons, turtle-like glyptodons, hoofed kangaroos reminiscent of 'open-plains' horses, and two-story-tall sloths. These big animals emerged in the fossil record not long after the demise of the dinosaurs, but most had been driven extinct by 10,000 years ago, most likely by the hunting pressure of our human ancestors in the Late Pleistocene.

The researchers found that by introducing species across the world, humans restored lost ecological traits to many ecosystems; making the world more similar to the pre-extinction Late Pleistocene and counteracting a legacy of extinctions.

Lead author and PhD student at the UTS Centre for Compassionate Conservation (CfCC), Erick Lundgren says the possibility that introduced herbivores might restore lost ecological functions had been suggested but not "rigorously evaluated".

The authors compared key ecological traits of herbivore species, such as body size, diet, and habitat, from before the Late Pleistocene extinctions to the present day.

"This allowed us to compare species that are not necessarily closely related to each other, but are similar in terms of how they affect ecosystems," Lundgren said.

"By doing this, we could quantify the extent to which introduced species make the world more similar or dissimilar to the pre-extinction past. Amazingly they make the world more similar," Lundgren continued.

This is largely because 64% of introduced herbivores are more similar to extinct species than to local surviving native species. These introduced 'surrogates' for extinct species include taxonomically similar species in some places, like mustangs (wild horses) in North America, which have replaced extinct pre-domestic horses. In places like Australia, however, introduced herbivores are taxonomically different but similar in terms of their traits.

Senior author Dr Arian Wallach from UTS CfCC said: "We usually think of nature as defined by the short period of time for which we have recorded history but this is already long after strong and pervasive human influences.

"Broadening our perspective to include the more evolutionary relevant past lets us ask more nuanced questions about introduced species and how they affect the world.

"We need a complete rethink of non-native species, to end eradication programs, and to start celebrating and protecting these incredible wildlife," Dr Wallach said.

By broadening our perspective beyond the past few hundred years - to a time before widespread human caused extinctions - we can recognise how introduced herbivores make the world more similar to the pre-extinction past, bringing with them broader biodiversity benefits, the authors conclude.

Credit: 
University of Technology Sydney

Uncertainty about facts can be reported without damaging public trust in news -- study

The numbers that drive headlines - those on Covid-19 infections, for example - contain significant levels of uncertainty: assumptions, limitations, extrapolations, and so on.

Experts and journalists have long assumed that revealing the "noise" inherent in data confuses audiences and undermines trust, say University of Cambridge researchers, despite this being little studied.

Now, new research has found that uncertainty around key facts and figures can be communicated in a way that maintains public trust in information and its source, even on contentious issues such as immigration and climate change.

Researchers say they hope the work, funded by the Nuffield Foundation, will encourage scientists and media to be bolder in reporting statistical uncertainties.

"Estimated numbers with major uncertainties get reported as absolutes," said Dr Anne Marthe van der Bles, who led the new study while at Cambridge's Winton Centre for Risk and Evidence Communication.

"This can affect how the public views risk and human expertise, and it may produce negative sentiment if people end up feeling misled," she said.

Co-author Sander van der Linden, director of the Cambridge Social Decision-Making Lab, said: "Increasing accuracy when reporting a number by including an indication of its uncertainty provides the public with better information. In an era of fake news that might help foster trust."

The team of psychologists and mathematicians set out to see if they could get people much closer to the statistical "truth" in a news-style online report without denting perceived trustworthiness.

They conducted five experiments involving a total of 5,780 participants, including a unique field experiment hosted by BBC News online, which displayed the uncertainty around a headline figure in different ways.

The researchers got the best results when a figure was flagged as an estimate, and accompanied by the numerical range from which it had been derived, for example: "...the unemployment rate rose to an estimated 3.9% (between 3.7%-4.1%)".

With this format, readers' sense and understanding that the data held uncertainty increased markedly, with little to no negative effect on levels of trust in the data itself, in those who provided it (e.g. civil servants) or in those reporting it (e.g. journalists).
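As a small illustration of that best-performing format (illustrative only; the exact wording below is ours, not prescribed by the paper), a report generator needs only the point estimate and the interval it came from:

    def report_with_uncertainty(quantity, estimate, low, high, unit="%"):
        """Render a figure as an explicit estimate together with its range."""
        return (f"{quantity} rose to an estimated {estimate}{unit} "
                f"(between {low}{unit} and {high}{unit})")

    print(report_with_uncertainty("the unemployment rate", 3.9, 3.7, 4.1))
    # -> the unemployment rate rose to an estimated 3.9% (between 3.7% and 4.1%)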

"We hope these results help to reassure all communicators of facts and science that they can be more open and transparent about the limits of human knowledge," said co-author Prof Sir David Spiegelhalter, Chair of the Winton Centre at the University of Cambridge.

Catherine Dennison, Welfare Programme Head at the Nuffield Foundation, said: "We are committed to building trust in evidence at a time when it is frequently called into question. This study provides helpful guidance on ensuring informative statistics are credibly communicated to the public."

The findings are published today in the journal Proceedings of the National Academy of Sciences.

Prior views on contested topics within news reports, such as migration, were included in the analysis. Although attitudes towards the issue mattered for how facts were viewed, when openness about data uncertainty was added it did not substantially reduce trust in either the numbers or the source.

The team worked with the BBC to conduct a field experiment in October 2019, when figures were released about the UK labour market.

In the BBC's online story, figures were either presented as usual, a 'control', or with some uncertainty - a verbal caveat or a numerical range - and a link to a brief survey. Findings from this "real world" experiment matched those from the study's other "lab conditions" experiments.

"We recommend that journalists and those producing data give people the fuller picture," said co-author Dr Alexandra Freeman, Executive Director of the Winton Centre.

"If a number is an estimate, let them know how precise that estimate is by putting a minimum and maximum in brackets afterwards."

Sander van der Linden added: "Ultimately we'd like to see the cultivation of psychological comfort around the fact that knowledge and data always contain uncertainty."

"Disinformation often appears definitive, and fake news plays on a sense of certainty," he said.

"One way to help people navigate today's post-truth news environment is by being honest about what we don't know, such as the exact number of confirmed coronavirus cases in the UK. Our work suggests people can handle the truth."

Credit: 
University of Cambridge

Pablo Escobar's hippos may help counteract a legacy of extinctions

image: Introduced herbivores share many key ecological traits with extinct species across the world.

Image: 
Graphic courtesy of University of Kansas/Oscar Sanisidro

AMHERST, Mass. - When cocaine kingpin Pablo Escobar was shot dead in 1993, the four hippos he brought to his private zoo in Colombia were left behind in a pond on his ranch. Since then, their numbers have grown to an estimated 80-100, and the giant herbivores have made their way into the country's rivers. Scientists and the public alike have viewed Escobar's hippos as invasive pests that by no rights should run wild on the South American continent.

A new study published in Proceedings of the National Academy of Sciences by an international group of researchers challenges this view. Through a worldwide analysis comparing the ecological traits of introduced herbivores like Escobar's hippos to those of the past, they reveal that such introductions restore many important traits that have been lost for thousands of years. While human impacts have caused the extinction of several large mammals over the last 100,000 years, humans have since introduced numerous species, inadvertently rewilding many parts of the world such as South America, where giant llamas once roamed, and North America, where the flat-headed peccary could once be found from New York to California.

"While we found that some introduced herbivores are perfect ecological matches for extinct ones, in others cases the introduced species represents a mix of traits seen in extinct species," says study co-author John Rowan, Darwin Fellow in organismic and evolutionary biology at the University of Massachusetts Amherst. "For example, the feral hippos in South America are similar in diet and body size to extinct giant llamas, while a bizarre type of extinct mammal - a notoungulate - shares with hippos large size and semiaquatic habitats. So, while hippos don't perfectly replace any one extinct species, they restore parts of important ecologies across several species."

Rowan was part of an international team of conservation biologists and ecologists from the University of Technology Sydney (UTS) in Australia; the University of Kansas, the University of California, Davis, and the Natural History Museum of Los Angeles County in the U.S.; the University of Sussex in the U.K.; the Universidad de Alcalá in Spain; and Aarhus University in Denmark.

The authors note that what most conservation biologists and ecologists think of as the modern 'natural' world is very different from how it was for the last 45 million years. Until relatively recently, rhino-sized wombat relatives called diprotodons, tank-like armored glyptodons and two-story-tall sloths ruled the world. These giant herbivores began their evolutionary rise not long after the demise of the dinosaurs, but were abruptly driven extinct beginning around 100,000 years ago, most likely due to hunting and other pressures from our Late Pleistocene ancestors.

The researchers found that by introducing species across the world, humans restored lost ecological traits to many ecosystems; making the world more similar to the pre-extinction Late Pleistocene and counteracting a legacy of extinctions.

Erick Lundgren, lead author and Ph.D. student at the UTS Centre for Compassionate Conservation (CfCC), says the possibility that introduced herbivores might restore lost ecological functions had been suggested but not "rigorously evaluated."

To this end, the authors compared key ecological traits of herbivore species from before the Late Pleistocene extinctions to the present day, such as body size, diet and habitat.

"This allowed us to compare species that are not necessarily closely related to each other, but are similar in terms of how they affect ecosystems," Lundgren said. "By doing this, we could quantify the extent to which introduced species make the world more similar or dissimilar to the pre-extinction past. Amazingly they make the world more similar."

This is largely because 64% of introduced herbivores are more similar to extinct species than to local native species. These introduced 'surrogates' for extinct species include evolutionarily close species in some places, like mustangs (wild horses) in North America, where pre-domestic horses of the same species lived but were driven extinct.
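The kind of comparison behind that 64% figure can be sketched in a few lines of Python. The species names and trait values below are hypothetical and the distance measure is deliberately crude; this illustrates the idea of matching each introduced herbivore to its nearest analogue in trait space, not the authors' actual analysis:

    import math

    # (log10 body mass in kg, diet score 0 = grazer .. 1 = browser) -- hypothetical values
    extinct = {"giant_llama": (2.9, 0.5), "notoungulate": (3.1, 0.4)}
    native = {"capybara": (1.7, 0.3), "tapir": (2.4, 0.8)}
    introduced = {"hippo": (3.2, 0.35), "feral_horse": (2.6, 0.2)}

    def nearest(pool, trait):
        """Name of the species in `pool` whose traits are closest to `trait`."""
        return min(pool, key=lambda name: math.dist(pool[name], trait))

    for name, trait in introduced.items():
        d_extinct = math.dist(extinct[nearest(extinct, trait)], trait)
        d_native = math.dist(native[nearest(native, trait)], trait)
        closest = "an extinct species" if d_extinct < d_native else "a surviving native"
        print(f"{name}: closest analogue is {closest}")
    # Tallying such calls across all introduced herbivores gives the kind of
    # percentage reported in the study (64% closest to extinct species).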

"Many people are concerned about feral horses and donkeys in the American southwest, because they aren't known from the continent in historic times," Rowan says. "But this view overlooks the fact that horses had been present in North America for over 50 million years - all major milestones of their evolution, including their origin, takes place here. They only disappeared a few thousand years ago because of humans, meaning the North American ecosystems they have since been reintroduced to had coevolved with horses for millions of years."

"We usually think of nature as defined by the short period of time for which we have recorded history but this is already long after strong and pervasive human influences," said senior author Arian Wallach from UTS CfCC. "Broadening our perspective to include the more evolutionarily relevant past lets us ask more nuanced questions about introduced species and how they affect the world."

When looking beyond the past few hundred years -- to a time before widespread human-caused prehistoric extinctions -- introduced herbivores make the world more similar to the pre-extinction past, bringing with them broader biodiversity benefits, the authors conclude.

Credit: 
University of Massachusetts Amherst

Ancestor of all animals identified in Australian fossils

image: Artist's rendering of Ikaria wariootia.

Image: 
Sohail Wasif/UCR

A team led by UC Riverside geologists has discovered the first ancestor on the family tree that contains most familiar animals today, including humans.

The tiny, wormlike creature, named Ikaria wariootia, is the earliest bilaterian, or organism with a front and back, two symmetrical sides, and openings at either end connected by a gut. The paper is published today in Proceedings of the National Academy of Sciences.

The earliest multicellular organisms, such as sponges and algal mats, had variable shapes. Collectively known as the Ediacaran Biota, this group contains the oldest fossils of complex, multicellular organisms. However, most of these are not directly related to animals around today, including lily pad-shaped creatures known as Dickinsonia that lack basic features of most animals, such as a mouth or gut.

The development of bilateral symmetry was a critical step in the evolution of animal life, giving organisms the ability to move purposefully and a common, yet successful way to organize their bodies. A multitude of animals, from worms to insects to dinosaurs to humans, are organized around this same basic bilaterian body plan.

Evolutionary biologists studying the genetics of modern animals predicted the oldest ancestor of all bilaterians would have been simple and small, with rudimentary sensory organs. Preserving and identifying the fossilized remains of such an animal was thought to be difficult, if not impossible.

For 15 years, scientists agreed that fossilized burrows found in 555 million-year-old Ediacaran Period deposits in Nilpena, South Australia, were made by bilaterians. But there was no sign of the creature that made the burrows, leaving scientists with nothing but speculation.

Scott Evans, a recent doctoral graduate from UC Riverside, and Mary Droser, a professor of geology, noticed minuscule, oval impressions near some of these burrows. With funding from a NASA exobiology grant, they used a three-dimensional laser scanner that revealed the regular, consistent shape of a cylindrical body with a distinct head and tail and faintly grooved musculature. The animal ranged from 2 to 7 millimeters long and about 1 to 2.5 millimeters wide, with the largest the size and shape of a grain of rice -- just the right size to have made the burrows.

"We thought these animals should have existed during this interval, but always understood they would be difficult to recognize," Evans said. "Once we had the 3D scans, we knew that we had made an important discovery."

The researchers, who include Ian Hughes of UC San Diego and James Gehling of the South Australia Museum, describe Ikaria wariootia, named to acknowledge the original custodians of the land. The genus name comes from Ikara, which means "meeting place" in the Adnyamathanha language. It's the Adnyamathanha name for a grouping of mountains known in English as Wilpena Pound. The species name comes from Warioota Creek, which runs from the Flinders Ranges to Nilpena Station.

"Burrows of Ikaria occur lower than anything else. It's the oldest fossil we get with this type of complexity," Droser said. "Dickinsonia and other big things were probably evolutionary dead ends. We knew that we also had lots of little things and thought these might have been the early bilaterians that we were looking for."

In spite of its relatively simple shape, Ikaria was complex compared to other fossils from this period. It burrowed in thin layers of well-oxygenated sand on the ocean floor in search of organic matter, indicating rudimentary sensory abilities. The depth and curvature of Ikaria represent clearly distinct front and rear ends, supporting the directed movement found in the burrows.

The burrows also preserve crosswise, "V"-shaped ridges, suggesting Ikaria moved by contracting muscles across its body like a worm, known as peristaltic locomotion. Evidence of sediment displacement in the burrows and signs the organism fed on buried organic matter reveal Ikaria probably had a mouth, anus, and gut.

"This is what evolutionary biologists predicted," Droser said. "It's really exciting that what we have found lines up so neatly with their prediction."

Credit: 
University of California - Riverside

Stanford researcher investigates how squid communicate in the dark

image: A group of Humboldt squid swim in formation about 200 meters below the surface of Monterey Bay.

Image: 
© 2010 MBARI

In the frigid waters 1,500 feet below the surface of the Pacific Ocean, hundreds of human-sized Humboldt squid feed on a patch of finger-length lantern fish. Zipping past each other, the predators move with exceptional precision, never colliding or competing for prey.

How do they establish such order in the near-darkness of the ocean's twilight zone?

The answer, according to researchers from Stanford University and the Monterey Bay Aquarium Research Institute (MBARI), may be visual communication. These researchers suggest that the squid's ability to subtly glow - using light-producing organs in their muscles - can create a backlight for the shifting pigmentation patterns on their skin, much like the illuminated words on an e-book reader. The creatures may be using these changing patterns to signal one another.

The research is published March 23 in the journal Proceedings of the National Academy of Sciences.

"Many squid live in fairly shallow water and don't have these light-producing organs, so it's possible this is a key evolutionary innovation for being able to inhabit the open ocean," said Benjamin Burford, a graduate student in biology in the School of Humanities and Sciences at Stanford and lead author of the paper. "Maybe they need this ability to glow and display these pigmentation patterns to facilitate group behaviors in order to survive out there."

Seeing the deep sea

Humboldt squid behavior is nearly impossible to study in captivity, so researchers must meet them where they live. For this research, Bruce Robison of MBARI, who is senior author of the paper, captured footage of Humboldt squid off the coast of California using remotely operated vehicles (ROVs), or unmanned, robotic submarines.

While the ROVs could record the squid's skin patterning, the lights the cameras required were too bright to record their subtle glow, so the researchers couldn't test their backlighting hypothesis directly. Instead, they found supporting evidence for it in their anatomical studies of captured squid.

Using the ROV footage, the researchers analyzed how individual squid behaved when they were feeding versus when they were not. They also paid attention to how these behaviors changed depending on the number of other squid in the immediate area - after all, people communicate differently if they are speaking with friends versus a large audience.

The footage confirmed that squid's pigmentation patterns do seem to relate to specific contexts. Some patterns were detailed enough to imply that the squid may be communicating precise messages - such as "that fish over there is mine." There was also evidence that their behaviors could be broken down into distinct units that the squid recombine to form different messages, like letters in the alphabet. Still, the researchers emphasize that it is too early to conclude whether the squid communications constitute a human-like language.

"Right now, as we speak, there are probably squid signaling each other in the deep ocean," said Burford, who is affiliated with the Denny lab at Stanford's Hopkins Marine Station. "And who knows what kind of information they're saying and what kind of decisions they're making based on that information?"

Although these squid can see well in dim light, their vision is probably not especially sharp, so the researchers speculated that the light-producing organs help facilitate the squid's visual communications by boosting the contrast for their skin patterning. They investigated this hypothesis by mapping where these light organs are located in Humboldt squid and comparing that to where the most detailed skin patterns appear on the creatures.

They found that the areas where the illuminating organs were most densely packed - such as a small area between the squid's eyes and the thin edge of their fins - corresponded to those where the most intricate patterns occurred.

Familiar aliens

In the time since the squid were filmed, ROV technology has advanced enough that the team could directly view their backlighting hypothesis in action the next time the squid are observed in California. Burford would also like to create some sort of virtual squid that the team could project in front of real squid to see how they respond to the cyber-squid's patterns and movements.

The researchers are thrilled with what they have found so far but eager to do further research in the deep sea. Although studying the inhabitants of the deep sea where they live can be a frustratingly difficult endeavor, this research has the potential to inform a new understanding of how life functions.

"We sometimes think of squid as crazy lifeforms living in this alien world, but we have a lot in common - they live in groups, they're social, they talk to one another," Burford said. "Researching their behavior and that of other residents of the deep sea is important for learning how life may exist in alien environments, but it also tells us more generally about the strategies used in extreme environments on our own planet."

Credit: 
Stanford University

Beyond your doorstep: What you buy and where you live shapes land-use footprint

image: Princeton researchers developed a tool for examining consumption-based land footprints and found that when direct land-use such as housing is combined with indirect land-use through the consumption of goods and services, each of our imprints on the land could be significantly higher than most people are aware. They identified five individual actions (at left) that could reduce people's indirect land footprint (orange). The percentages indicate the decrease in an individual's indirect footprint by square-foot based on the action taken. The researchers also evaluated how a person's direct land-use (blue) is affected by their housing decisions, including moving into a multi-family dwelling, living in the heart of the nearest city, and relocating from a median-density metro area such as Minneapolis-Saint Paul (MSP) to a more densely populated area such as New York City. Moving to a more urban area reduced a person's total footprint due to the greater availability of goods and services in a city.

Image: 
Courtesy of Lin Zeng, Department of Civil and Environmental Engineering

In recent years, the attention of scientists and environmentalists has turned toward how population growth and urban expansion are driving habitat loss and an associated decline in ecosystem productivity and biodiversity. But the space people directly occupy is only one part of the land-use puzzle, according to new research.

Princeton researchers report in the journal Environmental Science and Technology that when direct land use such as housing is combined with indirect land use -- the land taken up to provide people with goods and services -- each of our imprints on the land could be significantly higher than most people are aware.

The researchers developed a tool for examining what they call consumption-based land footprints (CBLF), which combine the indirect land use associated with providing consumer goods such as food and clothing with direct use such as homes, public parks and roads allocated to personal travel. Their goal was to identify new avenues for reducing the demand for land and the loss of natural ecosystems.

After evaluating urban and rural areas of the United States, the researchers found that the amount of land going toward providing goods and services -- including industrial and agricultural production, transportation and retail -- is much larger than the land people personally occupy. The analysis suggests that consumer behavior could rival housing, locational choices and even urbanization in terms of land use, the researchers said.

"Land is scarce if we're trying to feed and clothe 9 billion people," said co-author Anu Ramaswami, Princeton's Sanjay Swani '87 Professor of India Studies,
professor of civil and environmental engineering and the Princeton Environmental Institute (PEI). "Yes, urban areas are expanding, but they only account for 3% of Earth's land surface."

Ramaswami and first author Lin Zeng, a postdoctoral research associate in civil and environmental engineering, found that the indirect land use of a typical urban resident was approximately 23 times their direct use. Rural residents had an even greater footprint, using about 10 times more land for their homes than their urban counterparts, the researchers found. They also had a slightly larger indirect footprint, amounting to approximately 6% more than their urban counterparts.

These findings highlight the impact an individual has on the landscape well beyond their home, and the importance of the daily decisions we make about our purchases and food habits, Zeng said.

"We're trying to inform people that simple choices can have big impacts," Zeng said.

"There's a lot of research into greenhouse gas-emission footprints and water-use footprints, but there's much less understanding of land-use footprints," she said. "It can be much harder to gather land data, but it's important to understand the impacts we have as consumers. This information can help us lower our footprint on a personal level, and ultimately drive policy."

Ramaswami and Zeng identified five individual actions that could reduce people's indirect land footprint. They found that consumers could reduce indirect land use by nearly 5% if they simply halved avoidable food waste. In addition, removing meat from one's diet once a week resulted in a reduction of more than 3%. Spending roughly 80% less on clothing and using clothes longer reined in land consumption by 2.8%.
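Combining those percentages with the roughly 23-to-1 ratio of indirect to direct land use quoted above gives a sense of what such actions mean for a person's total footprint. The little calculation below is only a rough illustration, not the paper's accounting:

    DIRECT = 1.0                # direct land use of a typical urban resident, arbitrary units
    INDIRECT = 23.0 * DIRECT    # indirect land use is roughly 23x direct for urban residents

    def total_after_indirect_cut(cut_fraction):
        """Total footprint after trimming the indirect part by the given fraction."""
        return DIRECT + INDIRECT * (1.0 - cut_fraction)

    baseline = DIRECT + INDIRECT
    for action, cut in [("halve avoidable food waste", 0.05),
                        ("skip meat one day a week", 0.03)]:
        saving = 100 * (baseline - total_after_indirect_cut(cut)) / baseline
        print(f"{action}: total footprint falls by about {saving:.1f}%")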

Zeng and Ramaswami also looked at how a person's direct land use is affected by their housing decisions, including living in the heart of the nearest city, relocating cross-country to a more densely populated area, or moving into a multi-family dwelling. They focused on three metropolitan areas representing different population densities: New York City as high-density, Minneapolis-Saint Paul as a median-density metro area, and Raleigh, North Carolina, representing low-density.

Within a medium-density area such as Minneapolis-Saint Paul, the researchers found that moving into a multifamily home had little effect on the floor area of the living space, but resulted in a nearly 2% reduction in the overall land footprint. Moving from a rural to an urban area reduced a person's direct and total footprint by 10.6% thanks to the greater availability of goods and services in a city. Relocating entirely from Minneapolis-Saint Paul to a high-density area such as New York City resulted in a 7.6% reduction in total land use.

For low-density metro areas such as Raleigh, the benefits of moving toward a local urban center or to a compact city such as New York City were similar. One notable difference was that moving from a single-family home to a multifamily residence in a more sprawling metro area like Raleigh had a larger land-footprint reduction than in an area similar to the Twin Cities.

"It is the role of the urban planner to develop more compact areas that reduce direct land use," Zeng said. "In contrast, our study found that individual consumers can achieve the same magnitude of reduction through their behavior and by being more conscious of what they consume and how much."

Credit: 
Princeton University

Study: Climatic-niche evolution strikingly similar in plants and animals

image: Results of phylogenetic models are similar between plants (green) and animals (orange), with darker colours indicating greater overlap of data points.

Image: 
LIU Hui

Given the fundamental biological differences between plants and animals, previous research proposed that plants may have broader environmental tolerances than animals but are more sensitive to climate. However, a recent study has found that there are actually "general rules" of climatic-niche evolution that span plants and animals.

Climatic niches describe where species can occur and are essential to determining how they will respond to climate change. For this reason, climatic niches are critically important for answering many of the most fundamental and urgent questions in ecology and evolution. To advance scholarship in this area, Dr. LIU Hui and Dr. YE Qing from the South China Botanical Garden of the Chinese Academy of Sciences, together with Dr. John J. Wiens from the University of Arizona, initiated research on patterns of climatic-niche evolution across plants and animals.

The researchers developed a systematic picture of how climatic niches for species evolve, based on 10 hypotheses. They used phylogenetic and climatic data for 19 globally distributed plant clades and 17 globally distributed vertebrate clades (2,087 species total) along with a large number of phylogenetic models.

The most unexpected finding was that, in both plants and animals, rates of climatic-niche evolution were similarly slow (1.44 and 0.82 °C per million years for mean annual temperature, and 226.0 and 126.0 mm per million years for annual precipitation, for plants and animals respectively), and that niches changed faster in younger (more recently diverged) clades (Fig. 1).

Those rates were about 10,000 times slower than recent and projected rates of climate change, based on calculations conducted by Dr. Wiens. This finding is important because it points to high extinction risks for both plant and animal species, especially if human-caused climatic changes cannot be slowed.
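
As a rough back-of-the-envelope check on that comparison (the warming rate assumed below, about 1.5 °C per century, is an illustrative figure, not a value taken from the paper):

    # Order-of-magnitude comparison of niche-evolution rates with projected warming.
    # The assumed warming rate (~1.5 degC per century) is illustrative only.

    niche_rate = {"plants": 1.44e-6, "animals": 0.82e-6}  # degC per year (degC per Myr / 1e6)
    projected_warming = 1.5 / 100                         # degC per year

    for group, rate in niche_rate.items():
        ratio = projected_warming / rate
        print(f"{group}: climate is changing ~{ratio:,.0f} times faster than niches evolve")
    # Both ratios land on the order of 10,000, consistent with the comparison above.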

"In ecology, people also used to think that species with wider climatic niches might evolve faster, given that widely distributed species might have broad tolerances and fast adaptations," Dr. LIU said. "However, we proved that this was not true." The reasons included one of their confirmed hypotheses, i.e., that climatic-niche widths are dominated by seasonal variation within localities rather than variations across all global localities. Thus, the most dramatic environmental stresses came within small habitats that might drive evolution.

Ultimately, why is climatic-niche evolution similar in plants and animals? In short, niche conservatism, physiological constraints, trade-offs, and latitudinal patterns of variability and seasonality all play important roles in determining global sorting and the evolution of climatic niches of plants and animals. Disentangling these causes will be an exciting area for future research.

Overall, this study explains why biogeographic regions, diversity hotspots, life zones, and richness patterns are often similar between plants and animals, despite both showing enormous diversity. The authors also indicate that plants and animals may have similar responses to climate change, which is important in predicting future species distribution and climatic-niche evolution under climate change scenarios.

Credit: 
Chinese Academy of Sciences Headquarters

It's in our genome: Uncovering clues to longevity from human genetics

image: Identification of causal drivers affecting lifespan by PRS association study, with the use of genetic and clinical data from 700,000 individuals worldwide.

Image: 
Saori Sakaue and Yukinori Okada at Osaka University

Osaka, Japan - The genetic code within DNA has long been thought to determine whether one becomes sick or resists illness. DNA contains the information that makes all the cells composing our bodies and allows them to function. Part of DNA is composed of genes, from which proteins are produced that participate in virtually every process within our cells and organs. While variations in the genetic code determine biological traits, such as eye color, blood type, and risk for diseases, it is often a group of numerous variations with tiny individual effects that influences a phenotypic trait. By harnessing huge amounts of genetic and clinical data worldwide, together with methodological breakthroughs, it is now possible to identify individuals at several-fold increased risk of human diseases using genetic information.

While risk stratification based on genetic information could be one potential strategy to improve population health, a major challenge is that the genetic code itself cannot be modified even when it confers a known increased risk of a particular disease. In a new study, researchers from Osaka University discovered that individuals who have a genetic susceptibility to certain traits, such as high blood pressure or obesity, have a shorter lifespan. "The genetic code contains a lot of information, most of it of unknown significance to us," says corresponding author of the study Yukinori Okada. "The goal of our study was to understand how we can utilize genetic information to discover risk factors for important health outcomes that we can directly influence as health care professionals."

To achieve their goal, the researchers analyzed genetic and clinical information from 700,000 individuals in biobanks in the UK, Finland and Japan. From these data, they calculated polygenic risk scores -- estimates of genetic susceptibility to a biological trait, such as disease risk -- to find out which risk factors causally influence lifespan.
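
Conceptually, a polygenic risk score is a weighted sum: for each genetic variant, the number of risk alleles a person carries is multiplied by that variant's estimated effect size, and the products are summed. The sketch below shows the idea; the variant IDs, effect sizes, and genotype are invented for illustration and do not come from the study.

    # Minimal sketch of a polygenic risk score (PRS): the weighted sum of an
    # individual's risk-allele counts, using per-variant effect sizes from an
    # association study. All values here are invented for illustration.

    effect_sizes = {      # per-allele effect estimates (e.g., GWAS summary statistics)
        "rs0000001": 0.12,
        "rs0000002": -0.05,
        "rs0000003": 0.08,
    }

    genotype = {          # risk-allele counts for one individual (0, 1, or 2)
        "rs0000001": 2,
        "rs0000002": 1,
        "rs0000003": 0,
    }

    def polygenic_risk_score(effects, dosages):
        """PRS = sum over variants of (effect size x allele count)."""
        return sum(effects[v] * dosages.get(v, 0) for v in effects)

    print(f"PRS for this individual: {polygenic_risk_score(effect_sizes, genotype):.3f}")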

"Biobanks are an incredible resource," says lead author of the study Saori Sakaue. "By collaborating with large biobanks in the UK, Finland and Japan, we not only had access to large amounts of data, but also to genetically diverse populations, both of which are necessary to make clinically meaningful conclusions."

The researchers found that high blood pressure and obesity were the two strongest risk factors that reduced lifespan of the current generation. Interestingly, while high blood pressure decreased lifespan across all populations the researchers investigated, obesity significantly reduced lifespan in individuals with European ancestry, suggesting that the Japanese population was somehow protected from the detrimental effects obesity has on lifespan.

"These are striking results that show how genetics can be used to predict health risks," says Okada. "Our findings could offer an approach to utilize genetic information to seek out health risk factors with the goal of providing targeted lifestyle changes and medical treatment. Ultimately, these approaches would be expected to improve the health of the overall population."

Credit: 
Osaka University

Electric cars better for climate in 95% of the world

Fears that electric cars could actually increase carbon emissions are unfounded in almost all parts of the world, new research shows. Media reports have regularly questioned whether electric cars are really "greener" once emissions from production and generating their electricity are taken into account. But a new study by Radboud University with the universities of Exeter and Cambridge has concluded that electric cars lead to lower carbon emissions overall, even if electricity generation still involves substantial amounts of fossil fuel.

Already under current conditions, driving an electric car is better for the climate than conventional petrol cars in 95% of the world, the study finds. The only exceptions are places like Poland, where electricity generation is still mostly based on coal. Average lifetime emissions from electric cars are up to 70% lower than petrol cars in countries like Sweden and France (which get most of their electricity from renewables and nuclear), and around 30% lower in the UK.

In a few years, even inefficient electric cars will be less emission-intensive than most new petrol cars in most countries, as electricity generation is expected to be less carbon-intensive than today. The study projects that in 2050, every second car on the streets could be electric. This would reduce global CO2 emissions by up to 1.5 gigatons per year, which is equivalent to the total current CO2 emissions of Russia.

The study also looked at electric household heat pumps, and found they too produce lower emissions than fossil-fuel alternatives in 95% of the world. Heat pumps could reduce global CO2 emissions in 2050 by up to 0.8 gigatons per year - roughly equal to Germany's current annual emissions.

"We started this work a few years ago, and policy-makers in the UK and abroad have shown a lot of interest in the results," said Dr Florian Knobloch, of the Environmental Science Department at Radboud University (The Netherlands), the lead author of the study. "The answer is clear: to reduce carbon emissions, we should choose electric cars and household heat pumps over fossil-fuel alternatives."

"In other words, the idea that electric vehicles or electric heat pumps could increase emissions is essentially a myth. We've seen a lot of discussion about this recently, with lots of disinformation going around. Here is a definitive study that can dispel those myths. We have run the numbers for all around the world, looking at a whole range of cars and heating systems. Even in our worst-case scenario, there would be a reduction in emissions in almost all cases. This insight should be very useful for policy-makers," said Knobloch.

The study examined the current and future emissions of different types of vehicles and home heating options worldwide. It divided the world into 59 regions to account for differences in power generation and technology. In 53 of these regions - including all of Europe, the US and China - the findings show electric cars and heat pumps are already less emission-intensive than fossil-fuel alternatives. These 53 regions represent 95% of global transport and heating demand and, with energy production decarbonising worldwide, Knobloch said the "last few debatable cases will soon disappear".

The researchers carried out a life-cycle assessment in which they not only calculated greenhouse gas emissions generated when using cars and heating systems, but also in the production chain and waste processing. "Taking into account emissions from manufacturing and ongoing energy use, it's clear that we should encourage the switch to electric cars and household heat pumps without any regrets," Knobloch concluded.
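
In outline, such an assessment adds production and end-of-life emissions to the emissions from the fuel or electricity used over the vehicle's lifetime. The comparison below is a simplified sketch of that accounting; every number (lifetime mileage, energy use, production emissions, grid intensities) is an illustrative assumption, not an input from the study.

    # Simplified life-cycle comparison of an electric vs. a petrol car.
    # All numbers are illustrative assumptions, not inputs from the study.

    LIFETIME_KM = 200_000

    def lifecycle_ev(grid_g_per_kwh, kwh_per_km=0.18, production_t=10.0):
        """Lifetime emissions (tonnes CO2): production plus electricity use."""
        return production_t + grid_g_per_kwh * kwh_per_km * LIFETIME_KM / 1e6

    def lifecycle_petrol(fuel_g_per_km=180.0, production_t=7.0):
        """Lifetime emissions (tonnes CO2): production plus fuel burned."""
        return production_t + fuel_g_per_km * LIFETIME_KM / 1e6

    petrol = lifecycle_petrol()
    for label, grid in [("mostly renewable/nuclear grid (~50 g/kWh)", 50),
                        ("average grid (~450 g/kWh)", 450),
                        ("coal-heavy grid (~900 g/kWh)", 900)]:
        ev = lifecycle_ev(grid)
        print(f"{label}: EV {ev:.1f} t vs petrol {petrol:.1f} t CO2 "
              f"({(petrol - ev) / petrol:.0%} lower)")
    # Under these assumptions the EV comes out well ahead on clean grids and
    # only roughly breaks even on the most coal-heavy grids.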

Credit: 
Radboud University Nijmegen