Lead up to volcanic eruption in Galapagos captured in rare detail

image: Field crew downloading data from a continuously operating Global Positioning System (GPS) station in the Sierra Negra caldera, Galápagos Islands, Ecuador.

Image: 
Keith Williams (UNAVCO, Inc.).

Hours before the 2018 eruption of Sierra Negra, the Galápagos Islands' largest volcano, an earthquake rumbled and raised the ground more than 6 feet in an instant. The event, which triggered the eruption, was captured in rare detail by an international team of scientists, who said it offers new insights into one of the world's most active volcanoes.

"The power of this study is that it's one of the first times we've been able to see a full eruptive cycle in this detail at almost any volcano," said Peter La Femina, associate professor of geosciences at Penn State. "We've monitored Sierra Negra from when it last erupted in 2005 through the 2018 eruption and beyond, and we have this beautiful record that's a rarity in itself."

For nearly two months in 2018, lava erupted from the volcano, covering about 19 square miles of Isabela Island, the largest island in the Galápagos and home to about 2,000 people and endangered animal species like the Galápagos giant tortoise.

"The 2018 eruption of Sierra Negra was a really spectacular volcanic event, occurring in the 'living laboratory' of the Galápagos Islands," said Andrew Bell, a volcanologist at the University of Edinburgh. "Great teamwork, and a bit of luck, allowed us to capture this unique dataset that provides us with important new understanding as to how these volcanoes behave, and how we might be able to better forecast future eruptions."

While Sierra Negra is among the world's most active volcanoes, its remote location previously made monitoring difficult. Scientists now use networks of ground-based seismic and GPS monitoring stations, together with satellite observations, to observe the volcano.

"Based on constant monitoring of activity of Galápagos volcanoes, we detected a dramatic increase of seismicity and a steady uplift of the crater floor at Sierra Negra," said Mario Ruiz, director of the Ecuador Geophysical Institute, the country's national monitoring agency. "Soon we contacted colleagues from the United Kingdom, United States and Ireland and proposed that we work together to investigate the mechanisms leading to an impending eruption of this volcano. This research is an example of international collaboration and partnership."

The scientists captured data over 13 years as the volcano's magma chamber gradually refilled following the 2005 eruption, stressing the surrounding crust and creating earthquakes. This continued until June 2018, when an earthquake occurred on the caldera's fault system and triggered the subsequent eruption, the scientists said.

"We have this story of magma coming in and stressing the system to the point of failure and the whole system draining again through the eruption of lava flows," La Femina said. "This is the first time anyone's seen that in the Galápagos to this detail. This is the first time we've had the data to say, 'okay, this is what happened here.'"

Often during volcanic eruptions, as magma chambers empty, the ground above them sinks and forms a bowl-like depression, or caldera. But Sierra Negra experienced a caldera resurgence, leaving this area higher in elevation than it was before the eruption, the scientists said.

Inside the Sierra Negra caldera is a "trap-door fault," which is hinged at one end while the other can be uplifted by rising magma. The scientists found the fault caused hills inside of the six-mile-wide caldera to lift vertically by more than 6 feet during the earthquake that triggered the eruption.

Caldera resurgence, important to better understanding eruptions, had not been previously observed in such detail, the scientists reported in the journal Nature Communications.

"Resurgence is typical of explosive calderas at volcanoes like Yellowstone, not the kind of shield volcanoes we see in the Galápagos or Hawaii," La Femina said. "This gives us the ability to look at other volcanoes in the Galápagos and say, 'well that's what could have happened to form that caldera or that resurgent ridge.'"

The scientists said the findings could help their counterparts in Ecuador better track unrest and warn of future eruptions.

"There are people who live on Isabela Island, so studying and understanding how these eruptions occur is important to manage the hazards and risks to local populations," La Femina said.

Credit: 
Penn State

New technology allows scientists first glimpse of intricate details of Little Foot's life

image: The Little Foot fossil skull in Diamond's beamline I12

Image: 
Copyright Diamond Light Source Ltd

In June 2019, an international team brought the complete skull of the 3.67-million-year-old Little Foot Australopithecus skeleton from South Africa to the UK and achieved unprecedented imaging resolution of its bony structures and dentition in an X-ray synchrotron-based investigation at the UK's national synchrotron, Diamond Light Source. The X-ray work is highlighted in a new paper in eLife, published today (2 March 2021), focusing on the inner craniodental features of Little Foot. The remarkable completeness and great age of the Little Foot skeleton make it a crucially important specimen in human origins research and a prime candidate for exploring human evolution through high-resolution virtual analysis.

To recover the smallest possible details from a fairly large and very fragile fossil, the team decided to image the skull using synchrotron X-ray micro computed tomography at the I12 beamline at Diamond, revealing new information about human evolution and origins. This paper outlines preliminary results of the X-ray synchrotron-based investigation of the dentition and bones of the skull (i.e., cranial vault and mandible).

Lead author and Principal Investigator Dr Amelie Beaudet, of the Department of Archaeology, University of Cambridge, and honorary researcher at the University of the Witwatersrand (Wits University), explains: "We had the unique opportunity to look at the finest details of the craniodental anatomy of the Little Foot skull. While scanning it, we did not know how well the smallest structures would be preserved in this individual, who lived more than 3.5 million years ago. So, when we were finally able to examine the images, we were all very excited and moved to see such intimate details of the life of Little Foot for the first time. The microstructures observed in the enamel indicate that Little Foot suffered through two clear periods of dietary stress or illness when she was a child."

The team were also able to observe and describe the vascular canals that are enclosed in the compact bone of the mandible. These structures have the potential to reveal a lot about the biomechanics of eating in this individual and its species, but also more broadly about how bone was remodelled in Little Foot. The branching pattern of these canals indicates that some remodelling took place, perhaps in response to changes in diet, and that Little Foot died as an older individual.

The team also observed tiny (i.e., less than 1 mm) channels in the braincase that are possibly involved in brain thermoregulation (i.e., how to cool down the brain). Brain size increased dramatically throughout human evolution (about threefold), and, because the brain is very sensitive to temperature change, understanding how temperature regulation evolves is of prime interest. Dr Amelie Beaudet adds: "Traditionally, none of these observations would have been possible without cutting the fossil into very thin slices, but with the application of synchrotron technology there is an exciting new field of virtual histology being developed to explore the fossils of our distant ancestors."

Dr Thomas Connolley, Principal Beamline Scientist at Diamond commented:
"Important aspects of early hominin biology remain debated, or simply unknown. In that context, synchrotron X-ray imaging techniques like microtomography have the potential to non-destructively reveal crucial details on the development, physiology, biomechanics and taxonomy of fossil specimens. Little Foot's skull was also scanned using the adjacent IMAT neutron instrument at ISIS Neutron and Muon Source, combining X-ray and neutron imaging techniques in one visit to the UK. With such a rich volume of information collected, we're eager to make more discoveries in the complementary X-ray and neutron tomography scans."

Applications of X-ray synchrotron-based analytical techniques in evolutionary studies have opened up new avenues in the field of (paleo)anthropology. In particular, X-ray synchrotron microtomography has proved to be enormously useful for observing the smallest anatomical structures in fossils, which could traditionally only be seen by slicing through the bones and examining them under a microscope. Over the last decade, a growing number of palaeoanthropological studies have used synchrotron radiation to investigate teeth and brain imprints in fossil hominins. However, scanning a complete skull such as Little Foot's while aiming to reveal very small details at very high resolution was quite challenging, but the team managed to develop a new protocol that made this possible.

Principal Investigator and Associate Professor Dominic Stratford, of the University of the Witwatersrand (Wits University), School of Geography, Archaeology and Environmental Studies, says: "This level of resolution is providing us with remarkably clear evidence of this individual's life. We think there will also be a hugely significant evolutionary aspect, as studying this fossil in this much detail will help us understand which species she evolved from and how she differs from others found at a similar time in Africa. This is just our first paper so watch this space. Funding permitting, we hope to be able to bring other parts of Little Foot to Diamond," adding:

"This research was about bringing the best-preserved Australopithecus skull to the best synchrotron facility for our purposes. Traditionally, hominins have been analysed by measuring and describing the exterior shapes of their fossilised bones to assess how these differ between species. Synchrotron development and microCT resources mean that we are now able to virtually observe structures inside the fossils, which hold a wealth of information. More recently, technology has developed to such an extent that we can now virtually explore minute histological structures in three dimensions, opening new avenues for our research."

The first bones of the Little Foot fossil were discovered in the Sterkfontein Caves, northwest of Johannesburg, by Professor Ron Clarke of the University of the Witwatersrand in 1994. Following their discovery of the location of the skeleton in 1997, Professor Clarke and his team spent more than 20 years painstakingly removing the skeleton in stages from the concrete-like cave breccia using a small airscribe (a vibrating needle). Following cleaning and reconstruction, the skeleton was publicly unveiled in 2018. Wits University is the custodian of StW 573, the Little Foot fossil.

Professor Ron Clarke, the British scientist based in South Africa who discovered and excavated Little Foot and conducted all the early examinations of the fossil, was also part of the research team and concludes: "It has taken us 23 years to get to this point. This is an exciting new chapter in Little Foot's history, and this is only the first paper resulting from her first trip out of Africa. We are constantly uncovering new information from the wealth of new data that was obtained. We hope this endeavour will lead to more funding to continue our work. Our team and PAST* emphasise that all of humanity has had a long-shared ancestry in harmony with the natural world, and that learning from those earliest ancestors gives us perspective on the necessity to conserve nature and our planet."

This paper is the first in what is expected to be a series resulting from the wealth of data that the Principal Investigators from the University of the Witwatersrand in South Africa and the University of Cambridge in the UK, together with co-investigators from the Natural History Museum and Diamond, were able to gain from their collaboration. Little Foot also underwent neutron imaging at STFC's ISIS Neutron and Muon Source at the same time as the work undertaken at Diamond Light Source, providing unprecedented access to complementary advanced imaging techniques. Neutrons are absorbed very differently from X-rays by the fossil's interior parts, thanks to the sensitivity of neutrons to certain chemical elements. Despite having coarser spatial resolution, neutron tomography can sometimes differentiate between mineralogical constituents for which X-ray contrast is very low.

Credit: 
Diamond Light Source

Black NBA players have shorter careers than white players

COLUMBUS, Ohio - Black players in the NBA have 30% greater odds of leaving the league in any given season than white players who have equivalent performance on the court, a new study finds.

The results were driven mostly by bench players, who are the majority of those in the league, but who average less than 20 minutes of action per game.

These findings suggest that even in the NBA - a league in which Black players make up 70-75% of those on the court - African Americans face discrimination, said Davon Norris, lead author of the study and a doctoral student in sociology at The Ohio State University.

"If there is going to be anywhere in America where you would expect there wouldn't be racial disparities, it would be the NBA," Norris said.

"But even here we find there is an advantage to being white for most."

The disparity is not obvious if you look at raw data on career lengths, because Black and white players leave the league at similar rates, said study co-author Corey Moss-Pech, a PhD graduate of Ohio State who is now a postdoctoral research fellow in sociology at the University of Michigan.

"We see the effects when we account for performance," Moss-Pech said.

"Black players tend to be better than white players, according to the data. They should have longer careers, but they don't."

The study was published recently in the journal Social Forces.

The researchers analyzed the career lengths of all NBA players who began their careers in or after the 1979-80 season, following them through the conclusion of the 2016-17 season.

The final sample included 2,611 players, who were considered for the purposes of the study as either Black, white or international.

The average career length in the NBA is only five years, Norris said, most likely because most players are cut by teams before they would choose to leave.

In the main analysis, the researchers evaluated performance using the same advanced metrics that NBA teams use, including player efficiency rating, offensive win shares and defensive win shares.

These measures combine stats like points scored, rebounds, steals and blocks into succinct comprehensive measures of performance.

They separated players into starters/key players (more than 30 minutes on the court per game), role players (between 20 and 30 minutes per game) and bench players (fewer than 20 minutes per game).

After taking on-court performance into account, there were "stark differences" in how long equally effective Black and white players stayed in the league.

For example, take the defensive win shares statistic, which measures the ability to prevent opposing teams from scoring. The researchers compared career length for white and Black players who were identical on this statistic, at the 50th percentile.

The proportion of white players at this level who left the league by their fifth season was 26%, significantly lower than the 33% of Black players who performed just as well but were no longer in the league after five years.
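The study's headline figure is phrased in odds rather than proportions, and the two are easy to conflate. As a rough illustration only (simple arithmetic on the two exit proportions quoted above, not the study's performance-adjusted model, which is where the 30% figure comes from), the raw odds ratio implied by this single comparison can be computed directly:

```python
# Illustrative arithmetic only: the study's 30% figure is a model-adjusted
# estimate, not this raw two-group comparison.
def odds(p):
    """Convert a probability to odds, p / (1 - p)."""
    return p / (1 - p)

p_black = 0.33  # share of Black players at this level gone by season five
p_white = 0.26  # share of comparable white players gone by season five
odds_ratio = odds(p_black) / odds(p_white)
print(round(odds_ratio, 2))  # raw odds ratio for this one pairing
```

Note how a 7-percentage-point gap in proportions corresponds to a considerably larger gap in odds, which is why odds-based findings can sound different from the underlying percentages.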

Not all players in the NBA were affected similarly.

For those who were starters, there was no significant difference in career length based on race. For role players, Black players were at a disadvantage, but the difference was not large enough to be statistically significant.

However, there was a significant difference in bench players based on race, Norris said. About half of Black players and 65% of white players in the NBA are classified as bench players in this study.

"For those on the bench, being white really gives you an advantage," he said. "We have these cultural stereotypes that white men distinctly lack ability at basketball, but our analysis shows that this has little bearing on how long they last in the league."

The question becomes why. It doesn't appear to be associated with the race of the coach. The study found similar results whether there was a Black or white coach.

While the data in the study can't explain the disparity, Moss-Pech said, it may be that Black players are viewed differently from their white teammates off the court.

"Bench players may be more valued for their 'locker room presence' or for being a 'good teammate' than for their in-game performance. But concepts like 'good teammate' are likely racialized in a biased way that benefits white players," he said.

"White players may fit more comfortably into these team and organizational roles. We need more research that directly examines whether white and Black bench players are perceived differently in the media, by fans, by players or by decision makers."

The results have implications beyond the NBA, Norris said. They suggest that efforts to objectively measure employee effectiveness won't be enough to eliminate discrimination.

"NBA teams have all these stats available to measure productivity, and distinguish good from bad players, and yet we still see that disadvantages persist," he said.

"There are underlying structural and organizational processes at work that can undermine even the best efforts to objectively measure performance."

Credit: 
Ohio State University

Origin of life - The chicken-and-the-egg problem

A Ludwig-Maximilians-Universitaet (LMU) in Munich team has shown that slight alterations in transfer-RNA molecules (tRNAs) allow them to self-assemble into a functional unit that can replicate information exponentially. tRNAs are key elements in the evolution of early life-forms.

Life as we know it is based on a complex network of interactions, which take place at microscopic scales in biological cells, and involve thousands of distinct molecular species. In our bodies, one fundamental process is repeated countless times every day. In an operation known as replication, proteins duplicate the genetic information encoded in the DNA molecules stored in the cell nucleus - before distributing them equally to the two daughter cells during cell division. The information is then selectively copied ('transcribed') into what are called messenger RNA molecules (mRNAs), which direct the synthesis of the many different proteins required by the cell type concerned. A second type of RNA - transfer RNA (tRNA) - plays a central role in the 'translation' of mRNAs into proteins. Transfer RNAs act as intermediaries between mRNAs and proteins: they ensure that the amino-acid subunits of which each particular protein consists are put together in the sequence specified by the corresponding mRNA.

How could such a complex interplay between DNA replication and the translation of mRNAs into proteins have arisen when living systems first evolved on the early Earth? We have here a classical example of the chicken-and-the-egg problem: Proteins are required for transcription of the genetic information, but their synthesis itself depends on transcription.

LMU physicists led by Professor Dieter Braun have now demonstrated how this conundrum could have been resolved. They have shown that minor modifications in the structures of modern tRNA molecules permit them to autonomously interact to form a kind of replication module, which is capable of exponentially replicating information. This finding implies that tRNAs, the key intermediaries between transcription and translation in modern cells, could also have been the crucial link between replication and translation in the earliest living systems. It could therefore provide a neat solution to the question of which came first: genetic information or proteins.

Strikingly, in terms of their sequences and overall structure, tRNAs are highly conserved in all three domains of life, i.e. the unicellular Archaea and Bacteria (which lack a cell nucleus) and the Eukaryota (organisms whose cells contain a true nucleus). This fact in itself suggests that tRNAs are among the most ancient molecules in the biosphere.

Like the later steps in the evolution of life, the evolution of replication and translation - and the complex relationship between them - was not the result of a sudden single step. It is better understood as the culmination of an evolutionary journey. "Fundamental phenomena such as self-replication, autocatalysis, self-organization and compartmentalization are likely to have played important roles in these developments," says Dieter Braun. "And on a more general note, such physical and chemical processes are wholly dependent on the availability of environments that provide non-equilibrium conditions."

In their experiments, Braun and his colleagues used a set of reciprocally complementary DNA strands modeled on the characteristic form of modern tRNAs. Each was made up of two 'hairpins' (so called because each strand could partially pair with itself and form an elongated loop structure), separated by an informational sequence in the middle. Eight such strands can interact via complementary base-pairing to form a complex. Depending on the pairing patterns dictated by the central informational regions, this complex was able to encode a 4-digit binary code.
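The counting behind that 4-digit code is simple: with two distinguishable central sequence types at four informational positions, there are 2^4 = 16 possible templates. A toy enumeration (labels `0` and `1` are stand-ins for the two sequence types, not the paper's actual DNA sequences) makes this concrete:

```python
from itertools import product

# Toy model: two central-sequence types ("0" and "1") at four positions
# give 2**4 = 16 distinct encodable templates.
templates = ["".join(bits) for bits in product("01", repeat=4)]
print(len(templates))  # 16 distinct 4-digit binary codes
print(templates[:4])   # first few codes in lexicographic order
```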

Each experiment began with a template - an informational structure made up of two types of the central informational sequences that define a binary sequence. This sequence dictated the form of the complementary molecule with which it can interact in the pool of available strands. The researchers went on to demonstrate that the templated binary structure can be repeatedly copied, i.e. amplified, by applying a repeating sequence of temperature fluctuations between warm and cold. "It is therefore conceivable that such a replication mechanism could have taken place on a hydrothermal microsystem on the early Earth," says Braun. In particular, aqueous solutions trapped in porous rocks on the seafloor would have provided a favorable environment for such reaction cycles, since natural temperature oscillations, generated by convection currents, are known to occur in such settings.

During the copying process, complementary strands (drawn from the pool of molecules) pair up with the informational segment of the template strands. In the course of time, the adjacent hairpins of these strands also pair up to form a stable backbone, and temperature oscillations continue to drive the amplification process. If the temperature is increased for a brief period, the template strands are separated from the newly formed replicate, and both can then serve as template strands in the next round of replication.

The team was able to show that the system is capable of exponential replication. This is an important finding, as it shows that the replication mechanism is particularly resistant to collapse owing to the accumulation of errors. The fact that the structure of the replicator complex itself resembles that of modern tRNAs suggests that early forms of tRNA could have participated in molecular replication processes, before tRNA molecules assumed their modern role in the translation of messenger RNA sequences into proteins. "This link between replication and translation in an early evolutionary scenario could provide a solution to the chicken-and-the-egg problem," says Alexandra Kühnlein. It could also account for the characteristic form of proto-tRNAs, and elucidate the role of tRNAs before they were co-opted for use in translation.
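Why thermal cycling yields exponential rather than linear growth can be sketched with a toy doubling model (an idealization of the mechanism described above, not the paper's measured kinetics): after each heating step separates template from replicate, both strands can serve as templates in the next round, so the population multiplies each cycle.

```python
def amplify(n0, cycles, efficiency=1.0):
    """Idealized thermal-cycling amplification: each cycle, every strand
    templates one new copy with the given per-cycle efficiency."""
    n = float(n0)
    for _ in range(cycles):
        n += n * efficiency  # heating separates the duplex; both strands template again
    return n

print(amplify(1, 10))        # perfect doubling: 2**10 strands after 10 cycles
print(amplify(1, 10, 0.8))   # even at 80% per-cycle efficiency, growth stays exponential
```

The contrast with linear amplification (one fixed template producing one copy per cycle, giving only `n0 + cycles` strands) is what makes exponential replication robust against dilution and error accumulation.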

Laboratory research on the origin of life and the emergence of Darwinian evolution at the level of chemical polymers also has implications for the future of biotechnology. "Our investigations of early forms of molecular replication and our discovery of a link between replication and translation bring us a step closer to the reconstruction of the origin of life," Braun concludes.

Credit: 
Ludwig-Maximilians-Universität München

USC study shows promising potential for marine biofuel

image: Diver attaches kelp to an early prototype of the kelp elevator.

Image: 
Maurice Roper

For several years now, the biofuels that power cars, jet airplanes, ships and big trucks have come primarily from corn and other mass-produced farm crops. Researchers at USC, though, have looked to the ocean for what could be an even better biofuel crop: seaweed.

Scientists at the USC Wrigley Institute for Environmental Studies on Santa Catalina Island, working with private industry, report that a new aquaculture technique on the California coast dramatically increases kelp growth, yielding four times more biomass than natural processes. The technique employs a contraption called the "kelp elevator" that optimizes growth for the bronze-colored floating algae by raising and lowering it to different depths.

The team's newly published findings suggest it may be possible to use the open ocean to grow kelp crops for low-carbon biofuel similar to how land is used to harvest fuel feedstocks such as corn and sugarcane -- and with potentially fewer adverse environmental impacts.

The National Research Council has indicated that generating biofuels from feedstocks like corn and soybeans can increase water pollution: farmers use pesticides and fertilizers on the crops that can end up polluting streams, rivers and lakes. Despite those well-evidenced drawbacks, 7% of the nation's transportation fuel still comes from major food crops, nearly all of it corn-based ethanol.

"Forging new pathways to make biofuel requires proving that new methods and feedstocks work. This experiment on the Southern California coast is an important step because it demonstrates kelp can be managed to maximize growth," said Diane Young Kim, corresponding author of the study, associate director of special projects at the USC Wrigley Institute and a professor of environmental studies at the USC Dornsife College of Letters, Arts and Sciences.

The study was published on Feb. 19 in the journal Renewable and Sustainable Energy Reviews. The authors include researchers from USC Dornsife, which is home to the Wrigley Institute, and the La Cañada, California-based company Marine BioEnergy, Inc., which designed and built the experimental system for the study and is currently designing the technology for open-ocean kelp farms.

Though not without obstacles, kelp shows serious promise as biofuel crop

Government and industry see promise in a new generation of climate-friendly biofuels to reduce net carbon dioxide emissions and dependence on foreign oil. New biofuels could either supplement or replace gasoline, diesel, jet fuel and natural gas.

If it lives up to its potential, kelp is a more attractive option than the usual biofuel crops -- corn, canola, soybeans and switchgrass -- for two very important reasons. For one, ocean crops do not compete for fresh water, agricultural land or artificial fertilizers. And secondly, ocean farming does not threaten important habitats the way bringing marginal land into cultivation can.

The scientists focused on giant kelp, Macrocystis pyrifera, the seaweed that forms majestic underwater forests along the California coast and elsewhere and washes onto beaches in dense mats. Kelp is one of nature's fastest-growing plants and its life cycle is well understood, making it amenable to cultivation.

But farming kelp requires overcoming a few obstacles. To thrive, kelp has to be anchored to a substrate and only grows in sun-soaked waters to about 60 feet deep. But in open oceans, the sunlit surface layer lacks nutrients available in deeper water.

To maximize growth in this ecosystem, the scientists had to figure out how to give kelp a foothold to hang onto, lots of sunlight and access to abundant nutrients. And they had to see if kelp could survive deeper below the surface. So, Marine BioEnergy invented the concept of depth-cycling the kelp, and USC Wrigley scientists conducted the biological and oceanographic trial.

The kelp elevator consists of fiberglass tubes and stainless-steel cables that support the kelp in the open ocean. Juvenile kelp is affixed to a horizontal beam, and the entire structure is raised and lowered in the water column using an automated winch.

Beginning in 2019, research divers collected kelp from the wild, affixed it to the kelp elevator and then deployed it off the northwest shore of Catalina Island, near Wrigley's marine field station. Every day for about 100 days, the elevator raised the kelp to near the surface during the day so it could soak up sunlight, then lowered it to about 260 feet at night so it could absorb nitrate and phosphate in the deeper water. Meanwhile, the researchers continually checked water conditions and temperature while comparing their kelp to control groups raised in natural conditions.

"We found that depth-cycled kelp grew much faster than the control group of kelp, producing four times the biomass," Kim said.

The push to develop a new generation of biofuels

Prior to the experiment, it was unclear whether kelp could effectively absorb the nutrients in the deep, cold and dark environment. Nitrate is a big limiting factor for plants and algae, but the study suggests that the kelp found all it needed to thrive when lowered into deep water at night. Equally important, the kelp was able to withstand the greater underwater pressure.

Brian Wilcox, co-founder and chief engineer of Marine BioEnergy, said: "The good news is the farm system can be assembled from off-the-shelf products without new technology. Once implemented, depth-cycling farms could lead to a new way to produce affordable, carbon-neutral fuel year-round."

Cindy Wilcox, co-founder and president of Marine BioEnergy, estimates that it would take a Utah-sized patch of ocean to make enough kelp biofuel to replace 10% of the liquid petroleum consumed annually in the United States. A patch that size would take up only 0.13% of the total Pacific Ocean.
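The 0.13% figure is easy to sanity-check with published areas (rounded public figures assumed here, not values from the study: Utah at roughly 220,000 km², the Pacific Ocean at roughly 165 million km²):

```python
# Back-of-envelope check of the "0.13% of the Pacific" claim.
# Both areas are rounded public figures, not numbers from the study.
utah_km2 = 219_890           # approximate total area of Utah
pacific_km2 = 165_250_000    # approximate total area of the Pacific Ocean
share = utah_km2 / pacific_km2
print(f"{share:.2%}")        # fraction of the Pacific a Utah-sized patch covers
```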

Developing a new generation of biofuels has been a priority for California and the federal government. The U.S. Department of Energy's Advanced Research Projects Agency-Energy invested $22 million in efforts to increase marine feedstocks for biofuel production, including $2 million to conduct the kelp elevator study. The Department of Energy has an ongoing study to identify a billion tons of feedstock per year for biofuels; Cindy Wilcox of Marine BioEnergy said the ocean between California, Hawaii and Alaska could contribute to that goal, helping make the U.S. a leader in this new energy technology.

Credit: 
University of Southern California

Coronavirus-like particles could ensure reliability of simpler, faster COVID-19 tests

image: Coronavirus-like nanoparticles, made from plant viruses and bacteriophage, could serve as positive controls for the RT-LAMP test.

Image: 
Soo Khim Chan

Rapid COVID-19 tests are on the rise to deliver results faster to more people, and scientists need an easy, foolproof way to know that these tests work correctly and the results can be trusted. Nanoparticles that pass detection as the novel coronavirus could be just the ticket.

Such coronavirus-like nanoparticles, developed by nanoengineers at the University of California San Diego, would serve as something called a positive control for COVID-19 tests. Positive controls are samples that always test positive. They are run and analyzed right alongside patient samples to verify that COVID-19 tests are working consistently and as intended.

The positive controls developed at UC San Diego offer several advantages over the ones currently used in COVID-19 testing: they do not need to be kept cold; they are easy to manufacture; they can be included in the entire testing process from start to finish, just like a patient sample; and because they are not actual virus samples from COVID-19 patients, they do not pose a risk of infection to the people running the tests.

Researchers led by Nicole Steinmetz, a professor of nanoengineering at UC San Diego, published their work in the journal Biomacromolecules.

This work builds on an earlier version of the positive controls that Steinmetz's lab developed for the RT-PCR test, which is the gold standard for COVID-19 testing. The positive controls in the new study can be used not only for the RT-PCR test, but also for a cheaper, simpler and faster test called the RT-LAMP test, which can be done on the spot and provide results in about an hour.

Having a hardy tool to ensure these tests are running accurately--especially for low-tech diagnostic assays like the RT-LAMP--is critical, Steinmetz said. It could help enable rapid, mass testing of COVID-19 in low-resource, underserved areas and other places that do not have access to sophisticated testing equipment, specialized reagents and trained professionals.

Upgraded positive controls

The new positive controls are essentially tiny virus shells--made of either plant virus or bacteriophage--that house segments of coronavirus RNA inside. The RNA segments include binding sites for both of the primers used in the PCR and LAMP tests.

"This design creates an all-in-one control that can be used for either one of these assays, making it very versatile," said first author Soo Khim Chan, who is a postdoctoral researcher in Steinmetz's lab.

The team developed two types of positive controls. One was made from plant virus nanoparticles. To make them, the researchers infected cowpea plants in the lab with cowpea chlorotic mottle virus and then extracted the viruses from the plants. Afterwards, the researchers removed the virus' RNA and replaced it with a custom-made RNA template containing specific yet non-infectious sequences from the SARS-CoV-2 virus. The resulting nanoparticles consist of coronavirus RNA sequences packaged inside plant virus shells.

The other positive control was made from bacteriophage nanoparticles. It involved a similar recipe. The researchers infected E. coli bacteria with custom-made plasmids--rings of DNA--that contain specific fragments of sequences (which are also non-infectious) from the SARS-CoV-2 virus, as well as genes coding for surface proteins of a bacteriophage called Qbeta. This process caused the bacteria to produce nanoparticles that consist of coronavirus RNA sequences packaged inside bacteriophage shells.

The plant virus and bacteriophage shells are key to making these positive controls so sturdy. They protect the coronavirus RNA segments from breaking down at warmer temperatures--tests showed that they can be stored for a week at temperatures up to 40 C (104 F). The shells also protect the RNA during the first step of the PCR and LAMP tests, which involves breaking down cells in the sample--via enzymes or heat--to release their genetic material for testing.

These protections are not present in the positive controls currently used in COVID-19 testing (naked synthetic RNAs, plasmids or RNA samples from infected patients). That's why existing controls either require refrigeration (which makes them inconvenient to handle and costly to ship and store) or have to be added at a later stage of the test (which means scientists will not know if something went wrong in the first steps).

As a next step, the researchers are looking to partner up with industry to implement this technology. The positive controls can be adapted to any established RT-PCR or RT-LAMP assay, and using them would help negate false readouts, Steinmetz's team said. Plus, these positive controls can be easily produced in large quantities by molecular farming in plants or microbial culture fermentation, which is good news for translating them to large-scale manufacturing.

"With mutants and variants emerging, continued testing will be imperative to keep the population safe," Steinmetz said. "The new technology could find utility in particular for at-home tests, which may have a higher rate of false readouts due to the less controlled experimental conditions."

Credit: 
University of California - San Diego

'A Bluetooth mouse'--you can wirelessly read a mouse's mind

image: Mouse with a head-mounted Bluetooth wireless system that transmits neuronal signals from microneedle electrodes implanted in the cortex

Image: 
COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.

Overview:

A research team at the Department of Electrical and Electronic Information Engineering, Department of Computer Science and Engineering, Department of Applied Chemistry and Life Science, and the Electronics-Inspired Interdisciplinary Research Institute (EIIRIS) at Toyohashi University of Technology has developed a lightweight, compact, Bluetooth-low-energy-based wireless neuronal recording system for use in mice. The wireless system weighs 3.9 g with the battery and offers high signal quality, good versatility, and low cost compared to wired recording with a commercial neurophysiology system. The study was published online in Sensors and Actuators B: Chemical on January 8, 2021.

Details:

Electrophysiological recording, which uses micro-scale needle-like electrodes inserted into brain tissue, has made significant contributions to fundamental neuroscience and medical applications. It nevertheless needs improvements in signal quality, invasiveness, and cable use. Although wireless recording can address these issues, conventional wireless systems are too heavy and bulky for small animals such as mice, and systems based on custom technologies are costly and lack versatility.

The research team developed a lightweight, compact, wireless Bluetooth-low-energy neuronal recording system. As explained by the first author of the article, Ph.D. candidate Shinnosuke Idogawa, "We tackled the challenge of developing a lightweight and compact wireless neuronal recording system for use in mice and developed a 15 × 15 × 12 mm3 system weighing 3.9 g with the battery, which is less than 15% of a mouse's weight (e.g., 33 g for a two-week-old C57BL/6 mouse). Surprisingly, the wireless system demonstrates advantages of not only recording without using any cables, but also improvements in signal quality, including signal-to-noise ratios, compared to wired recording with a commercial neurophysiology system. In addition to these advantages, the developed wireless system costs USD $79.90, which is less than the wired system."
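The stated weight budget is easy to verify: dividing the quoted device mass by the quoted mouse mass gives roughly 11.8%, comfortably under the 15% threshold mentioned above.

```python
# Check the quoted weight budget: a 3.9 g wireless system on a 33 g mouse.
device_g = 3.9
mouse_g = 33.0

fraction = device_g / mouse_g
print(f"{fraction:.1%}")   # about 11.8%
assert fraction < 0.15     # consistent with the "less than 15%" claim
```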

Development Background:

The leader of the research team, Associate Professor Takeshi Kawano said "We demonstrated the wireless system for single channel recording as our first step, but we can increase the channel numbers based on our system, and we are currently developing wireless systems for four-channels and more. Because we use Bluetooth technology, the device features will help us further develop small wireless neurophysiology systems with the advantages of good versatility and low cost for a wide range of users."

Future Outlook:

The research team believes that the wireless recording system can also be used to study the behavioral characteristics of mice as well as drug screening using mice. Because of its light weight, compactness, and Bluetooth technology, the developed wireless neuronal recording system can also be used with other species, including rats and monkeys.

Credit: 
Toyohashi University of Technology (TUT)

Sniffing in the name of science

image: Annegret Grimm-Seyfarth with Border Collie "Zammy" in search of endangered crested newts.

Image: 
Daniel Peter

The lists of Earth's endangered animals and plants are getting increasingly longer. But in order to stop this trend, we require more information. It is often difficult to find out exactly where the individual species can be found and how their populations are developing. According to a new overview study published in Methods in Ecology and Evolution by Dr Annegret Grimm-Seyfarth from the Helmholtz Centre for Environmental Research (UFZ) and her colleagues, specially trained detection dogs can be indispensable in such cases. With the help of these dogs, the species sought can usually be found faster and more effectively than with other methods.

How many otters are there still in Germany? What habitats do threatened crested newts use on land? And do urban hedgehogs have to deal with different problems than their rural conspecifics? Anyone wishing to effectively protect a species should be able to answer such questions. But this is by no means easy. Many animals remain in hiding - even their droppings can be difficult to find. Thus, it is often difficult to know whether and at what rate their populations are shrinking or where the remaining survivors are. "We urgently need to know more about these species", says Dr Annegret Grimm-Seyfarth of the UFZ. "But first we must find them".

Remote sensing with aerial and satellite images is useful for mapping open landscapes or detecting larger animals. But when it comes to densely overgrown areas and smaller, hidden species, experts often carry out the search themselves or work with cameras, hair traps, and similar tricks. Other techniques (e.g. analysing trace amounts of DNA) have also been attracting increasing interest worldwide. The use of specially trained detection dogs can also be particularly useful. After all, a dog's sense of smell is virtually predestined to find the smallest traces of the target species. While humans have about six million olfactory receptors, a herding dog has more than 200 million - and a beagle even 300 million. This means that dogs can perceive an extremely wide range of odours, often in the tiniest concentrations. For example, they can easily find animal droppings in a forest or plants, mushrooms, and animals underground.
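Using the receptor counts quoted in the article, the gap between species can be put in ratio form:

```python
# Ratio of olfactory receptors, using the figures quoted in the article.
human_receptors = 6_000_000
herding_dog_receptors = 200_000_000
beagle_receptors = 300_000_000

print(herding_dog_receptors // human_receptors)  # a herding dog: ~33x a human
print(beagle_receptors // human_receptors)       # a beagle: 50x a human
```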

At the UFZ, the detection dogs have already proven their abilities in several research projects. "In order to be able to better assess their potential, we wanted to know how detection dogs have previously been used around the world", says Grimm-Seyfarth. Together with UFZ employee Wiebke Harms and Dr Anne Berger from the Leibniz Institute for Zoo and Wildlife Research (IZW) in Berlin, she has evaluated 1220 publications documenting the use of such search dogs in more than 60 countries. "We were particularly interested in which breeds of dogs were used, which species they were supposed to track down, and how well they performed", explains the researcher.

The longest experience with the detection dogs is in New Zealand, where dogs have been tracking threatened birds since around 1890. Since then, the idea has been implemented in many other regions, especially in North America and Europe. The studies analysed focused mainly on finding animals as well as their habitats and tracks. Dogs have been used to find more than 400 different animal species - most commonly mammals from the cat, dog, bear, and marten families. They have also been used to find birds and insects as well as 42 different plant species, 26 fungal species, and 6 bacterial species. These are not always endangered species. The dogs sometimes also sniff out pests such as bark beetles or invasive plants such as knotgrass and ragweed.

"In principle, you can train all dog breeds for such tasks", says Grimm-Seyfarth. "But some of them may require more work than others". Pinschers and Schnauzers, for example, are now more likely to be bred as companion dogs and are therefore less motivated to track down species. And terriers tend to immediately snatch their targets - which is, of course, not desirable.

Pointers and setters, on the other hand, have been specially bred to find and point out game - but not to hunt it. This is why these breeds are often used in research and conservation projects in North America, Great Britain, and Scandinavia in order to detect ground-breeding birds such as ptarmigans and wood grouse. Retrievers and herding dogs also have qualities that make them good at tracking species. They are eager to learn, easy to motivate, enjoy working with people, and generally do not have a strong hunting instinct. That is why Labrador Retrievers, Border Collies, and German Shepherds are among the most popular detection dogs worldwide.

Grimm-Seyfarth's Border Collie Zammy, for example, learned as a puppy how to track down the droppings of otters. This is a valuable contribution to research because the droppings can be genetically analysed to find out which individual they came from, how that individual is related to other conspecifics, and what it has eaten. However, even for experienced experts, these revealing traces are not easy to find. Especially small and dark coloured droppings are easily overlooked. Dogs, on the other hand, sniff out even the most unremarkable droppings without distinction. In an earlier UFZ study, they found four times as many droppings as human investigators alone. And the fact that Zammy is now also looking for crested newts makes his efforts even more rewarding.

According to the overview study, many other teams around the world have had similarly good experiences. In almost 90% of cases, the dogs worked much more effectively than other detection methods. Compared with camera traps, for example, they detected between 3.7 and 4.7 times as many black bears, fishers, and bobcats. They also often reach their target particularly quickly. "They can find a single plant on a football field in a very short time", says Grimm-Seyfarth. They are even able to discover underground parts of plants.

However, there are also cases where detection dogs are not the method of choice. Rhinos, for example, leave their large piles of excrement clearly visible on paths, so humans can easily find them on their own. And animal species that regard feral dogs as enemies are more likely to find (and fight) the detection dogs than to be found.

"However, in most cases where the dogs did not perform so well, poor training is to blame", says Grimm-Seyfarth. She believes that good training of the animal is the most important recipe for success for detection dogs. "If you select the right dog, know enough about the target species, and design the study accordingly, this can be an excellent detection method". She and her colleagues are already planning further applications for the useful detection dogs. A new project that involves tracking down invasive plant species will soon be launched.

Credit: 
Helmholtz Centre for Environmental Research - UFZ

Gold-phosphorus nanosheets selectively catalyze natural gas into greener energy

image: Schematic diagram of the reaction pathway for methane oxidation over Au1/BP nanosheets.

Image: 
LUO Laihao

Advances in hydraulic fracturing technology have enabled the discovery of large reserves of natural gas, which consists primarily of methane. Most of this methane is burned directly, contributing to global warming. Upgrading methane into greener fuels such as methanol through aerobic oxidation is an ideal way to solve the problem while retaining 100% atom economy.

Yet the difficulties lie in activating methane and preventing methanol from over-oxidation. Methane has a stable, non-polar tetrahedral structure with a high C-H bond dissociation energy, so activating it requires substantial energy. Meanwhile, methanol is easily over-oxidized to carbon dioxide during the process. The activation and directional transformation of methane is regarded as the "holy grail" of catalysis.

A recent paper published in Nature Communications by a research team led by Prof. ZENG Jie and Prof. LI Weixue from the Hefei National Laboratory for Physical Sciences at the Microscale marks new progress. They designed and fabricated Au single atoms on black phosphorus (Au1/BP) nanosheets for the selective oxidation of methane into methanol under mild conditions with >99% selectivity.

Au1/BP nanosheets were able to catalyze the methane oxidation reaction with oxygen as the oxidant under irradiation. Mechanistic studies showed that water and O2 were activated on Au1/BP nanosheets under light irradiation to form reactive hydroxyl groups and *OH radicals. The reactive hydroxyl groups enabled mild oxidation of methane into CH3* species, which were then oxidized by *OH radicals into methanol.

Since water is consumed to form hydroxyl groups and produced via reaction of hydroxyl groups with methane, water is completely recycled and thus can also be regarded as a catalyst.

This study provides insight into the activation mechanism of oxygen and methane in methane selective oxidation, and offers a new understanding of the role of water in the reaction process.

Credit: 
University of Science and Technology of China

Saarbruecken chemists develop variant of industrially important synthetic process

image: Prof. Dr. David Scheschkewitz

Image: 
Saarland University/Oliver Dietze

The formation of double bonds between two carbon atoms (C=C) is of central significance in natural organisms. The vast majority of natural substances therefore contain one or more of these double bonds. Compounds with C=C double bonds, the alkenes or olefins, also play a prominent role in the organic chemical industry. A great many chemical processes have therefore been developed over the years to control the formation of C=C bonds.

One such process, olefin metathesis, has received particular attention over the last few decades and the 2005 Nobel Prize for Chemistry was awarded in recognition of its significance.

Despite the many parallels between carbon and the heavier members of the carbon group (Group 14) of the periodic table, olefin metathesis has so far only been of practical significance for compounds containing C=C bonds. This seems somewhat surprising given that double bonds between the heavier elements of the carbon group are considerably weaker than a C=C bond and are thus more easily cleaved.

David Scheschkewitz, Professor of Inorganic and General Chemistry at Saarland University, Lukas Klemmer and Anna-Lena Thömmes from his research group and Volker Huch and Bernd Morgenstern from the X-ray Diffraction Service Centre have developed and characterized a new class of germanium-based heavier alkene analogues whose Ge=Ge bond exhibits just the right degree of stability to participate in synthetically useful metathesis reactions.

The Scheschkewitz group employed the new methodology to synthesize the first long-chain polymers containing double bonds between heavier elements. In the near future, the researchers hope to extend the concept to other elements of the periodic table, which could be of potential use in developing novel materials for applications in the field of organic electronics. 'The underlying principle is simple and could also be applied in organic chemistry,' explains Professor Scheschkewitz.

Potentially, this could also provide a means of carrying out olefin metathesis reactions without the precious-metal catalysts needed in the traditional approach.

Credit: 
Saarland University

Rice plant resists arsenic

image: Rice plant astol1

Image: 
Sheng-Kai Sun / Nature Communications

The agricultural cultivation of rice, a staple food, harbours the risk of arsenic contamination: the toxin can reach the grains following uptake by the roots. In their investigation of over 4,000 variants of rice, a Chinese-German research team under the direction of Prof. Dr Rüdiger Hell from the Centre for Organismal Studies (COS) of Heidelberg University and Prof. Dr Fang-Jie Zhao of Nanjing Agricultural University (China) discovered a plant variant that resists the toxin. Although the plants thrive in arsenic-contaminated fields, the grains contain far less arsenic than those of other rice plants. At the same time, this variant has an elevated content of the trace element selenium.

The researchers explain that especially in agricultural regions in Asia, increasing amounts of the metalloid arsenic get into the groundwater through large-scale fertilisation or wastewater sludge, for example. Because rice is cultivated in submerged fields, the plants absorb a good deal of arsenic through the roots, thus giving the potential carcinogen a pathway into the food chain. According to Prof. Hell, arsenic pollution in some soils in Asia is now so high that it is also causing significant crop losses because the arsenic is poisonous to the plants themselves.

In the course of their research project, the scientists exposed over 4,000 rice variants to water containing arsenic and then observed their growth. Only one of the plants studied proved to be tolerant of the toxic metalloid. What biologically characterises the rice variant called astol1 is an amino acid exchange in a single protein. "This protein is part of a sensor complex and controls the formation of the amino acid cysteine, which is an important component in the synthesis of phytochelatins. Plants form these detoxifying substances in response to toxic metals and thus neutralise them," explains Prof. Hell, who together with his research group at the COS is studying the function of this sensor complex. The neutralised arsenic is stored in the roots of the plant before it can reach the edible rice grains and endanger humans.

In the field study, astol1 rice grains absorbed one third less arsenic than conventional rice grains that were also exposed to arsenic-contaminated water. The researchers further discovered a 75 percent higher content of the essential trace element selenium, which is involved in the production of thyroid hormones in humans. As for yield, astol1 is just as good as the standard high-yield rice variants, making it especially suitable for agricultural use.

"In future, rice plants like astol1 could be used in arsenic-contaminated regions to feed the population as well as help fight diet-related selenium deficiency," states Dr Sheng-Kai Sun with optimism. The junior researcher was instrumental in discovering the rice variant during the course of his PhD work at Nanjing Agricultural University. Thanks to a scholarship from the Alexander von Humboldt Foundation, he has been working since last year at the Centre for Organismal Studies in the groups of Prof. Hell and Dr Markus Wirtz to investigate the sensor complex causing the astol1 phenotype.

Credit: 
Heidelberg University

Astrophysicist's 2004 theory confirmed: Why the Sun's composition varies

image: The solar corona viewed in white light during the total solar eclipse on Aug. 21, 2017 from Mitchell, Oregon. The moon blocks out the central part of the Sun, allowing the tenuous outer regions to be seen in full detail. The image is courtesy of Benjamin Boe and first published in "CME-induced Thermodynamic Changes in the Corona as Inferred from Fe XI and Fe XIV Emission Observations during the 2017 August 21 Total Solar Eclipse", Boe, Habbal, Druckmüller, Ding, Hodérova, & Štarha, Astrophysical Journal, 888, 100, (Jan. 10, 2020).

Image: 
American Astronomical Society (AAS)

WASHINGTON -- About 17 years ago, J. Martin Laming, an astrophysicist at the U.S. Naval Research Laboratory, theorized why the chemical composition of the Sun's tenuous outermost layer differs from that lower down. His theory has recently been validated by combined observations of the Sun's magnetic waves from the Earth and from space.

His most recent scientific journal article describes how these magnetic waves modify chemical composition in a process completely new to solar physics or astrophysics, but already known in optical sciences, having been the subject of Nobel Prizes awarded to Steven Chu in 1997 and Arthur Ashkin in 2018.

Laming began exploring these phenomena in the mid-1990s, and first published the theory in 2004.

"It's satisfying to learn that the new observations demonstrate what happens "under the hood" in the theory, and that it actually happens for real on the Sun," he said.

The Sun is made up of many layers. Astronomers call its outermost layer the solar corona, which is only visible from Earth during a total solar eclipse. All solar activity in the corona is driven by the solar magnetic field. This activity consists of solar flares, coronal mass ejections, high-speed solar wind, and solar energetic particles. These various manifestations of solar activity are all propagated or triggered by oscillations or waves on the magnetic field lines.

"The very same waves, when they hit the lower solar regions, cause the change in chemical composition, which we see in the corona as this material moves upwards," Laming said. "In this way, the coronal chemical composition offers a new way to understand waves in the solar atmosphere, and new insights into the origins of solar activity."

Christoph Englert, head of the U.S. Naval Research Laboratory's Space Science Division, points out the benefits for predicting the Sun's weather and how Laming's theory could help predict changes in our ability to communicate on Earth.

"We estimate that the Sun is 91 percent hydrogen but the small fraction accounted for by minor ions like iron, silicon, or magnesium dominates the radiative output in ultraviolet and X-rays from the corona," he said. "If the abundance of these ions is changing, the radiative output changes."

"What happens on the Sun has significant effects on the Earth's upper atmosphere, which is important for communication and radar technologies that rely on over-the-horizon or ground-to-space radio frequency propagation," Englert said.

It also has an impact on objects in orbit. The radiation is absorbed in the Earth's upper atmospheric layers, causing the upper atmosphere to form a plasma layer, the ionosphere, and to expand and contract, influencing the atmospheric drag on satellites and orbital debris.

"The Sun also releases high energy particles," Laming said. "They can cause damage to satellites and other space objects. The high energy particles themselves are microscopic, but it's their speed that causes them to be dangerous to electronics, solar panels, and navigation equipment in space."

Englert said that reliably forecasting solar activity is a long-term goal, which requires us to understand the inner workings of our star. This latest achievement is a step in this direction.

"There is a long history of advances in astronomy seeding technological progress, going all the way back to Galileo," Englert said. "We are excited to carry on this tradition in support of the U.S. Navy."

Credit: 
Naval Research Laboratory

Human instinct can be as useful as algorithms in detecting online 'deception'

Travellers looking to book a hotel should trust their gut instinct when it comes to online reviews rather than relying on computer algorithms to weed out the fake ones, a new study suggests.

Research, led by the University of York in collaboration with Nanyang Technological University, Singapore, shows the challenges of online 'fake' reviews for both users and computer algorithms. It suggests that a greater awareness of the linguistic characteristics of 'fake' reviews can allow online users to spot the 'real' from the 'fake' for themselves.

Dr Snehasish Banerjee, Lecturer in Marketing from the University of York's Management School, said: "Reading and writing online reviews of hotels, restaurants, venues and so on, is a popular activity for online users, but alongside this, 'fake' reviews have also increased.

"Companies can now use computer algorithms to distinguish the 'fake' from the 'real' with a good level of accuracy, but the extent to which company websites use these algorithms is unclear and so some 'fake' reviews slip through the net.

"We wanted to understand whether human analysis was capable of filling this gap and whether more could be done to educate online users on how to approach these reviews."

The researchers tasked 380 people to respond to questions about three hotel reviews - some authentic, others fake - based on their perception of the reviews. The users could rely on the same cues that computer algorithms use to discern 'fake' reviews, including the number of superlatives in the review, the level of detail, whether it was easy to read, and whether it appeared noncommittal.

For those who were already sceptical of online reviews, this was a relatively straightforward task, but most could not identify factors such as 'easy to read' and 'non-committal' the way a computer algorithm could. In the absence of this skill, the participants relied on 'gut instinct'.

Dr Banerjee said: "The outcomes were surprisingly effective. We often assume that the human brain is no match for a computer, but in actual fact there are certain things we can do to train the mind in approaching some aspects of life differently.

"Following this study, we are recommending that people need to curb their instincts on truth and deception bias - the tendency to either approach online content with the assumption that it is all true or all fake respectively - as neither method works in the online environment.

"Online users often fail to detect fake reviews because they do not proactively look for deception cues. There is a need to change this default review reading habit, and if reading habit is practised long enough, they will eventually be able to rely on their gut instinct for fake review detection."

The research also reminds businesses that ethical standards should be upheld to ensure that genuine experiences of their services are reflected online.

Credit: 
University of York

Under-55s found lockdown most challenging, finds survey

A UK-wide survey of 2,252 adults, carried out five weeks into the first lockdown, revealed that 95% of those who took part were following lockdown restrictions. Of that 95%, more than 80% reported finding it challenging. Adjusting to changes in daily routines, and mental and physical health struggles, were the most common challenges faced by participants. Women and adults under the age of 55 were most likely to report experiencing challenges.
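Translating those rounded percentages into approximate headcounts (these numbers are derived from the article's figures, not reported in the study itself):

```python
# Approximate headcounts implied by the survey's rounded percentages.
respondents = 2252
adhering = round(respondents * 0.95)   # ~95% followed the restrictions
challenged = round(adhering * 0.80)    # "more than 80%" of those -> a lower bound

print(adhering)    # about 2139 people
print(challenged)  # at least roughly 1711 people
```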

The research, 'What challenges do UK adults face when adhering to COVID-19-related instructions? Cross-sectional survey in a representative sample', was published in the journal Preventive Medicine today, 2 March. It was conducted by researchers at The Manchester Centre for Health Psychology and the National Institute for Health Research Greater Manchester Patient Safety Translational Research Centre (NIHR GM PSTRC). The Centre is a partnership between The University of Manchester and Salford Royal NHS Foundation Trust.

Dr Chris Keyworth, research fellow in the Behavioural Science sub theme at the GM PSTRC and lead for this study, said: "Our research shows that during the first UK lockdown a high proportion of the people we surveyed did stick to the government rules. Understanding the impact of this on mental health is vital when looking at how to encourage people to do this long term and into the future.

"The first step is to identify the biggest challenges people faced and for which age group and gender."

The challenge most people reported facing was changes to daily routines, followed by the impact on mental health and then issues around physical health.

Dr Keyworth continued: "According to our survey more than 40% of people said they struggled with their mental health during the first lockdown. This is interesting because, in comparison, according to a 2016 study, one in six people reported experiencing a common mental health problem in a given week in England. This goes some way in quantifying the profound effect the restrictions had on the population at the time."

The research highlights the importance of tailoring public health messages to age groups, genders and those with certain characteristics. The study's findings suggest an urgent need to prioritise interventions which address the physical, psychological and social impacts of the pandemic. These may include interventions that help people change habits and support them in establishing new routines when faced with the sudden introduction of strict rules such as lockdown. Greater investment in physical and mental health services that can be delivered remotely should also be a priority, including home-based interventions to promote physical health and better remote access to healthcare professionals.

These interventions should then be targeted at women, people under 55 and those without care commitments, as they were identified as the most likely to struggle during a full lockdown.

Professor Chris Armitage, lead of the GM PSTRC's Behavioural Science sub-theme, said: "The findings show that, by and large, the British public have been adhering to government COVID-19 instructions, but following the government lockdown rules comes at a personal cost. Greater attention needs to be paid to how following the rules can be sustained with targeted support measures."

Dr Keyworth concluded: "Lockdown is undoubtedly challenging and, to ensure any future government restrictions and guidelines are followed, it is important to learn from the behaviour of the population at a time when a high number of people were following the rules. We hope that our research will help to improve patient safety by aiding understanding. It can be used to guide the design of interventions and inform public health messaging both now and into the future."

Credit: 
NIHR Greater Manchester Patient Safety Translational Research Centre

UMD study finds the fuel efficiency of one car may be cancelled by your next car purchase

image: Fueling up an electric car

Image: 
Chuttersnap, public domain

In a recent collaborative study led by the University of Maryland (UMD), researchers found that after springing for an eco-friendly vehicle, consumers tend to buy a less fuel-efficient second car than they otherwise would. While this sounds like an all-too-logical conclusion, the study reports that the purchase of the second vehicle alone erodes 57% of the carbon-emissions benefit of driving the fuel-efficient car. Since about three-quarters of cars are purchased into multi-car households, these findings could have major implications for carbon emissions, and especially for the design of carbon-mitigation programs, like Cash-for-Clunkers and Corporate Average Fuel Economy (CAFE) standards, that don't take into account the decisions of consumers with multiple vehicles.

"What we really wanted to do is see how households are making decisions when they purchase and own more than one vehicle," says James Archsmith, assistant professor in Agricultural & Resource Economics at UMD and lead author on this study. "We have a lot of energy policy out there trying to get people to buy more fuel efficient cars, but we really think of every car as this separate purchase that doesn't rely on any other things going on in the household, and that's just not the case. Other vehicles, priorities, and how those purchases and the intended uses of the vehicles interact are all important to understand how effective our policies are."

Published in The RAND Journal of Economics and funded by the California Air Resources Board, the study saw Archsmith collaborate with Kenneth Gillingham of Yale University, Christopher Knittel of MIT, and David Rapson of the UC Davis Department of Economics to examine vehicle purchasing behaviors using California-based data. The researchers studied California Department of Motor Vehicles records of two-car households' behavior over a period of six years. The data revealed multiple trends that correlate with a decrease in overall fuel economy and efficiency. The pattern has been likened to the "diet soda effect," where people buying diet soda tend to reward themselves by adding something like french fries to their meal. However, Archsmith says it is unlikely that consumers are thinking about their decisions that way.

"It's not likely that people are actually thinking about fuel economy that way, that they can splurge on a less fuel efficient vehicle," explains Archsmith. "It is probably operating through other attributes of the car that are associated with fuel economy. So I have a car that is small and fuel efficient, but it isn't as comfortable and can't fit the kids. Then, I tend to buy a bigger second car. It is more likely utility based in some way, but it is correlated with fuel economy nonetheless."

The study also found that consumers who buy fuel-efficient vehicles tend to drive them more often and over longer distances than they might otherwise, further chipping away at the emissions benefits. These consumer behaviors need to have a place in policy making that is designed to reduce carbon emissions and incentivize more fuel-efficient cars, the researchers say.

"If people buy a more fuel-efficient car, down the road when they replace one of their other cars, the car they buy is going to be less fuel-efficient," says Rapson. "So the effect of fuel economy standards is reduced. There is a strong force that we didn't know about before that is going to erode the benefit of [policy] forcing people to buy more fuel-efficient cars."

Since California's fuel economy standards are models for the rest of the country, they should be adapted to fit actual human behavior. "Unintended consequences like this need to be taken into account when making policy," adds Rapson. "On average, fuel economy standards are putting more fuel-efficient cars in households. That can be good if it reduces gasoline use. But if it causes people to buy a bigger, less fuel-efficient second car to compensate, this unintended effect will erode the intended goals of the policy."

Archsmith and the team hope to expand this research beyond California and into other aspects of driver behavior that play an important role in fuel economy, and ultimately in environmental health and climate change.

"We want to focus more on driving behavior and how multi-car households drive their vehicles and respond to changes in gasoline prices in the future," says Archsmith. "We want to refine them and then extend beyond the state of California, using California as a model in this paper and doing the same kind of analyses in other states as well. The pandemic has also changed the way people drive, and we expect to see more purchases of fuel inefficient cars coming off of lockdowns and lower gas prices."

Credit: 
University of Maryland