Tech

In a field where smaller is better, researchers discover the world's tiniest antibodies

image: From left to right: The antigen binding fragment of an antibody (Fab), an antibody fragment consisting of a single domain antibody (VHH) and a knob domain.

Image: 
PLOS Biology and Alex Macpherson

Researchers at the University of Bath in the UK and biopharma company UCB have found a way to produce miniaturised antibodies, opening the way for a potential new class of treatments for diseases.

Until now, the smallest manmade antibodies (known as monoclonal antibodies, or mAbs) were derived from llamas, alpacas and sharks, but the breakthrough molecules isolated from the immune cells of cows are up to five times smaller. This is thanks to an unusual feature of a bovine antibody known as a knob domain.

The potential medical implications of the new antibodies' diminutive size are huge. For instance, they may bind to sites on pathogens that regular antibody molecules are too large to latch on to, triggering the destruction of invasive microbes. They may also be able to gain access to sites of the body which larger antibodies can't.

Antibodies consist of chains of amino acids (the building blocks of proteins) that join together in a loopy structure. The loops in the chains, known as complementarity determining regions, bind to antigen targets, thereby activating the immune system. Bovine antibodies are loopier than most, and around 10% include a knob domain - a characteristic that is unique among jawed vertebrates. These tightly packed bundles of mini-loops are presented on a protein stalk, far from other loops, and are thought to play a critical role in binding.

The reason knob domains are creating a stir is simple: isolated from the rest of the antibody, these loop extensions can function autonomously, effectively making tiny antibodies that can bind tightly to their targets.

Professor Jean van den Elsen from Bath's Department of Biology and Biochemistry, who was involved in the research, said this finding was surprising. "These knobs are able to bind their target as complete antibodies, so in effect we have been able to miniaturise antibodies for the first time."

These new molecules have been developed as part of a collaborative project between the University of Bath and global biopharma company UCB. They originate from cows immunised by injection with an antigen (particles of a foreign body), which elicits an immune response. Natural antibodies are mined from the cows through sorting and 'deep sequencing' of antibody-producing B cells, and the resulting antibodies are then manufactured in the lab in cultures of human cells.

Regular antibodies are made by the human body as part of its natural response to an infection, whereas monoclonal antibodies are administered to a patient when an infection has taken hold and they are struggling to beat it unaided. Over the past few decades, mAbs have emerged as effective treatments for various medical conditions, including cancers, autoimmune disorders and serious viral infections. It is hoped that miniaturised mAbs will eventually be involved in a range of drug therapies.

The antigen the Bath researchers used to elicit an immune response in cows is complement component C5, a protein that plays a role in many human diseases involving an inflammatory response, including COVID-19.

Not only do these novel monoclonal antibodies have a size advantage over regular mAbs but they are also more robust, meaning they remain stable for longer.

"They have very sturdy, tightly packed structures," said Professor van den Elsen. "So not only do they get to places better than other antibodies but they may also have a far longer shelf life."

Alex Macpherson, a PhD student at Bath and a biochemist at UCB, who is lead author on the paper, added: "Antibody drug discovery is an established field but this research opens up entirely new opportunities. There is huge potential use for these miniaturised antibodies."

Alastair Lawson, immunology Fellow at UCB and UCB lead on the project, said: "This research has led to the discovery of the smallest clinically relevant antibody fragments ever reported and we are very excited about their potential."

Credit: 
University of Bath

Machine learning homes in on catalyst interactions to accelerate materials development

A machine learning technique rapidly rediscovered rules governing catalysts that took humans years of difficult calculations to reveal--and even explained a deviation. The University of Michigan team that developed the technique believes other researchers will be able to use it to make faster progress in designing materials for a variety of purposes.

"This opens a new door, not just in understanding catalysis, but also potentially for extracting knowledge about superconductors, enzymes, thermoelectrics, and photovoltaics," said Bryan Goldsmith, an assistant professor of chemical engineering, who co-led the work with Suljo Linic, a professor of chemical engineering.

The key to all of these materials is how their electrons behave. Researchers would like to use machine learning techniques to develop recipes for the material properties that they want. For superconductors, the electrons must move without resistance through the material. Enzymes and catalysts need to broker exchanges of electrons, enabling new medicines or cutting chemical waste, for instance. Photovoltaics absorb light, and thermoelectrics harvest heat, to produce energetic electrons and thereby generate electricity.

Machine learning algorithms are typically "black boxes," meaning that they take in data and spit out a mathematical function that makes predictions based on that data.

"Many of these models are so complicated that it's very difficult to extract insights from them," said Jacques Esterhuizen, a doctoral student in chemical engineering and first author of the paper in the journal Chem. "That's a problem because we're not only interested in predicting material properties, we also want to understand how the atomic structure and composition map to the material properties."

But a new breed of machine learning algorithm lets researchers see the connections that the algorithm is making, identifying which variables are most important and why. This is critical information for researchers trying to use machine learning to improve material designs, including for catalysts.

A good catalyst is like a chemical matchmaker. It needs to be able to grab onto the reactants, or the atoms and molecules that we want to react, so that they meet. Yet, it must do so loosely enough that the reactants would rather bind with one another than stick with the catalyst.

In this particular case, they looked at metal catalysts that have a layer of a different metal just below the surface, known as a subsurface alloy. That subsurface layer changes how the atoms in the top layer are spaced and how available the electrons are for bonding. By tweaking the spacing, and hence the electron availability, chemical engineers can strengthen or weaken the binding between the catalyst and the reactants.

Esterhuizen started by running quantum mechanical simulations at the National Energy Research Scientific Computing Center. These formed the data set, showing how common subsurface alloy catalysts, including metals such as gold, iridium and platinum, bond with common reactants such as oxygen, hydroxide and chlorine.

The team used the algorithm to look at eight material properties and conditions that might be important to the binding strength of these reactants. It turned out that three mattered most. The first was whether the atoms on the catalyst surface were pulled apart from one another or compressed together by the different metal beneath. The second was how many electrons were in the electron orbital responsible for bonding, the d-orbital in this case. And the third was the size of that d-electron cloud.
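
To make the kind of interpretable, feature-ranking analysis described above more concrete, the sketch below ranks a set of hypothetical surface descriptors (strain, d-band filling, d-band width, plus filler candidates) by permutation importance on a toy dataset. It illustrates the general approach only; it is not the authors' code, data or algorithm, and every feature name and value is a placeholder.

```python
# Hypothetical sketch: rank candidate descriptors of adsorbate binding strength
# by permutation importance. Toy data only -- not the study's dataset or model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

feature_names = ["strain", "d_filling", "d_band_width", "work_function",
                 "coordination", "electronegativity", "atomic_radius",
                 "cohesive_energy"]
X = rng.normal(size=(200, len(feature_names)))

# Toy target: binding energy dominated by the first three descriptors,
# mirroring the three properties the article says mattered most.
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + 0.05 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Print descriptors from most to least important.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>18s}: {score:.3f}")
```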

The resulting predictions for how different alloys bind with different reactants mostly reflected the "d-band" model, which was developed over many years of quantum mechanical calculations and theoretical analysis. However, they also explained a deviation from that model due to strong repulsive interactions, which occur when electron-rich reactants bind on metals with mostly filled electron orbitals.

Credit: 
University of Michigan

Scientists studied nanoparticles embedded in silver-ion-exchanged glasses

image: Researchers have registered the formation of silver nanoparticles in an ion-exchanged glass as a result of infrared laser irradiation.

Image: 
Peter the Great St.Petersburg Polytechnic University

Researchers from Peter the Great St. Petersburg Polytechnic University (SPbPU), in collaboration with colleagues from Alferov University, the Institute of Problems of Mechanical Engineering RAS and the University of Technology of Troyes, have registered the formation of silver nanoparticles in an ion-exchanged glass as a result of infrared laser irradiation. The results of the study were published in the journal Nanomaterials.

The international scientific group studies the growth and properties of metal nanoparticles placed on the surface of multicomponent glasses. Such structures are well suited to surface-enhanced Raman spectroscopy (SERS), a special type of spectroscopy employed for screening, monitoring and analysis of trace amounts of a substance. Substrates carrying nanoparticles made from the glasses studied also have antibacterial, antifungal and antiviral applications, and they are cheap and easy to prepare.

In this study, the group of researchers set out to check the feasibility of using silver nanoparticles (SNPs), formed on the glass surface by infrared nanosecond laser pulses, for Raman spectroscopy. There are different ways of placing silver nanoparticles onto a glass surface, including lithographic techniques, laser ablation, sedimentation of SNPs from solutions, and thermal or reactive reduction of silver ions followed by out-diffusion of neutral silver. The method applied in this study made it possible to 'draw' precise structures consisting of SNPs on the glass surface.

"Silver-to-sodium ion-exchanged glass contains silver ions evenly distributed in the subsurface layer of the material. Under laser irradiation these ions transform into neutral atoms, which cluster together into nanoparticles. When SNPs with a diameter of 20-30 nm are formed, the collective oscillation of electrons in the metal nanoparticles, optically excited in the proper wavelength range, gives rise to surface plasmon resonance. Close to the resonance of the system, a sharp rise of the electric field at optical frequencies takes place. This phenomenon is used for signal enhancement in Raman spectroscopy," explains Andrey Lipovskii, professor at the Higher School of Engineering Physics of SPbPU.

The sensitive elements obtained can be used as substrates for Raman analysis of different kinds of reagents, including the biological ones.

"Notably, the signal is enhanced by a factor of 10^5-10^6. This is a huge gain," adds Prof. Lipovskii.
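
For context, a common textbook approximation (not specific to this study) relates the SERS enhancement factor EF to the fourth power of the local optical field enhancement at the nanoparticle surface:

$$EF \approx \left| \frac{E_{\mathrm{loc}}}{E_0} \right|^4$$

so an enhancement of 10^5-10^6 corresponds to local fields roughly 18 to 32 times stronger than the incident field.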

Such sensing elements have a wide range of applications. There are commercial substrates with silver nanoparticles suitable for Raman scattering, but they are quite fragile, prone to oxidation, and must be stored under special conditions. In multicomponent silver-to-sodium ion-exchanged glasses, the SNPs are protected by a glass layer roughly 20 nm thick. The samples can be carried in a pocket, which makes them suitable for field work: a slight chemical etch to expose the embedded structures is all that is needed before the substrate is ready for use.

This research is a result of many years' collaboration in the study of metal nanoparticles. Now scientists are planning to continue the study.

"We have been working with silver nanoparticles for SERS for a long time. In the next series of experiments, we are going to study the growth of submicrometer-sized silver needles, which could lead to even higher surface enhancement," noted Prof. Lipovskii.

Credit: 
Peter the Great Saint-Petersburg Polytechnic University

Scientists got one step closer to solving a major problem of hydrogen energy

image: Fe-Ni-Mo-B metallic glass, graphical abstract

Image: 
FEFU press office

A team of scientists from Far Eastern Federal University (FEFU) together with their colleagues from Austria, Turkey, Slovakia, Russia (MISIS, MSU), and the UK found a way to hydrogenate thin metallic glass layers at room temperature. This technology can considerably expand the range of cheap, energy-efficient, and high-performance materials and methods that can be used in the field of hydrogen energy. An article about the study was published in the Journal of Power Sources.

The team developed an amorphous nanostructure (FeNi-based metallic glass) that can be used in the field of hydrogen energy to accumulate and store hydrogen, in particular, as a replacement for Li-ion batteries in small-sized systems.

Metallic glass has the potential to replace palladium, an expensive element that is currently used in hydrogen systems. The lack of economically feasible energy storage systems is the main hindrance preventing hydrogen energy from scaling up to the industrial level. With the new development, the team came one step closer to solving this problem.

"Hydrogen is the most common chemical element in the Universe, a source of clean renewable energy that has the potential to replace all types of fuel used today. However, its storage poses a major technological problem. One of the key materials used to store and catalyze hydrogen is palladium. However, it is very expensive and has a low affinity to oxidizing or reducing environments under extreme conditions. These factors prevent hydrogen energy from being used on the industrial level. The problem can be solved with metallic glasses. They are amorphous metals and lack long range atomic order. Compared to crystalline palladium, metallic glasses are much cheaper and more resistant to aggressive environments. Moreover, due to the so-called atomic free volume (i.e. space between atoms), such glasses can 'soak up' hydrogen more effectively than any other materials with crystalline structure," said Yurii Ivanov, an assistant professor of the Department of Computer Systems at the School of Natural Sciences, FEFU.

According to the researcher, metallic glass has enormous potential in the energy industry thanks to its amorphous structure, lack of certain defects that are typical for polycrystalline metals (such as grain boundaries), and high resistance to oxidation and corrosion.

What makes this work unique is that electrochemical methods were used both to hydrogenate the metallic glasses and to study their ability to absorb hydrogen. Standard hydrogenation methods (such as gas adsorption) require high temperatures and pressures, which have a negative effect on the properties of metallic glasses and narrow the range of materials that can be studied. Unlike gas adsorption, electrochemical hydrogenation causes hydrogen to react with the surface of an electrode (made of the FeNi metallic glass) at room temperature, just as in the case of palladium.
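
As a rough illustration of the surface chemistry involved (a generic textbook scheme for an aqueous electrolyte; the release does not specify the exact electrochemical conditions used), hydrogen is first deposited on the electrode surface by a charge-transfer step and then absorbed into the bulk of the glass:

$$\mathrm{H_2O} + e^- \rightarrow \mathrm{H_{ads}} + \mathrm{OH^-}, \qquad \mathrm{H_{ads}} \rightarrow \mathrm{H_{abs}}$$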

The new method can serve as an alternative to the common gas-solid reaction for alloys with low hydrogen capacity or slow hydrogen absorption/release.

The team also suggested a new concept of 'effective volume' that can be used to analyze the efficiency of hydrogen absorption and release by metallic glasses. To do so, the thickness and composition of the glass-hydrogen reaction area are measured using high-resolution electron microscopy and X-ray photoelectron spectroscopy.

In the future, the team plans to develop and optimize new metallic glass compositions for practical energy applications.

Earlier, a team of materials scientists from FEFU, Cambridge (UK), and the Chinese Academy of Sciences developed a method for 'rejuvenating' bulk (3D) metallic glasses, the type most promising for practical use, making them more moldable and more resistant to above-critical loads. The improved metallic glasses can be used in many fields, from plastic electronics to various sensors, transformer cores, medical implants, and protective coatings for satellites.

Credit: 
Far Eastern Federal University

Brain circuitry shaped by competition for space as well as genetics

image: The rat barrel cortex.

Image: 
James et al. (CC BY 4.0)

Complex brain circuits in rodents can organise themselves with genetics playing only a secondary role, according to a new computer modelling study published today in eLife.

The findings help answer a key question about how the brain wires itself during development. They suggest that simple interactions between nerve cells contribute to the development of complex brain circuits, so that a precise genetic blueprint for brain circuitry is unnecessary. This discovery may help scientists better understand disorders that affect brain development and inform new ways to treat conditions that disrupt brain circuits.

The circuits that help rodents process sensory information collected by their whiskers are a great example of the complexity of brain wiring. These circuits are organised into cylindrical clusters or 'whisker barrels' that closely match the pattern of whiskers on the animal's face.

"The brain cells within one whisker barrel become active when its corresponding whisker is touched," explains lead author Sebastian James, Research Associate at the Department of Psychology, University of Sheffield, UK. "This precise mapping between the individual whisker and its brain representation makes the whisker-barrel system ideal for studying brain wiring."

James and his colleagues used computer modelling to determine if this pattern of brain wiring could emerge without a precise genetic blueprint. Their simulations showed that, in the cramped quarters of the developing rodent brain, strong competition for space between nerve fibers originating from different whiskers can cause them to concentrate into whisker-specific clusters. The arrangement of these clusters to form a map of the whiskers is assisted by simple patterns of gene expression in the brain tissue.

The team also tested their model by seeing if it could recreate the results of experiments that track the effects of a rat losing a whisker on its brain development. "Our simulations demonstrated that the model can be used to accurately test how factors inside and outside of the brain can contribute to the development of cortical fields," says co-author Leah Krubitzer, Professor of Psychology at the University of California, Davis, US.

The authors suggest that this and similar computational models could be adapted to study the development of larger, more complex brains, including those of humans.

"Many of the basic mechanisms of development in the rodent barrel cortex are thought to translate to development in the rest of cortex, and may help inform research into various neurodevelopmental disorders and recovery from brain injuries," concludes senior author Stuart Wilson, Lecturer in Cognitive Neuroscience at the University of Sheffield. "As well as reducing the number of animal experiments needed to understand cortical development, exploring the parameters of computational models like ours can offer new insights into how development and evolution interact to shape the brains of mammals, including ourselves."

Credit: 
eLife

Novel Drosophila-based disease model to study human intellectual disability syndrome

image: Researchers of TalTech Neurobiology Laboratory Mari Palgi and Laura Tamberg.

Image: 
TalTech

The researchers from the TalTech molecular neurobiology laboratory headed by professor Tõnis Timmusk used the fruit fly, Drosophila melanogaster to develop a novel disease model for Pitt-Hopkins syndrome (PTHS). Their study was reported in the July issue of Disease Models and Mechanisms.

PTHS is caused by mutations in one of the two copies of the TCF4 gene. PTHS patients suffer from moderate to severe intellectual disability: they typically never learn to speak. They also have severely impaired motor skills, including delayed acquisition of the ability to walk. There are around 500 cases documented around the world, but because of the symptomatic similarity with other intellectual disability syndromes (Angelman, Rett etc.), PTHS may be underdiagnosed.

75% of human genes known to be associated with diseases have corresponding genes in the fruit fly genome, making Drosophila a widely used experimental system for modeling human diseases. The Drosophila counterpart of the TCF4 gene is known as daughterless (da).

Researcher of the molecular neurobiology laboratory Mari Palgi explains: "When we genetically reduced the amount of da gene product specifically in the learning and memory center of the fly brain, the animals exhibited defects of associative memory. Namely, they lost the ability to associate specific odors with food availability. In addition, the locomotor skills of these fruit flies were impaired, as revealed by a test known as the climbing assay. Next we aimed to enhance the activity of da in the mutant flies to restore both their learning and locomotor abilities, i.e. to rescue the defective phenotype."

First author of the article Laura Tamberg says: "We fed the mutant flies two different substances that had been shown to enhance the activity of TCF4 in a cell culture-based assay. We found that these substances were able to enhance the learning ability of the compromised animals. In the climbing assay, the compromised flies regained the ability to climb upwards following administration of one compound. The rescue effect was more pronounced in female flies, probably because they eat more in order to reproduce and had thus consumed more of the substances, which had been delivered with food."

Mari Palgi adds: "These findings suggest that these two substances could potentially be helpful for PTHS patients. One of them, resveratrol, is a food supplement found in a number of foodstuffs, such as red grapes and blueberries, and the other, SAHA, is a drug already in clinical use for the treatment of certain lymphomas."

Besides PTHS, commonly occurring variants of the TCF4 gene have been associated with other psychiatric diseases such as schizophrenia and bipolar disorder. The authors' Drosophila model could be used to study certain aspects of these diseases by measuring changes in endophenotypes such as prepulse inhibition.

In the future, this Drosophila model could be combined with two straightforward behavioral tests - the associative learning and negative geotaxis assays - to screen additional therapeutic candidates for their ability to enhance the activity of TCF4.

Credit: 
Estonian Research Council

Why disordered light-harvesting systems produce ordered outcomes

image: Similarities Emerging from Disorder: Disordered molecular structures of artificial light-harvesting complexes produce well-defined optical properties

Image: 
Ilias Patmanidis and Misha Pchenitchnikov

Scientists typically prefer to work with ordered systems. However, a diverse team of physicists and biophysicists from the University of Groningen found that individual light-harvesting nanotubes with disordered molecular structures still transport light energy in the same way. By combining spectroscopy, molecular dynamics simulations and theoretical physics, they discovered how disorder at the molecular level is effectively averaged out at the microscopic scale. The results were published on 28 September in the Journal of the American Chemical Society.

The double-walled light-harvesting nanotubes self-assemble from molecular building blocks. They are inspired by the multi-walled tubular antenna network of photosynthetic bacteria found in nature. The nanotubes absorb and transport light energy, although it was not entirely clear how. 'The nanotubes have similar sizes but they are all different at the molecular level with the molecules arranged in a disordered way,' explains Maxim Pshenichnikov, Professor of Ultrafast Spectroscopy at the University of Groningen.

Single-molecule

Björn Kriete, a PhD student in Pshenichnikov's group, used spectroscopy to measure how light-harvesting systems, each consisting of a double-wall nanotube composed of a few thousand molecules, behaved. 'We examined around fifty of these systems and found that they had very similar optical properties despite showing significant differences at the molecular level.' Measuring individual light-harvesting systems requires the use of the latest single-molecule spectroscopy techniques. Earlier studies only looked at bulk material that contains millions of these systems.

So, how can disorder at the molecular level be reconciled with individual systems' very ordered responses to light? To answer this question, Pshenichnikov received help from both the Molecular Dynamics group and the Theoretical Physics group at the University of Groningen. Postdoctoral researchers Riccardo Alessandri and Anna Bondarenko were responsible for simulating the nanotube system in solution. 'It was quite a challenge to simulate a system with thousands of molecules, to try to compute the disorder in an efficient way,' Alessandri explains. Overall, the simulation contained around 4.5 million atoms.

Tuning forks

In the end, the simulation revealed a bigger picture that was in agreement with the experimental results obtained by Pshenichnikov, but it also revealed additional molecular detail. This helped Jasper Knoester, Professor of Theoretical Physics, to connect all the dots. He recognized a pattern in the data that is referred to as 'exchange narrowing'. This effect is responsible for averaging out small differences at the molecular level. 'You can compare it to the classical experiment with tuning forks in which a vibration in one fork can transfer to a second fork if it is tuned to roughly the same frequency,' Knoester explains.

The energy that is harvested by the light-sensitive systems is transported in the form of excitons, which are quantum-mechanical wave functions, comparable to vibrations. Each exciton spreads out over 100 to 1,000 molecules. Pshenichnikov: 'These molecules are not ordered, but they are linked through dipole-dipole coupling.' This linkage allows the molecules that make up the nanotubes to vibrate together. Minor differences between them are averaged out, which results in light-harvesting systems that have similar optical properties.
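
A standard textbook estimate (not the authors' full analysis) captures why this averaging works: if an exciton is coherently shared by N molecules whose individual transition energies fluctuate independently with a spread sigma_site, the disorder felt by the exciton shrinks to roughly

$$\sigma_{\mathrm{exciton}} \approx \frac{\sigma_{\mathrm{site}}}{\sqrt{N}}$$

so delocalization over 100 to 1,000 molecules suppresses molecular-level disorder by a factor of about 10 to 30, which is why individual nanotubes end up with nearly identical optical properties.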

Bricklayer

It is now clear how ordered optical behaviour can emerge from a disordered molecular structure. The link between the molecules is vital. Pshenichnikov: 'Think of a poorly trained bricklayer, who just puts bricks together in no particular pattern. If they are cemented to each other well, you still end up with a strong wall.' For the nanotubes, this means that a certain amount of disorder is quite acceptable in these light-harvesting systems. 'I believe that the implications are even wider,' says Pshenichnikov. 'The next step is to investigate how these properties can emerge in systems and use this in the design and creation of new functional materials.'

Simple Science Summary

Scientists at the University of Groningen created a light-harvesting system that is made from double-walled nanotubes that self-assemble from molecular building blocks. One light-harvesting nanotube contains thousands of molecules. And while the tubes look alike and have very similar optical properties, there are big differences at the level of the molecules. So, how can they still behave in the same way? By simulating the interaction of the atoms inside the nanotubes and analysing their behaviour, the scientists managed to solve this riddle. The light energy causes a vibration in the nanotubes, which is spread out over hundreds of molecules. They vibrate as one, because they are linked by connecting forces and are all sensitive to approximately the same frequency. An analogy is the vibration of a tuning fork, which can be transferred to a second fork that has a similar frequency. The results show that functional materials do not have to be highly ordered: as long as the building blocks are firmly linked, they can work in unison.

Credit: 
University of Groningen

Argonne targets lithium-rich materials as key to more sustainable cost-effective batteries

image: Simulations capture migrating chromium atoms within the model, lithium-rich cathode.

Image: 
(Juan Garcia and Hakim Iddir / Argonne National Laboratory.)

Next-generation batteries using lithium-rich materials could be more sustainable and cost-effective, according to a team of researchers with the U.S. Department of Energy’s (DOE) Argonne National Laboratory.

The pivotal discovery, which aims to improve the understanding of advanced materials for transportation and grid storage, was published in the article, “Harbinger of hysteresis in lithium-rich oxides: Anionic activity or defect chemistry of cation migration,” on June 24 in the Journal of Power Sources.

“Lithium-rich oxides offer the possibility of more sustainable and cost-effective options over many current technologies as they can be made predominantly of Earth-abundant elements, with manganese being the most promising candidate,” said Jason R. Croy, a physicist in Argonne’s Chemical Sciences and Engineering (CSE) division.

Lithium-ion batteries rely on the movement of lithium between electrodes (cathode and anode) when charging and discharging. Moving more lithium, initially contained in and supplied by the cathode, amounts to more generated current available to power devices, such as phones, computers and cars. A promising class of cathode materials, known as lithium-rich oxides, includes materials that contain substantially more lithium than currently used oxides. In addition, they are composed of high amounts of Earth-abundant elements, such as manganese.

The complex way that elements arrange themselves at the atomic level in these oxides, including lithium, manganese, titanium, oxygen and fluorine, leads to unique processes when a battery operates. Most notable is the participation of oxygen in helping to generate current, a process that does not occur in traditional lithium-ion batteries, but is one of the main reasons why lithium-rich oxides can utilize larger amounts of lithium. One drawback of these cathodes is the appearance of an energy inefficiency, or “hysteresis,” between charge and discharge. This means, specifically, the amount of energy needed to charge the battery is more than the energy obtained on discharge.
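
A conventional way to express this inefficiency (a generic definition, not a result from the paper) is as the energy gap between the charge and discharge voltage curves over one full cycle:

$$E_{\mathrm{hys}} = \int_0^{Q} V_{\mathrm{charge}}\,dq - \int_0^{Q} V_{\mathrm{discharge}}\,dq > 0$$

where Q is the capacity cycled; the larger this gap, the more energy is lost as heat on every charge-discharge cycle.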

Until now, the research community has generally accepted that the participation of oxygen is responsible for this inefficiency, and efforts to improve these materials have focused on oxygen. While these efforts have led to advances in the design of better materials, hysteresis persists in all such oxides.

To advance our understanding of hysteresis and gain insights on other mechanisms that play an important role, the researchers designed an experiment based on a model, lithium-rich oxide. This material is a model because it contains substantial amounts of lithium and displays a hysteresis between charge and discharge; however, oxygen does not participate in the charge/discharge processes of the cell. Through the design of this system, along with theoretical calculations and measurements of working cells, the team showed that the movement of elements, in this case titanium and chromium, between certain sites within the oxide was the sole cause of hysteresis in this material — not the processes directly involving oxygen. This is important because all lithium-rich materials form similar atomic-level configurations, and regardless of oxygen’s participation in the charge/discharge processes, the movement of other elements within the materials, like what is found in this study, can also occur.

The researchers hope the findings will attract attention to these types of mechanisms and generate new insights, ideas and designs for advancing these materials toward practical use.

The researchers are part of a DOE consortium consisting of five national laboratories working on the development of next-generation cathode oxides, an important component of the applied battery research and development program managed by DOE’s Office of Energy Efficiency and Renewable Energy Vehicle Technologies Office (VTO). DOE and VTO are committed to the development of cobalt-free and other sustainable energy storage materials. Lithium-rich oxides provide an avenue to attain high-energy, cost-effective, cobalt-free oxides.

“Working on this project allowed us to design a modeling experiment to address the specific fundamental question of oxygen redox in a model system,” said Hakim Iddir, a physicist in CSE.

Iddir and Juan C. Garcia, an assistant materials scientist in CSE, performed the computational analysis to verify the design goals and gained insights into the mechanisms that took place during electrochemical reactions. Croy synthesized the designed cathode material, performed electrochemical analyses and participated in synchrotron X-ray studies.

Working at Sector 20 at the Advanced Photon Source (APS), a DOE Office of Science User Facility at Argonne, Mahalingam Balasubramanian, a physicist at the APS, and Croy verified the initial hypothesis concerning the migration tendency of some metal atoms and confirmed the theoretical predictions.

“This work was a true collaboration between theory and experimental groups, and this joint effort was critical to verify our initial hypothesis,” said Balasubramanian.

Steve Trask, an engineering specialist at the Cell Analysis, Modeling and Prototyping (CAMP) Facility, worked with the team to design and build cells that could be probed by X-rays in a working cell.

“The CAMP Facility was critical in the design and fabrication of cells tailored to this study,” said Croy. “The range of X-ray energies used, along with electrode thickness and cell designs that enable practical electrochemical operation, all had to be taken into account.”

The research also used the computer facilities at Argonne’s Laboratory Computing Resource Center (LCRC).

This project was funded by VTO. Sector 20 operations are supported by DOE’s Office of Science and the Canadian Light Source.

About the Advanced Photon Source

The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Credit: 
DOE/Argonne National Laboratory

Lessons from a cooling climate

Usually, talk of carbon sequestration focuses on plants: forests storing carbon in the trunks of massive trees, algae blooming and sinking to the seabed, or perhaps peatlands locking carbon away for tens of thousands of years.

While it's true that plants take up large amounts of carbon from the atmosphere, the rocks themselves mediate a great deal of the carbon cycle over geological timescales. Processes like volcanic eruptions, mountain building and erosion are responsible for moving carbon through Earth's atmosphere, surface and mantle.

In March 2019, a team led by UC Santa Barbara's Francis Macdonald published a study proposing that tectonic activity in the tropics, and subsequent chemical weathering by the abundant rainfall, could account for the majority of carbon capture over million-year timeframes.

Now, Macdonald, doctoral student Eliel Anttila and their collaborators have applied their new model to the emergence of the Southeast Asian archipelago -- comprising New Guinea, Indonesia, Malaysia, the Philippines and other nearby islands -- over the past 15 million years. Using data from the paleo-record, they determined that the islands are a modern hotspot of carbon dioxide consumption. Their results, published in the Proceedings of the National Academy of Sciences, deepen our understanding of past climatic transitions and shed light on our current climate crisis.

The primary means by which carbon is recycled into the planet's interior is through the breakdown of silicate rocks, especially rocks high in calcium and magnesium. Raindrops absorb carbon dioxide from the atmosphere and bring it to the surface. As the droplets patter against the stone, the dissolved carbon dioxide reacts with the rocks, releasing the calcium and magnesium into rivers and the ocean. These ions then react with dissolved carbon in the ocean and form carbonate compounds like calcite, which consolidates on the sea floor, trapping the atmospheric carbon for tens of millions of years or longer.
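
Written as simplified reactions, with wollastonite (CaSiO3) standing in for calcium-bearing silicate rock, the sequence described above is approximately:

$$\mathrm{CaSiO_3} + 2\,\mathrm{CO_2} + \mathrm{H_2O} \rightarrow \mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^-} + \mathrm{SiO_2}$$
$$\mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^-} \rightarrow \mathrm{CaCO_3} + \mathrm{CO_2} + \mathrm{H_2O}$$

The net effect is that one molecule of CO2 ends up buried as carbonate for each calcium ion weathered out of the rock and delivered to the ocean.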

Given the right conditions, and enough time, the deep carbon cycle can lock away enough carbon to plunge Earth into an ice age. "Last year we found that there was a nice correlation between when we make a bunch of mountains in the tropical rain belt and when we have cooling events," said Macdonald, a professor in the Department of Earth Science.

Carbon dioxide levels in the atmosphere spiked in the mid-Miocene climatic maximum, around 15 million years ago. Although there is still some uncertainty, scientists believe that atmospheric CO2 levels were between 500 and 750 parts per million (ppm), compared to pre-industrial levels of around 280 ppm. During the mid-Miocene, warmer conditions stretched across the globe, the Antarctic ice was meager, and the Arctic was completely ice free.

Today we are around 411 ppm, and climbing, Macdonald pointed out.

Around that time, the Eurasian and Australian plates began colliding, creating the Southeast Asian archipelago; at the start of this period, few of the present islands had emerged above sea level. "This is the most recent example of an arc-continent collision in the tropics," Macdonald noted, "and throughout this period we actually have proxy data for the change in CO2 levels and temperatures."

The team was curious how large an effect the emergence of the islands may have had on the climate. Based on their previous hypothesis, the formation of these largely volcanic rock provinces in the tropics should be a major factor in determining CO2 levels in the atmosphere.

They applied geological data of ancient shorelines and lithology to a joint weathering and climate model, which accounted for four major variables: latitude, topography, total area and rock type. In the tropics, a more mountainous region will experience more rain, and have a greater surface area for weathering to occur. Once the surface rocks are weathered, the combination of erosion and uplift exposes fresh rock.
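
The sketch below is a toy illustration (not the authors' joint weathering and climate model) of how those four variables might be folded into a single relative CO2-consumption score; every factor and number in it is a hypothetical placeholder.

```python
# Toy illustration: combine latitude, topography, area and rock type into a
# relative weathering score. Placeholder factors -- not the published model.
def relative_weathering_score(area_km2, latitude_deg, relief_m, rock_type):
    """Return a dimensionless score; larger means more CO2 drawdown."""
    # Tropical latitudes receive more rain, so silicate weathering is faster.
    rain_factor = 1.0 if abs(latitude_deg) < 15 else 0.3
    # Steeper topography keeps exposing fresh rock through uplift and erosion.
    relief_factor = min(relief_m / 1000.0, 3.0)
    # Mafic volcanic rock weathers faster than granite or recycled sediment.
    rock_factor = {"volcanic": 1.0, "granite": 0.3, "sediment": 0.5}[rock_type]
    return area_km2 * rain_factor * relief_factor * rock_factor

# A mountainous volcanic island near the equator versus a temperate granitic
# region of the same size.
print(relative_weathering_score(100_000, 5, 2000, "volcanic"))   # high score
print(relative_weathering_score(100_000, 45, 800, "granite"))    # low score
```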

"What you need to do is just keep removing that soil, keep getting fresh rock there, and keep dissolving it," explained Macdonald. "So having active tectonic topography is key. All of Southeast Asia has active topography, and this is a big reason why it's just so much more effective at breaking rocks down into their constituent ions so they can join into the geochemical cycles."

The team's analysis bore this out. They found that weathering, uplift, and erosion just in the Southeast Asian islands could have accounted for most of the drop in CO2 levels between the mid-Miocene climate maximum and the Pleistocene ice ages, when carbon dioxide was around 200 ppm.

These findings could provide insights on our current climate crisis. "The reason scientists are so interested in understanding the Miocene is because we think of this as perhaps the best natural analogue to what the world may look like at a CO2 level over 500 ppm," said Macdonald. "It was the most recent time where we had significantly less ice on Earth, and we had CO2 levels that are in the range of where we're going in our current anthropogenic experiment."

"People should be worried not necessarily about the amplitude of the increase, but the slope," added Anttila. "That's that real problem right now." Humans have moved a comparable amount of carbon into the atmosphere in just a few generations as it took the Earth to pull out of the atmosphere over millions of years.

"You realize that we are more effective than any geological processes at geoengineering," Macdonald said.

The team is currently developing a model and looking at the rocks themselves to reevaluate previous hypotheses for the initial cooling. By a stroke of good fortune, the original specimens used to develop these hypotheses are from the Monterey Formation, a layer of rock that crops up throughout the Santa Barbara basin. These rocks dominate cliff faces from Santa Barbara to Goleta Pier and from Coal Oil Point to Gaviota.

"We've got this amazing opportunity right here to reconstruct this time period, right in our backyard," said Macdonald.

"These records of going from a warmer climate in the Miocene to the cooler climate of today are recorded right here in the cliffs," he added. "So further tests of the hypotheses -- especially in quarantine times, when we can't travel -- may just involve going out to the beach."

Credit: 
University of California - Santa Barbara

Achieving clean air for all is possible

Air pollution is currently the largest environmental risk factor for human health globally and can be linked to several million premature deaths every year. A new study, however, shows that it is possible to achieve clean air worldwide with fundamental transformations of today's practices in many sectors, supported by strong political will.

There is strong scientific evidence that air pollution causes negative health impacts even below the present World Health Organization (WHO) guideline values, especially for fine particles (PM2.5). In fact, the current WHO guideline value for PM2.5 - which is exceeded by more than a factor of 10 in many parts of the world, especially in developing countries - is often considered unattainable.

The new study, published in Philosophical Transactions A, a journal of the Royal Society, aimed to examine the interplay of key determinants of air pollution (e.g., economic growth, technological development, environmental policy interventions) in the past, and what this means for the future. Building on the insights obtained from the first part of the study, the authors explored whether it would be conceivable that clean air could be achieved worldwide, and what it would take to achieve this. The paper forms part of a special theme issue of the journal, which contains a series of papers by prominent academics, and aims to provide an overview of global air quality and the policies developed to mitigate its effects.

The authors identified key factors that contributed to historic air pollution trends in different world regions, outlined conceivable ranges of their future development, and examined their interplay on global air quality in the next decades. Their efforts resulted in a first integrated perspective on achieving clean air around the world by combining different policy areas beyond air pollution.

"Policy interventions were instrumental in decoupling energy-related air pollution from economic growth in the past, and further interventions will determine future air quality," explains lead author and IIASA Air Quality and Greenhouse Gases Program Director, Markus Amann. "Theoretically, a portfolio of ambitious policy interventions could bring ambient PM2.5 concentrations below the WHO air quality guideline in most parts of the world, except in areas where natural sources such as soil dust, contribute major shares to, or even exceed the current guideline value."

Achieving clean air, which would save millions of premature deaths annually, needs integration over multiple policy domains, including environmental policies focusing on pollution controls, energy and climate policies, policies to transform the agricultural production system, and policies to modify human food consumption patterns. However, the authors emphasize that none of these policy areas alone can deliver clean air, and interventions need to be coordinated across sectors. In addition, such policy interventions would simultaneously deliver a wide range of benefits on other policy priorities, while also making substantial contributions to human development in terms of the Sustainable Development Goals.

"Even if WHO air quality standards are currently exceeded by more than a factor of 10 in many parts of the world, clean air is achievable globally with enhanced political will. The required measures, in addition to their local health benefits, would also contribute to the long-term transformational changes that are required for global sustainable development," Amann concludes.

Credit: 
International Institute for Applied Systems Analysis

Sensational COVID-19 communication erodes confidence in science

ITHACA, N.Y. - Scientists, policymakers and the media should acknowledge inherent uncertainties in epidemiological models projecting the spread of COVID-19 and avoid "catastrophizing" worst-case scenarios, according to new research from Cornell University.

Threats about dire outcomes may mobilize more people to take public health precautions in the short term but invite criticism and backlash if uncertainties in the models' data and assumptions are not transparent and later prove flawed, researchers found.

Among political elites, criticism from Democrats in particular may have the unintended consequence of eroding public trust in the use of models to guide pandemic policies and in science more broadly, their research shows.

"Acknowledging that models are grounded in uncertainty is not only the more accurate way to talk about scientific models, but political leaders and the media can do that without also having the effect of undermining confidence in science," said Sarah Kreps, government professor and co-author of the study.

Kreps and Doug Kriner, government professor, conducted five experiments - surveying more than 6,000 American adults in May and June - to examine how politicians' rhetoric and media framing affected support for using COVID-19 models to guide policies about lockdowns or economic reopenings, and for science generally.

The researchers found that different presentations of scientific uncertainty - acknowledging it, contextualizing it or weaponizing it - can have important implications for public policy preferences and attitudes.

For example, they said, Republican elites have been more likely to attack or "weaponize" uncertainty in epidemiological models. But the survey experiments showed that their criticism, which the public apparently expected, didn't shift confidence in models or in science. Support for COVID-19 science from several Republican governors who split with their party's mainstream also did not affect confidence.

Criticism by Democrats, in contrast, registered as surprising and was influential. When shown a quote by New York Gov. Andrew Cuomo disparaging virus models, survey respondents' support for using models to guide reopening policy dropped by 13% and support for science in general decreased, too.

"It suggests that the onus is on Democrats to be particularly careful with how they communicate about COVID-19 science," Kriner said. "Because of popular expectations about the alignments of the parties on science more broadly and on issues like COVID-19 and climate change, they can inadvertently erode confidence in science even when that isn't their intent."

Another way of ignoring or downplaying uncertainty is to present narratives that sensationalize or "catastrophize" the most alarming projections and potential consequences of inaction. An April article in The Atlantic about Georgia's reopening strategy, for example, referred to the state's "experiment in human sacrifice."

The researchers' experiments showed that type of COVID-19 communication significantly increased public support - by 21% - for using models to guide policy, with gains primarily attributed to people who were less scientifically literate.

Credit: 
Cornell University

Volcanic ash could help reduce CO2 associated with climate change

image: Volcanic debris flow avalanche on Soufrière Hills volcano, Montserrat.

Image: 
Martin Palmer

University of Southampton scientists investigating ways of removing carbon dioxide (CO2) and other greenhouse gases from our atmosphere believe volcanic ash could play an important role.

A team from the University's School of Ocean and Earth Science has modelled the impact of spreading volcanic ash from a ship to an area of ocean floor to help amplify natural processes which lock away CO2 in the seabed. They found the technique has the potential to be cheaper, technologically simpler and less invasive than other techniques to remove harmful gases.

The researchers' findings are published in the journal Anthropocene.

Human-caused climate change is one of the most pressing topics in contemporary science and politics. The impacts of hundreds of years of greenhouse gas emissions are becoming clearer every year, with environmental changes including heatwaves, droughts, wildfires, and other extreme weather events.

"As a result of overwhelming evidence, politicians have begun to take steps towards incorporating emissions reductions into policies, such as in the 2015 Paris Agreement with its long-term goal of ensuring that global average temperatures do not exceed 2°C above pre-industrial levels. However, it is becoming clear that to avoid the worst impacts of climate change, active greenhouse gas removal (GGR) will be required," explains study co-author and University of Southampton Professor of Geochemistry, Martin Palmer.

GGR techniques remove carbon dioxide and other gases from the atmosphere, thereby reducing the greenhouse effect, and in the longer term, slowing climate change. There are numerous potential approaches to GGR, from the simple, such as reforestation, to the complex, such as actively removing CO2 from the atmosphere.

Most volcanoes lie close to the oceans, and every year millions of tonnes of volcanic ash falls into them and settles to the seafloor. Once there, it increases carbon storage in marine sediments and reduces atmospheric CO2 levels. This is important because the oceans are the greatest sink of manmade CO2 on Earth.

"One of the ways oceans lock away CO2 is by storing it in sediments on the seafloor as calcium carbonate and organic carbon. In our work, we discuss how this natural process may be augmented by artificially adding ash to oceans," says Jack Longman, lead-author and former Post-Doctoral Research Assistant at the University of Southampton, who now holds a position at the Institute for Chemistry and Biology of the Marine Environment (ICBM), University of Oldenburg.

The scientists modelled the effect of distributing volcanic ash from a ship to an area of ocean. The results suggest that this method could sequester as much as 2300 tonnes of CO2 per 50,000 tonnes of ash delivered for a cost of $50 per tonne of CO2 sequestered - much cheaper than most other GGR methods. In addition, the approach is simply an augmentation of a naturally occurring process, it does not involve expensive technology and it does not require repurposing valuable agricultural land.
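
Taking the quoted figures at face value, the implied numbers per shipment work out as:

$$\frac{2300\ \mathrm{t\,CO_2}}{50{,}000\ \mathrm{t\ ash}} \approx 0.046\ \mathrm{t\,CO_2\ per\ tonne\ of\ ash}, \qquad 2300 \times \$50 \approx \$115{,}000\ \mathrm{per\ delivery}$$

that is, roughly 46 kg of CO2 locked away per tonne of ash spread, at a total sequestration cost of about $115,000 for a 50,000-tonne delivery.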

The scientists say further research is needed though to test the efficiency of enhanced ash deposition in the oceans and to make sure there are no unforeseen side effects, but initial indications suggest that it could be applied easily and cheaply in many areas of the world.

Credit: 
University of Southampton

Smart cruise control steers drivers toward better decisions

image: "On the freeway, one bad decision propagates other bad decisions. If we can consider what's happening 300 meters in front of us, it can really improve road safety. It reduces congestion and accidents." Michighan Tech engineer Kuilin Zhang studies how features like smart cruise control can improve driver performance.

Image: 
Sarah Bird/Michigan Tech

Vehicle manufacturers offer smart features such as lane and braking assist to aid drivers in hazardous situations when human reflexes may not be fast enough. But most options only provide immediate benefits to a single vehicle.

What if, like a murmuration of starlings, our cars and trucks moved cooperatively on the road in response to each vehicle's environmental sensors, reacting as a group to lessen traffic jams and protect the humans inside?

This question forms the basis of Kuilin Zhang's National Science Foundation CAREER Award research. Zhang, an associate professor of civil and environmental engineering at Michigan Technological University, has published "A distributionally robust stochastic optimization-based model predictive control with distributionally robust chance constraints for cooperative adaptive cruise control under uncertain traffic conditions" in the journal Transportation Research Part B: Methodological.

The paper is coauthored with Shuaidong Zhao, now a senior quantitative analyst at National Grid, where he continues to conduct research on the interdependency between smart grid and electric vehicle transportation systems.

Creating vehicle systems adept at avoiding traffic accidents is an exercise in proving Newton's First Law: An object in motion remains so unless acted on by an external force. Without much warning of what's ahead, car accidents are more likely because drivers don't have enough time to react. So what stops the car? A collision with another car or obstacle -- causing injuries, damage and in the worst case, fatalities.

But cars communicating vehicle-to-vehicle can calculate possible obstacles in the road at increasing distances -- and their synchronous reactions can prevent traffic jams and car accidents.

"On the freeway, one bad decision propagates other bad decisions," Zhang said. "If we can consider what's happening 300 meters in front of us, it can really improve road safety. It reduces congestion and accidents."

Zhang's research asks how vehicles connect to other vehicles, how those vehicles make decisions together based on data from the driving environment and how to integrate disparate observations into a network.

Zhang and Zhao created a data-driven, optimization-based control model for a "platoon" of automated vehicles driving cooperatively under uncertain traffic conditions. Their model, based on the concept of forecasting the forecasts of others, uses streaming data from the modeled vehicles to predict the driving states (accelerating, decelerating or stopped) of preceding platoon vehicles. The predictions are integrated into real-time, machine-learning controllers that provide onboard sensed data. For these automated vehicles, data from controllers across the platoon become resources for cooperative decision-making.

The model indicates controllers could help vehicles maintain constant time gaps between themselves to reduce congestion and traffic accidents and could also conserve energy by reducing the need to accelerate and decelerate.
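To make the idea of a constant-time-gap platoon controller concrete, here is a minimal sketch. It is not the authors' model - the published approach is a distributionally robust stochastic model predictive control - and the gains, time gap and braking scenario below are assumed values chosen purely for illustration.

```python
import numpy as np

# Minimal constant-time-gap following law for a platoon (illustrative sketch only;
# the published model uses distributionally robust stochastic MPC, not this simple rule).
TIME_GAP = 1.5      # desired time gap to the preceding vehicle, seconds (assumed)
K_GAP = 0.45        # feedback gain on spacing error (assumed)
K_SPEED = 0.25      # feedback gain on relative speed (assumed)
STANDSTILL = 3.0    # desired spacing at zero speed, metres (assumed)

def follower_accel(gap, own_speed, lead_speed):
    """Acceleration command that drives the gap toward STANDSTILL + TIME_GAP * own_speed."""
    desired_gap = STANDSTILL + TIME_GAP * own_speed
    spacing_error = gap - desired_gap
    speed_error = lead_speed - own_speed
    return K_GAP * spacing_error + K_SPEED * speed_error

# Simulate a three-vehicle platoon reacting to the leader braking.
dt, steps = 0.1, 300
speeds = np.array([25.0, 25.0, 25.0])   # m/s: leader, follower 1, follower 2
gaps = np.array([40.0, 40.0])           # gaps[i] = distance from follower i+1 to vehicle i

for t in range(steps):
    lead_accel = -2.0 if t < 50 else 0.0   # leader brakes for the first 5 seconds
    accels = [lead_accel] + [
        follower_accel(gaps[i], speeds[i + 1], speeds[i]) for i in range(2)
    ]
    speeds = np.maximum(speeds + dt * np.array(accels), 0.0)
    gaps = gaps + dt * (speeds[:-1] - speeds[1:])

print("final speeds (m/s):", np.round(speeds, 1))
print("final gaps (m):", np.round(gaps, 1))
```

In this toy setup the followers settle into gaps proportional to their speed rather than copying the leader's braking exactly, which is the behaviour the press release describes: smoother speed changes, constant time gaps and less stop-and-go.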

The next phase of Zhang's CAREER Award-supported research is to test the model's simulations using actual connected, autonomous vehicles. Among the locations well-suited to this kind of testing is Michigan Tech's Keweenaw Research Center, a proving ground for autonomous vehicles, with expertise in unpredictable environments.

Ground truthing the model will enable data-driven, predictive controllers to consider all kinds of hazards vehicles might encounter while driving and create a safer, more certain future for everyone sharing the road.

Credit: 
Michigan Technological University

Untapped potential exists for blending hydropower, floating PV

Hybrid systems of floating solar panels and hydropower plants may hold the technical potential to produce a significant portion of the electricity generated annually across the globe, according to an analysis by researchers at the U.S. Department of Energy's National Renewable Energy Laboratory (NREL).

The researchers estimate that adding floating solar panels to bodies of water that are already home to hydropower stations could provide as much as 7.6 terawatts of potential power from the solar PV systems alone, or about 10,600 terawatt-hours of potential annual generation. Those figures do not include the amount generated from hydropower.

For comparison, global final electricity consumption was just over 22,300 terawatt-hours in 2018, the most recent year for which statistics are available, according to the International Energy Agency.
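Putting the quoted figures side by side gives a sense of scale (illustrative arithmetic only; the capacity factor that falls out is implied by the numbers above, not stated by NREL):

```python
# Put the quoted figures side by side (illustrative arithmetic only).
pv_capacity_tw = 7.6            # potential floating-PV capacity, terawatts
pv_generation_twh = 10_600      # potential annual PV generation, TWh
global_demand_twh = 22_300      # global final electricity consumption in 2018, TWh

hours_per_year = 8_760
capacity_factor = pv_generation_twh / (pv_capacity_tw * hours_per_year)
share_of_demand = pv_generation_twh / global_demand_twh

print(f"Implied average capacity factor: {capacity_factor:.0%}")   # roughly 16%
print(f"Share of 2018 global consumption: {share_of_demand:.0%}")  # roughly 48%
```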

"This is really optimistic," said Nathan Lee, a researcher with NREL's Integrated Decision Support group and lead author of a new paper published in the journal Renewable Energy. "This does not represent what could be economically feasible or what the markets could actually support. Rather, it is an upper-bound estimate of feasible resources that considers waterbody constraints and generation system performance."

The article, "Hybrid floating solar photovoltaics-hydropower systems: Benefits and global assessment of technical potential," was co-authored by NREL colleagues Ursula Grunwald, Evan Rosenlieb, Heather Mirletz, Alexandra Aznar, Robert Spencer, and Sadie Cox.

Floating photovoltaics (PV) remain a nascent technology in the United States, but their use has caught on overseas where space for ground-mounted systems is less available. Previous NREL work estimated that installing floating solar panels on man-made U.S. reservoirs could generate about 10 percent of the nation's annual electricity production.

So far, only one small hybrid floating solar/hydropower system has been installed, in Portugal.

NREL estimates that 379,068 freshwater hydropower reservoirs across the planet could host floating PV co-located with existing hydropower facilities. Additional siting data would be needed before any implementation, because some reservoirs may be dry during parts of the year or may otherwise be unsuitable for hosting floating PV.

Coupling floating PV with hydropower offers several potential benefits. For example, a hybrid system would reduce transmission costs by linking to a common substation. Additionally, the two technologies can balance each other: the greatest potential for solar power is during dry seasons, while rainy seasons present the best opportunity for hydropower. Under one scenario, that means operators of a hybrid system could use pumped storage hydropower to store excess solar generation.

Credit: 
DOE/National Renewable Energy Laboratory

Many ventilation systems may increase risk of COVID-19 exposure, study suggests

Ventilation systems in many modern office buildings, which are designed to keep temperatures comfortable and increase energy efficiency, may increase the risk of exposure to the coronavirus, particularly during the coming winter, according to research published in the Journal of Fluid Mechanics.

A team from the University of Cambridge found that widely-used 'mixing ventilation' systems, which are designed to keep conditions uniform in all parts of the room, disperse airborne contaminants evenly throughout the space. These contaminants may include droplets and aerosols, potentially containing viruses.

The research has highlighted the importance of good ventilation and mask-wearing in keeping the contaminant concentration to a minimum level and hence mitigating the risk of transmission of SARS-CoV-2, the virus that causes COVID-19.

The evidence increasingly indicates that the virus is spread primarily through larger droplets and smaller aerosols, which are expelled when we cough, sneeze, laugh, talk or breathe. In addition, the data available so far indicate that indoor transmission is far more common than outdoor transmission, which is likely due to increased exposure times and decreased dispersion rates for droplets and aerosols.

"As winter approaches in the northern hemisphere and people start spending more time inside, understanding the role of ventilation is critical to estimating the risk of contracting the virus and helping slow its spread," said Professor Paul Linden from Cambridge's Department of Applied Mathematics and Theoretical Physics (DAMTP), who led the research.

"While direct monitoring of droplets and aerosols in indoor spaces is difficult, we exhale carbon dioxide that can easily be measured and used as an indicator of the risk of infection. Small respiratory aerosols containing the virus are transported along with the carbon dioxide produced by breathing, and are carried around a room by ventilation flows. Insufficient ventilation can lead to high carbon dioxide concentration, which in turn could increase the risk of exposure to the virus."

The team showed that airflow in rooms is complex and depends on the placement of vents, windows and doors, and on convective flows generated by heat emitted by people and equipment in a building. Other variables, such as people moving or talking, doors opening or closing, or changes in outdoor conditions for naturally ventilated buildings, affect these flows and consequently influence the risk of exposure to the virus.

Ventilation, whether driven by wind or heat generated within the building or by mechanical systems, works in one of two main modes. Mixing ventilation is the most common, where vents are placed to keep the air in a space well mixed so that temperature and contaminant concentrations are kept uniform throughout the space.

The second mode, displacement ventilation, has vents placed at the bottom and the top of a room, creating a cooler lower zone and a warmer upper zone, and warm air is extracted through the top part of the room. As our exhaled breath is also warm, most of it accumulates in the upper zone. Provided the interface between the zones is high enough, contaminated air can be extracted by the ventilation system rather than breathed in by someone else. The study suggests that when designed properly, displacement ventilation could reduce the risk of mixing and cross-contamination of breath, thereby mitigating the risk of exposure.

As climate change has accelerated since the middle of the last century, buildings have been built with energy efficiency in mind. Along with improved construction standards, this has led to buildings that are more airtight and more comfortable for the occupants. In the past few years, however, reducing indoor air pollution levels has become the primary concern for designers of ventilation systems.

"These two concerns are related, but different, and there is tension between them, which has been highlighted during the pandemic," said Dr Rajesh Bhagat, also from DAMTP. "Maximising ventilation, while at the same time keeping temperatures at a comfortable level without excessive energy consumption is a difficult balance to strike."

In light of this, the Cambridge researchers took some of their earlier work on ventilation for efficiency and reinterpreted it for air quality, in order to determine the effects of ventilation on the distribution of airborne contaminants in a space.

"In order to model how the coronavirus or similar viruses spread indoors, you need to know where people's breath goes when they exhale, and how that changes depending on ventilation," said Linden. "Using these data, we can estimate the risk of catching the virus while indoors."

The researchers explored a range of different modes of exhalation: nasal breathing, speaking and laughing, each both with and without a mask. By imaging the heat associated with the exhaled breath, they could see how it moves through the space in each case. If the person was moving around the room, the distribution of exhaled breath was markedly different as it became captured in their wake.

"You can see the change in temperature and density when someone breathes out warm air - it refracts the light and you can measure it," said Bhagat. "When sitting still, humans give off heat, and since hot air rises, when you exhale, the breath rises and accumulates near the ceiling."

Their results show that room flows are turbulent and can change dramatically depending on the movement of the occupants, the type of ventilation, the opening and closing of doors and, for naturally ventilated spaces, changes in outdoor conditions.

The researchers found that masks are effective at reducing the spread of exhaled breath, and therefore droplets.

"One thing we could clearly see is that one of the ways that masks work is by stopping the breath's momentum," said Linden. "While pretty much all masks will have a certain amount of leakage through the top and sides, it doesn't matter that much, because slowing the momentum of any exhaled contaminants reduces the chance of any direct exchange of aerosols and droplets as the breath remains in the body's thermal plume and is carried upwards towards the ceiling. Additionally, masks stop larger droplets, and a three-layered mask decreases the amount of those contaminants that are recirculated through the room by ventilation."

The researchers found that laughing, in particular, creates a large disturbance, suggesting that if an infected person without a mask was laughing indoors, it would greatly increase the risk of transmission.

"Keep windows open and wear a mask appears to be the best advice," said Linden. "Clearly that's less of a problem in the summer months, but it's a cause for concern in the winter months."

The team are now working with the Department for Transport looking at the impacts of ventilation on aerosol transport in trains and with the Department for Education to assess risks in schools this coming winter.

Credit: 
University of Cambridge