Tech

How to make lithium-ion batteries invincible

image: Jingyang Wang holds up a ceramic pellet sample prepared for the DRX research program co-led by Gerbrand Ceder and Guoying Chen at Berkeley Lab.

Image: 
Marilyn Sargent/Berkeley Lab

In our future electrified world, the demand for battery storage is projected to be enormous, reaching 2 to 10 terawatt-hours (TWh) of annual battery production by 2030, up from less than 0.5 TWh today. However, concerns are growing as to whether key raw materials will be adequate to meet this future demand. The lithium-ion battery - the dominant technology for the foreseeable future - has a component made of cobalt and nickel, and those two metals face severe supply constraints on the global market.

Now, after several years of research led by Lawrence Berkeley National Laboratory (Berkeley Lab), scientists have made significant progress in developing battery cathodes using a new class of materials that provide batteries with the same if not higher energy density than conventional lithium-ion batteries but can be made of inexpensive and abundant metals. Known as DRX, which stands for disordered rocksalts with excess lithium, this novel family of materials was invented less than 10 years ago and allows cathodes to be made without nickel or cobalt.

"The classic lithium-ion battery has served us well, but as we consider future demands for energy storage, its reliance on certain critical minerals exposes us not only to supply-chain risks, but also environmental and social issues," said Ravi Prasher, Berkeley Lab's Associate Lab Director for Energy Technologies. "With DRX materials, this offers lithium batteries the potential to be the foundation for sustainable battery technologies for the future."

The cathode is one of the two electrodes in a battery and accounts for more than one-third of the cost of a battery. Currently the cathode in lithium-ion batteries uses a class of materials known as NMC, with nickel, manganese, and cobalt as the key ingredients.

"I've done cathode research for over 20 years, looking for new materials, and DRX is the best new material I've ever seen by far," said Berkeley Lab battery scientist Gerbrand Ceder, who is co-leading the research. "With the current NMC class, which is restricted to just nickel, cobalt, and an inactive component made of manganese, the classic lithium-ion battery is at the end of its performance curve unless you transfer to new cathode materials, and that's what the DRX program offers. DRX materials have enormous compositional flexibility - and this is very powerful because not only can you use all kinds of abundant metals in a DRX cathode, but you can also use any type of metal to fix any problem that might come up during the early stages of designing new batteries. That's why we're so excited."

Cobalt and nickel supply-chain risks

The U.S. Department of Energy (DOE) has made it a priority to find ways to reduce or eliminate the use of cobalt in batteries. "The battery industry is facing an enormous resource crunch," said Ceder. "Even at 2 TWh, the lower range of global demand projections, that would consume almost all of today's nickel production, and with cobalt we're not even close. Cobalt production today is only about 150 kilotons, and 2 TWh of battery power would require 2,000 kilotons of nickel and cobalt in some combination."

What's more, over two-thirds of the world's nickel production is currently used to make stainless steel. And more than half of the world's production of cobalt comes from the Democratic Republic of Congo, with Russia, Australia, the Philippines, and Cuba rounding out the top five producers of cobalt.

In contrast, DRX cathodes can use just about any metal in place of nickel and cobalt. Scientists at Berkeley Lab have focused on using manganese and titanium, which are both more abundant and lower cost than nickel and cobalt.

"Manganese oxide and titanium oxide cost less than $1 per kilogram whereas cobalt costs about $45 per kilogram and nickel about $18," said Ceder. "With DRX you have the potential to make very inexpensive energy storage. At that point lithium-ion becomes unbeatable and can be used everywhere - for vehicles, the grid - and we can truly make energy storage abundant and inexpensive."

Ordered vs. disordered

Ceder and his team developed DRX materials in 2014. In batteries, the number and speed of lithium ions able to travel into the cathode translates into how much energy and power the battery has. In conventional cathodes, lithium ions travel through the cathode material along well-defined pathways and arrange themselves between the transition metal atoms (usually cobalt and nickel) in neat, orderly layers.

What Ceder's group discovered was that a cathode with a disordered atomic structure could hold more lithium - which means more energy - while allowing for a wider range of elements to serve as the transition metal. They also learned that within that chaos, lithium ions can easily hop around.

In 2018, the Vehicle Technologies Office in DOE's Office of Energy Efficiency and Renewable Energy provided funding for Berkeley Lab to take a "deep dive" into DRX materials. In collaboration with scientists at Oak Ridge National Laboratory, Pacific Northwest National Laboratory, and UC Santa Barbara, Berkeley Lab teams led by Ceder and Guoying Chen have made tremendous progress in optimizing DRX cathodes in lithium-ion batteries.

For example, the charge rate - or how fast the battery can charge - of these materials was initially very low, and their stability was also poor. The research team has found ways to address both of these issues through modeling and experimentation. Studies on using fluorination to improve stability have been published in Advanced Functional Materials and Advanced Energy Materials; research on how to enable a high charging rate was recently published in Nature Energy.

Since DRX can be made with many different elements, the researchers have also been working out which element would be best to use - one that hits the sweet spot of being abundant and inexpensive while providing good performance. "DRX has now been synthesized with almost the whole periodic table," Ceder said.

"This is science at its best - fundamental discoveries that will serve as the bedrock of systems in future homes, vehicles, and grids," said Noel Bakhtian, director of Berkeley Lab's Energy Storage Center. "What has made Berkeley Lab so successful in battery innovation for decades now is our combination of breadth and depth of expertise - from fundamental discovery to characterization, synthesis, and manufacturing, as well as energy markets and policy research. Collaboration is key - we partner with industry and beyond to solve real-world problems, which in turn helps galvanize the world-leading science we do at the Lab."

Fast progress

New battery materials have traditionally taken 15 to 20 years to commercialize; Ceder believes progress on DRX materials can be accelerated with a larger team. "We've made great progress in the last three years with the deep dive," Ceder said. "We've come to the conclusion that we're ready for a bigger team, so we can involve people with a more diverse set of skills to really refine this."

An expanded research team could move quickly to address the remaining issues, including improving the cycle life (or the number of times the battery can be recharged and discharged over its lifetime) and optimizing the electrolyte, the chemical medium that allows the flow of electrical charge between the cathode and anode. Since DRX was developed in Ceder's lab, groups in Europe and Japan have also launched large DRX research programs.

"Advances in battery technologies and energy storage will require continued breakthroughs in the fundamental science of materials," said Jeff Neaton, Berkeley Lab's Associate Lab Director for Energy Sciences. "Berkeley Lab's expertise, unique facilities, and capabilities in advanced imaging, computation, and synthesis allow us to study materials at the scale of atoms and electrons. We are well poised to accelerate the development of promising materials like DRX for clean energy."

Credit: 
DOE/Lawrence Berkeley National Laboratory

East Antarctic summer cooling trends caused by tropical rainfall clusters

image: (Left) East Antarctic warming associated with an anomalous high pressure (H) excited by the MJO rainfall events in the Indian Ocean. (Right) East Antarctic cooling associated with an anomalous low pressure (L) caused by the MJO rainfall events in the western tropical Pacific. The blue (red) line indicates anomalous atmospheric low (high) pressure at sea level.

Image: 
Zhen Fu

Our planet is warming due to anthropogenic greenhouse gas emissions; but the warming differs from region to region, and it can also vary seasonally. Over the last four decades scientists have observed a persistent austral summer cooling on the eastern side of Antarctica. This puzzling feature has received world-wide attention, because it is not far away from one of the well-known global warming hotspots - the Antarctic Peninsula.

A new study published in the journal Science Advances by a team of scientists from the IBS Center for Climate Physics at Pusan National University in South Korea, Nanjing University of Information Science and Technology, NOAA Geophysical Fluid Dynamics Laboratory, University Corporation for Atmospheric Research, Ewha Womans University, and National Taiwan University uncovers a new mechanism that can explain the regional warming/cooling patchwork over Antarctica. At the heart of the mechanism are clusters of rainfall events in the western tropical Pacific, which release massive amounts of heat into the atmosphere through the condensation of water vapor. Warm air rises over the organized rainfall clusters and sinks farther away. This pressure difference creates winds, which are further influenced by the Earth's rotation. The interplay of these factors generates a large-scale atmospheric pressure wave that travels from west to east along the equator at a speed of several hundred kilometers per day and drags the initial rainfall clusters along with it. This propagating atmospheric wave is known as the Madden-Julian Oscillation (MJO), named after Roland Madden and Paul Julian, who discovered the phenomenon in 1971. The characteristic atmospheric pressure, convection and wind anomalies, which fluctuate on timescales of 20-70 days, can extend into the extratropics, reaching even Antarctica.

The international research team arrived at their conclusions by analyzing observational datasets and specially designed supercomputer climate model simulations. "Our analysis provides clear evidence that tropical weather systems associated with the Madden-Julian Oscillation can directly impact surface temperatures over East Antarctica," says Prof. Pang-Chi Hsu from Nanjing University of Information Science and Technology, who co-led the study.

More specifically, as the MJO rainfall clusters move into the western Pacific towards the Solomon Islands, the corresponding global atmospheric wave tends to cool East Antarctica three to eleven days later (Image, right panel). In contrast, when the MJO-related rainfall occurs in the Indian Ocean, East Antarctica shows pronounced warming (Image, left panel).

"During recent decades, MJO rainfall and pressure changes preferably occurred over the western tropical Pacific but decreased over the Indian Ocean. This situation has favored cooling of East Antarctica during austral summer.", says Prof. June-Yi Lee from the IBS Center for Climate Physics and Pusan National University, and co-leader of the study.

The research team estimated that between 20% and 40% of the observed summer cooling trend in East Antarctica from 1979 to 2014 can be attributed to long-term changes in the character and longitudinal core location of the MJO. Other contributing factors include the ozone hole and the Interdecadal Pacific Oscillation - a slowly varying, weaker companion of the El Niño-Southern Oscillation. The new Science Advances study highlights that climate change, even in remote regions such as Antarctica, can be linked to processes that happen nearly 10,000 km away.

Credit: 
Institute for Basic Science

AI spots healthy stem cells quickly and accurately

image: DeepACT comprises two main modules: identifying human keratinocytes at single-cell resolution from phase-contrast images of cultures through deep learning, and tracking keratinocyte motion in the colony using a state-space model. As human keratinocyte stem cell colonies exhibit a unique motion pattern, DeepACT can distinguish keratinocyte stem cell colonies from non-stem-cell-derived colonies by analyzing the spatial and velocity information of cells. This system can be widely applied to stem cell cultures used in regenerative medicine and provides a platform for developing reliable and noninvasive quality control technology.

Image: 
Department of Stem Cell Biology, TMDU

Tokyo, Japan - Stem cell therapy is at the cutting edge of regenerative medicine, but until now researchers and clinicians have had to painstakingly evaluate stem cell quality by looking at each cell individually under a microscope. Now, researchers from Japan have found a way to speed up this process, using the power of artificial intelligence (AI).

In a study published in February in Stem Cells, researchers from Tokyo Medical and Dental University (TMDU) reported that their AI system, called DeepACT, can identify healthy, productive skin stem cells with the same accuracy that a human can.

Stem cells are able to develop into several different kinds of mature cells, which means they can be used to grow new tissues in cases of injury or disease. Keratinocyte (skin) stem cells are used to treat inherited skin diseases and to grow sheets of skin that are used to repair large burns.

"Keratinocyte stem cells are one of the few types of adult stem cells that grow well in the lab. The healthiest keratinocytes move more quickly than less healthy cells, so they can be identified by the eye using a microscope," explains Takuya Hirose, one of the lead authors of the study. "However, this method is time-consuming, labor-intensive, and error-prone."

To address this, the researchers aimed to develop a system that would identify and track the movement of these stem cells automatically.

"We trained this system through a process called 'deep learning' using a library of sample images," says the co-lead author, Jun'ichi Kotoku. "Then we tested it on a new group of images and found that the results were very accurate compared with manual analysis."

In addition to detecting individual stem cells, the DeepACT system also calculates the 'motion index' of each colony, which indicates how fast the cells at the central region of the colony move compared with those at the marginal region. The colonies with the highest motion index were much more likely than those with a lower motion index to grow well, making them good candidates for generating sheets of new skin for transplantation to burn patients.
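The release does not give DeepACT's exact formula for the motion index, but the idea of comparing central and marginal cell speeds from tracked positions can be sketched as below; the 'central' cutoff and the ratio-style definition are assumptions for illustration only.

```python
import numpy as np

def motion_index(positions, colony_center, central_fraction=0.5):
    """Illustrative 'motion index': mean speed of cells near the colony centre
    relative to the mean speed of cells near the margin.

    positions: array of shape (n_frames, n_cells, 2) of tracked (x, y)
        coordinates, e.g. from single-cell tracking of phase-contrast video.
    colony_center: (x, y) of the colony centroid.
    central_fraction: cells within this fraction of the maximum radius count
        as 'central' (an assumed cutoff, not the published definition).
    """
    # Per-cell mean speed: frame-to-frame displacement averaged over time.
    displacements = np.linalg.norm(np.diff(positions, axis=0), axis=2)
    speeds = displacements.mean(axis=0)

    # Split cells into central vs marginal by mean distance from the centroid.
    radii = np.linalg.norm(positions.mean(axis=0) - np.asarray(colony_center), axis=1)
    cutoff = central_fraction * radii.max()
    central, marginal = speeds[radii <= cutoff], speeds[radii > cutoff]

    return central.mean() / marginal.mean()
```

In this sketch, a higher value of such an index would flag the colonies that, per the study, are more likely to grow well.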

"DeepACT is a powerful new way to perform accurate quality control of human keratinocyte stem cells and will make this process both more reliable and more efficient," states Daisuke Nanba, senior author.

Given that skin transplants can fail if they contain too many unhealthy or unproductive stem cells, being able to quickly and easily identify the most suitable cells would be a considerable clinical advantage. Automated quality control could also be valuable for industrial stem cell manufacturing, to help ensure a stable cell supply and lower production costs.

Credit: 
Tokyo Medical and Dental University

Bioinspired mineralization of calcium carbonate in peptide hydrogel

image: Bioinspired mineralization

Image: 
Kazuki Murai et al., Journal of Asian Ceramic Societies, Taylor & Francis

A team of researchers developed a biomimetic mineralization of calcium carbonate using a multifunctional peptide template. The template can self-supply mineral sources - in this case carbonate ions, the precursor of calcium carbonate - following the mechanism by which living organisms synthesize hard tissues, called biomineralization, and it can also form hydrogels, modeled after the reaction environment of living organisms. Previous studies on mineralization have discussed the formation mechanism of inorganic crystals synthesized on templates with only a single function, such as a system supplying an external mineral source or a hydrogel system.

However, living organisms use their own enzymes to self-supply mineral sources, and they control the orientation, crystal phase, and morphology of inorganic crystals by using 3D assemblies with controlled structures as reaction fields. Elucidating how inorganic crystals form in a mineralization environment closer to the biological one - hierarchical, hydrogel-like 3D assemblies combined with the self-supply of mineral sources - is therefore important for clarifying the true relationship of structural control between organic templates and inorganic materials that is achieved in biomineralization. The research group led by Assistant Professor Kazuki Murai of Shinshu University's Department of Chemistry and Materials, Faculty of Textile Science and Technology, examined the nucleation and crystal growth mechanisms of calcium carbonate under conditions more similar to the biological environment, through the self-supply of mineral sources via enzyme-like activity and the spontaneous formation of hydrogels as a model environment for cells. The group's findings will therefore facilitate understanding of the nucleation and crystal growth of inorganic crystals in biomineralization and of the role of organic templates in crystal control.

Assistant Professor Kazuki Murai states, "The knowledge gained from this and other mineralization studies is the basis for revealing the amazing processes that organisms have acquired through evolution over a vast amount of time. We take our bones and teeth for granted in our daily life, but even they are not yet fully understood. I believe that the efforts of various researchers, including myself, will lead us to the 'solutions' that have been acquired by living organisms over billions of years. I will be happy if my research can be a 'stepping stone' to unexpected inspiration and discovery."

This study clarified three major points: a single peptide molecule can self-supply mineral sources through enzyme-like activity, control the crystal phase and morphology of inorganic materials, and spontaneously form hydrogels. The group was able to investigate the nucleation and crystal growth mechanisms of calcium carbonate using this peptide as a template for mineralization. This research strategy of mimicking the reaction environment of living organisms will be a breakthrough for events that were previously unknown or could not be clarified.

The team of researchers hopes to fully elucidate the formation and growth mechanisms of inorganic crystals, in addition to the structural control factors that operate between organic templates and inorganic materials in biomineralization. However, there are many obstacles to acquiring these findings, including the need for a great deal of research knowledge and far-reaching collaboration among researchers from various academic fields.

The group is currently working on developing inorganic materials that are crucial in the engineering and medical fields using a material synthesis method that is clean and gentle on the environment, as well as on elucidating the nanostructure of the constructed materials, the complexation of organic and inorganic materials, and the correlation between the structure and function of such materials.

Credit: 
Shinshu University

Earth-like biospheres on other planets may be rare

image: An artistic representation of the potentially habitable planet Kepler 422-b (left), compared with Earth (right).

Image: 
Ph03nix1986 / Wikimedia Commons

A new analysis of known exoplanets has revealed that Earth-like conditions on potentially habitable planets may be much rarer than previously thought. The work focuses on the conditions required for oxygen-based photosynthesis to develop on a planet, which would enable complex biospheres of the type found on Earth. The study is published today in Monthly Notices of the Royal Astronomical Society.

Confirmed planets in our own Milky Way galaxy now number in the thousands. However, planets that are both Earth-like and in the habitable zone - the region around a star where the temperature is just right for liquid water to exist on the surface - are much less common.

At the moment, only a handful of such rocky and potentially habitable exoplanets are known. However, the new research indicates that none of these has the theoretical conditions to sustain an Earth-like biosphere by means of 'oxygenic' photosynthesis - the mechanism plants on Earth use to convert light and carbon dioxide into oxygen and nutrients.

Only one of those planets comes close to receiving the stellar radiation necessary to sustain a large biosphere: Kepler-442b, a rocky planet about twice the mass of the Earth, orbiting a moderately hot star around 1,200 light years away.

The study looked in detail at how much energy is received by a planet from its host star, and whether living organisms would be able to efficiently produce nutrients and molecular oxygen, both essential elements for complex life as we know it, via normal oxygenic photosynthesis.

By calculating the amount of photosynthetically active radiation (PAR) that a planet receives from its star, the team discovered that stars around half the temperature of our Sun cannot sustain Earth-like biospheres because they do not provide enough energy in the correct wavelength range. Oxygenic photosynthesis would still be possible, but such planets could not sustain a rich biosphere.
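The study's PAR calculation is more detailed, but its core step, integrating the star's blackbody photon output over the 400-700 nm band and diluting it by the inverse-square law, can be sketched roughly as follows; the stellar temperatures, radii and orbital distances in the example are placeholders, not values from the paper.

```python
import numpy as np

# Physical constants (SI units)
h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23

def par_photon_flux(T_star, R_star, distance, lam_min=400e-9, lam_max=700e-9, n=2000):
    """Photon flux (photons m^-2 s^-1) in the 400-700 nm PAR band at a planet,
    treating the star as a blackbody. A simplified sketch of the approach,
    not the authors' exact calculation."""
    lam = np.linspace(lam_min, lam_max, n)
    dlam = lam[1] - lam[0]
    # Planck spectral radiance B_lambda (W m^-2 m^-1 sr^-1).
    B = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T_star))
    # Convert to photon radiance (divide by photon energy hc/lambda), integrate
    # over wavelength, and multiply by pi for hemispherical surface emission.
    surface_flux = np.sum(B * lam / (h * c)) * dlam * np.pi
    # Dilute by the inverse-square law out to the planet's orbital distance.
    return surface_flux * (R_star / distance) ** 2

# Placeholder values: a Sun-like star at 1 au versus a cool red dwarf at 0.1 au.
au, R_sun = 1.496e11, 6.96e8
print(f"Sun-like star:  {par_photon_flux(5800, R_sun, au):.2e} photons m^-2 s^-1")
print(f"Red dwarf star: {par_photon_flux(3000, 0.2 * R_sun, 0.1 * au):.2e} photons m^-2 s^-1")
```

For the Sun-like case this gives on the order of 10^21 PAR photons per square metre per second at the planet, roughly what Earth receives, while the cool-star case falls far short even at a closer orbit, which is the qualitative point the study makes.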

Planets around even cooler stars, known as red dwarfs, which smoulder at roughly a third of our Sun's temperature, could not receive enough energy to even activate photosynthesis. Stars hotter than our Sun are much brighter and emit up to ten times more radiation than red dwarfs in the range needed for effective photosynthesis, but they generally do not live long enough for complex life to evolve.

"Since red dwarfs are by far the most common type of star in our galaxy, this result indicates that Earth-like conditions on other planets may be much less common than we might hope," comments Prof. Giovanni Covone of the University of Naples, lead author of the study.

He adds: "This study puts strong constraints on the parameter space for complex life, so unfortunately it appears that the "sweet spot" for hosting a rich Earth-like biosphere is not so wide."

Future missions such as the James Webb Space Telescope (JWST), due for launch later this year, will have the sensitivity to look to distant worlds around other stars and shed new light on what it really takes for a planet to host life as we know it.

Credit: 
Royal Astronomical Society

10 keys to integrating health into urban and transport planning

image: Infographic that summarizes the 10 principles -- and corresponding indicators -- to help urban planners incorporate public health into their work.

Image: 
ISGlobal / Catalan Government

As much as 20% of premature mortality can be attributed to poor urban and transport planning. Nevertheless, quantitative indicators to guide the integration of health components into urban design have been lacking. To address this gap, a team from the Barcelona Institute for Global Health (ISGlobal), a centre supported by the "la Caixa" Foundation, has identified 10 principles--and corresponding indicators--to help urban planners incorporate public health into their work.

The new study, published in the International Journal of Hygiene and Environmental Health, was undertaken at the request of the Directorate-General for Environmental Policies and the Natural Environment, which forms part of the Catalan Department of Climate Action, Food and Rural Agenda, with the aim of guiding urban planners in the design of healthy cities. The researchers conducted a review of the scientific literature and organised a participatory process with relevant stakeholders in Catalonia, including the Catalan Government, the Barcelona Provincial Council, the Barcelona Metropolitan Transport Authority (ATM) and the Catalan Land Institute (INCASÒL).

"This study is unique in that it brings together scientific evidence on urban health, which means that the proposed principles have a solid theoretical basis," commented Mark Nieuwenhuijsen, coordinator of the study and Director of the Urban Planning, Environment and Health Initiative at ISGlobal. "At the same time, having drawn on the experience of relevant stakeholders, we are able to guarantee that the indicators can be applied in practice and will be useful for decision-makers."

The review of the scientific literature was guided by four urban and transport planning objectives that previous studies have associated with favourable health and well-being outcomes: 1) development of a compact city with mixed land use and high street connectivity; 2) reduction of private motorised transport; 3) promotion of active and public transport (walking and cycling); and 4) development of green and public open space.

10 Principles and Indicators

The new paper summarises the scientific literature and the participatory process into 10 principles for integrating health into urban planning right from the outset (zoning phase) and provides a checklist for this purpose. For example, for the first principle, which has to do with the distribution of green and public open space, one of the proposed indicators is that at least 25% of the total land area should be dedicated to these types of spaces.

The checklist is designed to be used right from the outset of urban development. It can be used in all sorts of contexts but is especially intended for European cities with more than 50,000 inhabitants. For application in other contexts, the indicators can be adapted to local conditions.

"If implemented, the principles identified in this study should reduce the burden of disease and death associated with urban and transport design and lead to cities that are not only healthy but also liveable, desirable, equitable, sustainable and climate change-resilient, thereby achieving co-benefits," explained ISGlobal researcher Natalie Mueller, lead author of the study. "For example, a shift from private car use to active and public transport and the greening of cities is not only beneficial for health, but also reduces the carbon footprint and helps to mitigate the effects of climate change." She added: "However, it is important that cities are improved consistently and equitably without leaving any neighbourhoods or groups behind."

"Building healthy cities requires a multidisciplinary approach that involves all stakeholders, from urban planners to public health experts," concluded Nieuwenhuijsen.

The 10 principles for designing healthy cities are as follows:

1) Land-use mix

2) Street connectivity

3) Density

4) Motorised transport reductions

5) Walking

6) Cycling

7) Public transport

8) Multi-modality

9) Green and public open space

10) Integration of all planning principles

Credit: 
Barcelona Institute for Global Health (ISGlobal)

New research reveals remarkable resilience of sea life in the aftermath of mass extinctions

image: At the Cretaceous-Paleogene boundary, it was not only the dinosaurs that went extinct. The loss of species in the upper part of the ocean had profound impacts on its diversity and function. The image shows the small, depleted Cretaceous fauna after the extinction.

Image: 
Brian Huber

Pioneering research has shown that marine ecosystems that have been wiped out can start working again, providing important functions for humans, much sooner than they return to peak biodiversity.

The study, led by the University of Bristol and published today in Proceedings of the Royal Society B, paves the way for greater understanding of the impact of climate change on all life forms.

The international research team found plankton were able to recover and resume their core function of regulating carbon dioxide levels in the atmosphere more than twice as fast as they regained full levels of biodiversity.

Senior author Daniela Schmidt, Professor of Palaeobiology at the University of Bristol, said: "These findings are hugely significant, given growing concern around the extinctions of species in response to dramatic environmental shifts. Our study indicates marine systems can accommodate some losses in terms of biodiversity without losing full functionality, which provides hope. However, we still don't know the precise tipping point so the focus should very much remain on preserving this fragile relationship and protecting biodiversity."

While previous research has shown that functionality resumes quicker than biodiversity in algae, this is the first study to corroborate the discovery further up the food chain in zooplankton, which are vital for sea life as part of the food web supporting fish.

The scientists analysed tiny organisms called foraminifera, the size of grains of sand, from the mass extinction known as the Cretaceous-Paleogene (K-Pg), which took place around 66 million years ago and eradicated three-quarters of the Earth's plant and animal species. This is the most catastrophic event in the evolutionary history of modern plankton, as it resulted in the collapse of one of the ocean's primary functions: the 'biological pump', which sucks vast amounts of carbon dioxide out of the atmosphere into the ocean, where it stays buried in sediments for thousands of years. The cycle not only influences nutrient availability for marine life, but also carbon dioxide levels outside the sea and therefore the climate at large.

Lead author Dr Heather Birch, a former researcher at the university's School of Earth Sciences and Cabot Institute for the Environment, said: "Our research shows how long - approximately 4 million years - it can take for an ecosystem to fully recover after an extinction event. Given human impact on current ecosystems, this should make us mindful. However, importantly, the link between marine organisms and the marine carbon pump, which affects atmospheric CO2, appears not to be a tight one."

Professor Schmidt added: "The results highlight the importance of linking climate projections with ecosystems models of coastal and open ocean environments to improve our ability to understand and forecast the impact of climate-induced extinctions on marine life and their services to people, such as fishing. Further research is needed to look at what happens and whether the same patterns are evident higher up the food web, for instance with fish."

Credit: 
University of Bristol

Use of additional Metop-C and Fengyun-3 C/D data improves regional weather forecasts

image: PMW radiance observation coverage over the MetCoOp domain for different times of day with current (left) and enhanced (right) PMW radiance observation usage and with operational observation handling settings. Right image includes images of Metop-C (left), Fengyun-3 C (middle), and D (right) satellites, whose instruments provide additional initialization data.

Image: 
Magnus Lindskog

Modern weather forecasts rely heavily on data retrieved from numerical weather prediction models. These models continue to improve and have advanced considerably throughout more than half a century. However, forecast reliability depends on the quality and accuracy of initialization data, or a sample of the current global atmosphere when the model run is started. This process of bringing surface observations, radiosonde data, and satellite imagery together to create a picture of the initial atmospheric state is called data assimilation. Satellite upgrades have significantly improved this process, providing more data than ever before. Several recent studies show that passive microwave (PMW) radiance observations from polar orbiting satellites are critical to input into both global and regional weather prediction models.

However, fully utilizing this information comes with challenges. PMW radiance observation coverage varies throughout a given day. Sometimes, data is delayed, making accurate data assimilation difficult. That said, scientists are working toward solutions to use these vital observations more effectively. A paper recently published in Advances in Atmospheric Sciences shows how researchers improved daily PMW radiance observation coverage using instruments onboard the Metop-C, Fengyun-3 C/D, and several other operational meteorological satellites.

"With these additional observations included in different assimilation cycles, there is a more even distribution of the fraction of the area covered by PMW radiances." said lead author Magnus Lindskog with the Swedish Meteorological and Hydrological Institute.

Results show that almost 80% of the model's domain, or coverage area, is accessible by PMW radiance observations for all assimilation cycles. In particular, for the 0000 UTC model run, a large part of the domain is covered by PMW data alongside additional satellite radiances. However, none of these observations exist in the operational reference version due to the satellites' positions at that specific time of day.

Thus, adding more PMW satellite radiances to evenly distribute data points throughout the day has the potential to improve forecast quality by filling existing data gaps in the applied regional weather prediction system. Likewise, enhancing and increasing the use of PMW radiances positively impacts a model's ability to use and process this data, improving its short-range regional weather forecasts.

Lindskog's study also highlights the next research opportunities within the regional weather prediction system. Satellite scientists should consider improving PMW radiances that are influenced by clouds as well as the effect of different surface weather characteristics at initialization. Finally, further research should also focus on developing and applying more refined data assimilation techniques than the current three-dimensional variational technique. A more efficient process should increase the benefits of enhanced PMW radiance observation data.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Magneto-thermal imaging brings synchrotron capabilities to the lab

ITHACA, N.Y. - Coming soon to a lab tabletop near you: a method of magneto-thermal imaging that offers nanoscale and picosecond resolution previously available only in synchrotron facilities.

This innovation in spatial and temporal resolution will give researchers extraordinary views into the magnetic properties of a range of materials, from metals to insulators, all from the comfort of their labs, potentially boosting the development of magnetic storage devices.

"Magnetic X-ray microscopy is a relatively rare bird," said Greg Fuchs, associate professor of applied and engineering physics, who led the project. "The magnetic microscopies that can do this sort of spatial and temporal resolution are very few and far between. Normally, you have to pick either spatial or temporal. You can't get them both. There's only about four or five places in the world that have that capability. So having the ability to do it on a tabletop is really enabling spin dynamics at nanoscale for research."

His team's paper, "Nanoscale Magnetization and Current Imaging Using Time-Resolved Scanning-Probe Magnetothermal Microscopy," published June 8 in the American Chemical Society's journal Nano Letters. The lead author is postdoctoral researcher Chi Zhang.

The paper is the culmination of a nearly 10-year effort by the Fuchs group to explore magnetic imaging with magneto-thermal microscopy. Instead of blasting a material with light, electrons or X-rays, the researchers use a laser focused onto the scanning probe to apply heat to a microscopic swath of a sample and measure the resulting electrical voltage for local magnetic information.

Fuchs and his team pioneered this approach and over the years have developed an understanding of how temperature gradients evolve in time and space.

"You think about heat as being a very slow, diffusive process," Fuchs said. "But in fact, diffusion on nanometer length scales has picosecond times. And that's a key insight. That is what gives us the time resolution. Light is a wave and diffracts. It doesn't want to live down at these very small length scales. But the heat can."

The group has previously used the technique to image and manipulate antiferromagnetic materials - which are difficult to study because they don't produce a magnetic field - as well as magnetic metals and insulators.

While it is easy enough to focus a laser, the major hurdle has been confining that light and generating enough heat on a nanometer scale to get the process to work. And because some phenomena at that scale occur so quickly, the imaging needs to be equally speedy.

"There's a lot of situations in magnetism where stuff is wiggling, and it's small. And this is basically what you need," Fuchs said.

Now that they have refined the process and successfully achieved a spatial resolution of 100 nanometers and a temporal resolution below 100 picoseconds, the team can explore the real minutiae of magnetism, such as skyrmions, quasi-particles in which the magnetic order is twisted. Understanding these kinds of "spin textures" could lead to new high-speed, high-density magnetic storage and logic technologies.

In addition to magnetism, the technique's dependence on electrical voltage means it can be used to measure current density when the voltage interacts with a material. This is a novel approach, since other imaging techniques measure current by gauging the magnetic field the current produces, not the current itself.

Magneto-thermal microscopy does have limitations. Because samples need to be configured with electrical contacts, the material has to be patterned into a device. As a result, the technique can't be applied to bulk samples. Also, the device and the scanning probe must be scaled together. So if you want to measure a phenomenon at the nanoscale, the sample has to be small.

But those limitations are minor compared with the benefits of a relatively low-cost form of magneto-thermal microscopy in your own lab.

"Right now, people have to go to a public facility, like a synchrotron facility, for doing these types of measurements," Zhang said. "You write a proposal, you get a beam time, and you have maybe a few weeks to work, at best. If you didn't get the result you want, then it's maybe another couple of months. So this will be progress for the field."

Credit: 
Cornell University

2.5 grammes of pure cocoa found to improve visual acuity in daylight

image: Less than three grammes of pure cocoa powder improves daytime visual acuity.

Image: 
Brands & People.

Eating 2.5 grammes of pure natural cocoa powder serves to improve visual acuity in healthy young adults and in daylight conditions, according to research by the Universidad Complutense de Madrid (UCM) and the ICTAN (Institute of Food and Nutrition Science and Technology) of the CSIC.

The study, published in the Journal of Functional Foods, analysed the effects of two dietary polyphenols: cocoa flavanols and red berry anthocyanins.

"Although this was the baseline hypothesis, we did not see any effect either on adaptation to darkness or on visual acuity measured in low light conditions (mesopic vision), either with cocoa or with berries," indicates María Cinta Puell Marín, researcher at the Optometry and Vision Department and Director of the Applied Vision Group at the UCM.

The researchers attributed the positive effects on photopic visual acuity to improved attention or processing of visual information thanks to the flavanols and theobromine, an alkaloid found in cocoa that stimulates the central nervous system, much as caffeine does in coffee.

In order to conduct the study, the volunteers drank a glass of milk with cocoa, with berries, or plain milk on three separate visits, with an intervening washout period (time to eliminate the traces of each foodstuff) between visits. The levels of polyphenols in their urine were measured after three hours.

To measure visual acuity, letter charts were placed four metres from the participants in different lighting conditions, one high (photopic) and one low (mesopic). Adaptation to darkness was assessed with a psychophysical method measuring the dynamics of sensitivity recovery after bleaching of the retinal photopigments.

Before these tests, a series of questionnaires and eye examinations were conducted to demonstrate the absence of any dietary factor or prior pathology which could give rise to any error in the analysis of the results and the conclusions drawn.

"We need to conduct certain further studies as proof of concept to confirm that the effect is real and that the results could be applied to the design of products which could help to improve visual acuity and attention in defined populations," adds Sonia de Pascual-Teresa of the ICTAN-CSIC.

Credit: 
Universidad Complutense de Madrid

Higher selenium and manganese levels during pregnancy may protect babies from future high blood pressure

Children who were exposed to higher levels of trace minerals manganese and selenium during their mothers' pregnancy had a lower risk of high blood pressure in childhood, according to a study led by researchers at the Johns Hopkins Bloomberg School of Public Health.

The researchers analyzed the levels of toxic metals and trace minerals in blood samples drawn from nearly 1,200 women in the Boston area who gave birth between 2002 and 2013. They found that higher levels of selenium or manganese in the mothers' blood were associated with lower blood pressure readings in their children at clinic visits 3 to 15 years later.

The researchers also observed that manganese had a stronger inverse relationship with childhood blood pressure when maternal blood levels of cadmium, a toxic heavy metal, were higher--hinting that manganese lowers blood pressure in part by countering a blood pressure-raising effect of cadmium.

The results appear online June 23 in Environmental Health Perspectives.

"These results suggest that healthy levels of selenium and manganese in mothers' diets during pregnancy may protect their children against developing high blood pressure," says study senior author Noel Mueller, PhD, assistant professor in the Bloomberg School's Department of Epidemiology. "This work highlights the importance of nutrition and environmental exposures in the womb for a child's cardiovascular health and, as we continue research this further, could eventually lead to updated nutritional guidance and environmental regulations aimed at preventing disease."

Hypertension is one of the major modifiable risk factors for other debilitating and deadly diseases including heart disease, stroke, kidney failure, and Alzheimer's disease. It is also very common; the U.S. Centers for Disease Control and Prevention estimates that about half of Americans over the age of 20 have hypertension--defined as systolic blood pressure above 130 mm Hg or diastolic blood pressure above 80 mm Hg--or have been prescribed antihypertensive drugs.

Prior research suggests that the predisposition to hypertension can start early in life, even in the womb, and that protection from that predisposition also can start early. The researchers examined these questions in the study: They compared children's blood pressure readings to levels of toxic metals and trace minerals in their mothers' blood; they measured the toxic metals lead, mercury, and cadmium, which have been linked to hypertension in adults; and they looked at levels of the trace minerals manganese and selenium, which have been linked to lower blood pressure.

The dataset used for the analysis covered 1,194 mother-child pairs from a study known as the Boston Birth Cohort. Blood pressure readings in the children were taken at ages ranging from 3-15 years. Most of the mothers were Black (61 percent) or Hispanic (20 percent).

Although a preponderance of earlier evidence linked lead, mercury, and cadmium to high blood pressure and heart disease in adults, the researchers did not find a link between these toxic metals and childhood blood pressure in this study. They did, however, observe a link between the mothers' levels of selenium and lower blood pressure in their offspring during childhood. For every doubling of maternal selenium levels, children's systolic blood pressure was found on average to be 6.23 points lower. Manganese showed a similar albeit weaker relationship to blood pressure: a doubling of exposure was associated with systolic blood pressure that was 2.62 points lower on average.
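The "per doubling" phrasing corresponds to the usual regression on a log2-transformed exposure; written out with the study's estimates, it reads as below. The model form is the standard one for such analyses and is assumed here rather than copied from the paper's methods.

```latex
% Sketch of the standard log2-exposure regression implied by the "per doubling"
% result (an assumed model form, not taken verbatim from the paper):
\mathrm{SBP}_{\text{child}} \;=\; \beta_0 \;+\; \beta_1 \log_{2}(\mathrm{Se}_{\text{maternal}}) \;+\; \text{covariates},
\qquad \hat{\beta}_1 \approx -6.23\ \mathrm{mm\,Hg}
% Doubling maternal Se increases log2(Se) by exactly 1, so predicted systolic
% blood pressure is about 6.23 mm Hg lower; the analogous manganese estimate
% is about -2.62 mm Hg.
```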

Although cadmium on its own was not linked to childhood blood pressure, the researchers found that when maternal blood levels of cadmium were higher, the inverse relationship between manganese and childhood blood pressure was significantly stronger. That finding hints that manganese can specifically protect against the hypertension-promoting effect of cadmium, and may even mask cadmium's hypertension-promoting effect in normal populations.

"People often assume that exposures to heavy metals such as cadmium occur only in occupational settings, but in fact these metals are all around us--for example, cadmium is found in ordinary cigarette smoke," says study first author Mingyu Zhang, a PhD candidate in Mueller's research group.

Underscoring the apparent cadmium link, the researchers observed that manganese was associated much more strongly with lower blood pressure in children whose mothers had smoked during pregnancy.

Manganese and selenium have antioxidant properties and are found in a variety of foods including nuts and grains, leafy vegetables, fish and shellfish.

The researchers will aim to replicate their findings in studies based on other birth cohorts. Johns Hopkins maintains a registry of birth cohort datasets under its Environmental influences on Child Health Outcomes (ECHO) Program.

Credit: 
Johns Hopkins Bloomberg School of Public Health

Being Anglo-Saxon was a matter of language and culture, not genetics

image: The famous Anglo-Saxon Sutton Hoo helmet from about 625 CE, part of the British Museum collection.

Image: 
Photo: Elissa Blake/University of Sydney

A new study from archaeologists at the University of Sydney and Simon Fraser University in Vancouver has provided important new evidence to answer the question "Who exactly were the Anglo-Saxons?"

New findings based on studying skeletal remains clearly indicate that the Anglo-Saxons were a melting pot of people from both migrant and local cultural groups, not one homogenous group from Western Europe.

Professor Keith Dobney at the University of Sydney said the team's results indicate that "the Anglo-Saxon kingdoms of early Medieval Britain were strikingly similar to contemporary Britain - full of people of different ancestries sharing a common language and culture".

The Anglo-Saxon (or early medieval) period in England runs from the 5th to the 11th centuries AD. The early Anglo-Saxon period dates from around 410-660 AD, with migration occurring throughout all but the final 100 years (i.e. 410-560 AD).

Studying ancient skulls

Published in PLOS ONE, the collaborative study by Professor Dobney at University of Sydney and Dr Kimberly Plomp and Professor Mark Collard at Simon Fraser University in Vancouver, looked at the three-dimensional shape of the base of the skull.

"Previous studies by palaeoanthropologists have shown that the base of the human skull holds a shape signature that can be used to track relationships among human populations in a similar way to ancient DNA," Dr Plomp said. "Based on this, we collected 3D data from suitably dated skeletal collections from Britain and Denmark, and then analysed the data to estimate the ancestry of the Anglo-Saxon individuals in the sample."

The researchers found that between two-thirds and three-quarters of early Anglo-Saxon individuals were of continental European ancestry, while between a quarter and one-third were of local ancestry.

When they looked at skeletons dated to the Middle Anglo-Saxon period (several hundred years after the original migrants arrived), they found that 50 to 70 percent of the individuals were of local ancestry, while 30 to 50 percent were of continental European ancestry, which probably indicates a change in the rate of migration and/or local adoption of culture over time.

"These findings tell us that being Anglo-Saxon was more likely a matter of language and culture, not genetics," Professor Collard said.

The debate about Anglo-Saxons

Although Anglo-Saxon origins can clearly be traced to a migration of Germanic-speaking people from mainland Europe between the 5th and 7th centuries AD, the number of individuals who settled in Britain is still contested, as is the nature of their relationship with the pre-existing inhabitants of the British Isles, most of whom were Romano-Celts.

The ongoing and unresolved argument is whether hordes of European invaders largely replaced the existing Romano-British inhabitants, or whether smaller numbers of migrants settled and interacted with the locals, who then rapidly adopted the new language and culture of the Anglo-Saxons.

"The reason for the ongoing confusion is the apparent contradiction between early historical texts (written sometime after the events that imply that the newcomers were both numerous and replaced the Romano-British population) and some recent biomolecular markers directly recovered from Anglo-Saxon skeletons that appears to suggest numbers of immigrants were few," said Professor Dobney.

"Our new data sits at the interface of this debate and implies that early Anglo-Saxon society was a mix of both newcomers and immigrants and, instead of wholesale population replacement, a process of acculturation resulted in Anglo-Saxon language and culture being adopted wholesale by the local population."

"It could be this new cultural package was attractive, filling a vacuum left at the end of the Roman occupation of Britain. Whatever the reason, it lit the fuse for the English nation we have today - still comprised of people of different origins who share the same language," Professor Dobney said.

Credit: 
University of Sydney

Roughness of retinal layers, a new Alzheimer's biomarker

Over recent years, the retina has established its position as one of the most promising biomarkers for the early diagnosis of Alzheimer's. Moving on from the debate as to whether the retina becomes thinner or thicker, researchers from the Universidad Complutense de Madrid and Hospital Clínico San Carlos are focusing their attention on the roughness of the ten retinal layers.

The study, published in Scientific Reports, "proves innovative" in three aspects according to José Manuel Ramírez, Director of the IIORC (Ramón Castroviejo Institute of Ophthalmologic Research) at the UCM. "This is the first study to propose studying the roughness of the retina and its ten constituent layers. They have devised a mathematical method to measure the degree of wrinkling, through the fractal dimension, and have discovered that in some layers of the retina these measurements indicate that wrinkling begins at very early stages of Alzheimer's disease," explains the IIORC expert.

To undertake the study, launched six years ago, the researchers developed computer programs allowing them to separate each layer of the retina. Following this subdivision, the problem which arose was how to distinguish the roughness of one layer from that of the neighbouring layers.

"As each is in contact with the others, the wrinkling of one layer is transmitted to the adjacent layers, and their roughness becomes blurred. The solution was to flatten each layer mathematically on each side and study the roughness remaining on the other side," indicates Lucía Jáñez, the lead author of the publication.

Software development to calculate roughness

The second problem faced in the research was to find a procedure to measure roughness. "The solution lay in calculating the fractal dimension of the side of each retinal layer studied," explains Luis Jáñez, researcher at the UCM's ITC (Institute of Knowledge Technology).

"A flat surface has only two dimensions: length and width, but if it is folded or wrinkled it progressively takes on body and begins to appear a three-dimensional solid object. The fractal dimension adopts fractional values between 2 and 3, and so is suitable to measure the degree of wrinkling of retinal layers," he adds.

The final step taken by the group was to incorporate the technology they had developed within the Optical Coherence Tomography (OCT) currently available on the market, using mathematical analysis to express this in software which calculates the roughness of each retinal layer, and establishes the boundary between health/illness.

For the patient, this is a simple, quick and low-cost test. "No prior preparation is required. They simply turn up for an ophthalmology appointment, sit facing the machine and spend about 4 seconds looking at a dot of light inside: that generates the OCT image. The analysis of the roughness of the image is performed by a computer program in less than one minute," the ITC researcher indicates.

After a decade working in this field, researchers understand how the eyesight of patients with Alzheimer's evolves, and the changes in retinal thickness. "From now on, with this new technique we can research how to use retinal roughness to monitor and ascertain the stage of Alzheimer's disease," predicts the IIORC researcher Elena Salobrar García.

As well as being used in Alzheimer's, the methods they have developed could be applied in studying other diseases, such as ALS or Parkinson's, "the effects of which on the retina we are now beginning to understand. As well as contributing to advances in neuroscience, this might also be useful in ophthalmology," concludes Omar Bachtoula, researcher at the UCM Psychology Faculty.

Credit: 
Universidad Complutense de Madrid

Sneeze cam reveals best fabric combos for cloth masks (video)

image: High-speed videos of a person sneezing reveal the best fabric combos for cloth masks.

Image: 
American Chemical Society

During the COVID-19 pandemic, cloth face masks became a way to help protect yourself and others from the virus. And for some people, they became a fashion statement, with many fabric choices available. But just how effective are they, especially in containing a sneeze? Now, researchers reporting in ACS Biomaterials Science & Engineering used high-speed videos of a person sneezing to identify the optimal cloth mask design. Watch a video of the sneeze cam here.

Early in the pandemic, worldwide shortages of surgical masks and N95 respirators led many people to make or purchase cloth face masks. Now, with safe and effective COVID-19 vaccines available, mask restrictions are easing in many states. However, face masks will likely still be required in certain settings for a while, especially with possible vaccine-resistant variants emerging. They might also be useful in future pandemics. Face masks help reduce disease spread by blocking tiny, virus-laden droplets expelled through the nose and mouth when a person speaks, coughs or sneezes. A few studies have examined the effectiveness of various fabrics for blocking droplets and aerosols made by a machine, but until now, none have been conducted under the explosive conditions of a real human sneeze. Shovon Bhattacharjee, Raina MacIntyre and colleagues at the University of New South Wales wanted to see how well masks made of various fabrics and layers blocked respiratory droplets from the sneezes of a healthy adult.

The researchers made simple face masks with 17 commonly available fabrics. Each mask had one, two or three layers of the same or different fabrics. A healthy 30-year-old volunteer donned each mask, tickled the inside of his nose with tissue paper on a cotton swab, and then readjusted the mask just before the onset of a sneeze. The researchers captured high-speed videos of the sneezes and computed the intensity of droplets in the images in a region 2 cm from his mouth. With each fabric layer, the droplet-blocking capability improved by more than 20-fold. Interestingly, all of the three-layer cloth combinations the researchers tested were more effective than a three-layer surgical mask. The best masks for blocking droplets contained a hydrophilic inner layer of cotton or linen, an absorbent middle layer of a cotton/polyester blend and a hydrophobic outer layer of polyester or nylon. Machine washing the masks didn't decrease their performance; in fact, masks containing cotton or polyester worked slightly better after washing because of pore shrinkage. Future studies are planned with more people and different age groups, the researchers say.
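The published image analysis is more involved, but the basic step of quantifying droplets in a fixed window of each high-speed frame could look something like the sketch below; the threshold and region placement are placeholders, not the authors' parameters.

```python
import numpy as np

def droplet_intensity(frames, roi, threshold=30):
    """Sum pixel intensity of droplet-like bright features inside a region of
    interest across high-speed video frames (one value per frame).

    frames: array of shape (n_frames, height, width), grayscale video.
    roi: (row_start, row_stop, col_start, col_stop) for the analysis window
        (e.g. a region roughly 2 cm from the mouth; placement is assumed).
    threshold: minimum pixel value treated as droplet signal (placeholder).
    """
    r0, r1, c0, c1 = roi
    region = frames[:, r0:r1, c0:c1].astype(float)
    droplet_pixels = np.where(region > threshold, region, 0.0)
    return droplet_pixels.sum(axis=(1, 2))

# A mask's blocking performance could then be summarised as the ratio of total
# droplet intensity with no mask to the total intensity with the mask worn.
```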

Credit: 
American Chemical Society

'Lady luck' - Does anthropomorphized luck drive risky financial behavior?

A new study published in the Journal of the Association for Consumer Research posits that increased accessibility to anthropomorphized luck (i.e., "lady luck") can lead consumers to be more likely to pursue higher-risk financial behavior. In "Lady Luck: Anthropomorphized Luck Creates Perceptions of Risk-Sharing and Drives Pursuit of Risky Alternatives," authors Katina Kulow, Thomas Kramer, and Kara Bentley propose that preferences for higher-risk options (like lottery tickets with worse odds or investment opportunities with a low chance of return) are driven by shared risk perceptions that might engender feelings of security provided by the idea of "lady luck." This behavior, the authors note, "bodes ill for consumer welfare, given that many financial maladaptive activities arise from repeated behaviors."

In four experiments, the authors conducted regression and spotlight analyses on data from online studies with MTurk panelists and undergraduate students, which involved financial risk decisions such as lotteries or startup investments along with measures of various risk perceptions. The authors find increased preferences for higher-risk alternatives when consumers anthropomorphize luck in financial decisions; when given the opportunity to gamble with social rather than financial capital, however, participants' responses suggest that consumers perceive more control over outcomes and feel less in need of the security provided by an anthropomorphized entity.

The results hold public policy implications, such as whether marketers may be required to qualify references to anthropomorphized luck, particularly when consumers may be vulnerable to taking undue financial risks, such as in gambling establishments. For example, the research suggests that a sign in a casino insinuating that 'Lady Luck is on Your Side' could lead gamblers to engage in higher-risk behaviors than a sign that merely suggests that "Luck is on Your Side" or "Good Luck." Limiting the use of "lady luck" on lottery scratch-off tickets could prevent devastating financial losses among lower socioeconomic status consumers.

Credit: 
University of Chicago Press Journals