Tech

Face masks effectively limit SARS-CoV-2 transmission

'Don't forget the mask' - although most people nowadays follow this advice, professionals express differing opinions about the effectiveness of face masks. An international team led by researchers from the Max Planck Institute for Chemistry in Mainz, Germany, has now used observational data and model calculations to answer open questions. The study shows under which conditions, and in which ways, masks actually reduce individual and population-average risks of being infected with COVID-19 and help mitigate the coronavirus pandemic. In most environments and situations, even simple surgical masks effectively reduce the transmission of SARS-CoV-2 and the effective reproduction number for COVID-19. In environments with potentially high airborne virus concentrations, such as medical settings and densely occupied indoor spaces, however, masks with higher filtration efficiency (N95/FFP2) should be used and combined with other protective measures such as intensive ventilation.

Face masks are among the simplest, easiest-to-use, and most effective measures against the airborne transmission of infectious respiratory diseases, but their usefulness against COVID-19 is still under debate. Some earlier investigations found that masks were apparently not effective under certain conditions. Others found high efficacies, but a conclusive explanation for these apparent contradictions and inconsistencies had not been given.

Researchers from the Max Planck Institute for Chemistry (MPIC), the Medical Center of the Johannes Gutenberg University Mainz, and the Charité - Universitätsmedizin Berlin together with partners from China and the USA used observational data and a novel quantitative model of airborne virus exposure to elucidate how the efficacy of face masks depends on characteristic regimes of airborne virus concentration.

In most situations, even simple surgical masks are effective

"For the airborne transmission of SARS-CoV-2, we find that usually just a minor fraction of exhaled respiratory particles contains viruses. Most environments and contacts are under virus-limited conditions, where face masks, including simple surgical masks, have a high efficacy in preventing the spread of COVID-19", explains Yafang Cheng, the head of a Minerva Research Group at the MPIC. "Our study provides a detailed and novel mechanistic understanding of population-average mask efficacy, which explains why regions with a higher percentage of the population wearing masks have better control of the pandemic."

In virus-rich indoor environments with high infection probability, however, more advanced masks (N95/FFP2) and other protective equipment are required to prevent airborne transmission. The strong dependence of mask efficacy on airborne virus concentration highlights the importance of combining masks with other protective measures such as ventilation and distancing to keep the infection probability low.
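The intuition behind this concentration dependence can be illustrated with a simple exponential dose-response model - a common simplification, not the study's full calculation. Because infection probability saturates at high doses, the same mask cuts relative risk far more under virus-limited conditions than under virus-rich ones. In the sketch below, the characteristic dose d0, the assumed 50% filtration of a surgical mask, and the two scenario doses are all illustrative values.

```python
import numpy as np

# Illustrative exponential dose-response: P = 1 - exp(-dose / d0), where d0
# is an assumed characteristic infectious dose. A mask scales the inhaled
# dose by its penetration factor (1 - filtration efficiency).

def infection_probability(dose, d0=100.0):
    return 1.0 - np.exp(-dose / d0)

surgical_penetration = 0.5  # assumed ~50% filtration for a simple surgical mask

for dose in (10.0, 10_000.0):  # virus-limited vs. virus-rich scenario
    p_no_mask = infection_probability(dose)
    p_mask = infection_probability(dose * surgical_penetration)
    reduction = 1.0 - p_mask / p_no_mask
    print(f"dose={dose:>7.0f}: P(no mask)={p_no_mask:.3f}, "
          f"P(mask)={p_mask:.3f}, risk reduction={reduction:.0%}")
```

Run as written, the low-dose scenario shows close to the mask's full filtration benefit (roughly a 49% risk reduction), while in the high-dose scenario both probabilities saturate near one and the apparent benefit all but vanishes - the regime in which better masks and additional measures become necessary.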

"The combination of high-efficiency masks with other protective measures is particularly important for hospitals, medical centers, and other indoor environments, where high risk patients may encounter high virus concentrations", says Christian Witt, head of the Research Area Pneumology at the Charité - Universitätsmedizin Berlin. "Masks will remain an important protective measure against SARS-Cov-2 infection - even for vaccinated persons, especially when the protection provided by vaccination decreases over time."

The approach can be used to assess protection against more infectious mutants

"Our approach and results of relating the effectiveness of protective measures to the infection probability and basic reproduction number are applicable to a wide range of respiratory viruses and diseases, including coronaviruses, rhinoviruses, and influenza. They can also be used to assess the efficacy of masks and other preventive measures against new and more infectious mutants of SARS-CoV-2." says Hang Su, research group leader at the MPIC. "Our investigations also show that aerosol transmission does not necessarily lead to very high reproduction numbers as observed for measles, and that relatively low reproduction numbers do not rule out airborne transmission."

Moreover, the study demonstrates how important high compliance and correct use of masks are to ensure their effectiveness in reducing the reproduction number of COVID-19. To reduce the reproduction number from ~3 as originally observed to below 1, at least 60-70% compliance would be required for surgical masks (~40% for N95/FFP2 masks). Higher rates of compliance would be required for more infectious variants of SARS-CoV-2, which re-emphasizes that masks should be combined with other protective measures like ventilation and distancing for efficient reduction of infection probabilities and reproduction numbers.
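A back-of-envelope sketch of that compliance arithmetic, assuming masks filter the dose once on exhalation and once on inhalation, reproduces numbers in the same ballpark as those quoted above. The per-pass population-average efficacies used here (0.7 for surgical masks, 0.95 for N95/FFP2) are illustrative assumptions; the paper's own model is considerably more elaborate.

```python
import numpy as np

# Crude model: if a fraction c of the population wears masks that remove a
# fraction eps of virus on exhalation and again on inhalation, then
# R_eff = R0 * (1 - c * eps)**2.

def r_eff(c, eps, r0=3.0):
    return r0 * (1.0 - c * eps) ** 2

def compliance_for_control(eps, r0=3.0, r_target=1.0):
    """Smallest compliance c in [0, 1] giving R_eff <= r_target."""
    c = np.linspace(0.0, 1.0, 10_001)
    ok = r_eff(c, eps, r0) <= r_target
    return c[ok][0] if ok.any() else None

# Illustrative per-pass efficacies (assumed values, not the paper's):
for name, eps in [("surgical, eps=0.7", 0.7), ("N95/FFP2, eps=0.95", 0.95)]:
    print(f"{name}: compliance needed ~ {compliance_for_control(eps):.0%}")
```

With these assumed efficacies, the sketch yields roughly 60% compliance for surgical masks and about 45% for N95/FFP2, consistent in spirit with the figures above.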

"Our study explains quantitatively why and how face masks are highly effective in virus-limited environments and less effective in virus-rich environments - both at the individual and the population average level related to observed infection rates and effective reproduction numbers. This has not been achieved before and is essential to overcome inconclusive earlier results, arguments, and discussions. We are confident, that the mechanistic insights and quantitative results gained in our study constitute a scientific breakthrough that will help to settle the ongoing debate about the usefulness of masks and promote efficient mitigation of the COVID pandemic," summarizes Ulrich Pöschl, director of the MPIC Multiphase Chemistry Department.

Credit: 
Max Planck Institute for Chemistry

New AI-based tool can find rare cell populations in large single-cell datasets

image: Ken Chen, Ph.D.

Image: 
MD Anderson Cancer Center

HOUSTON - Researchers at The University of Texas MD Anderson Cancer Center have developed a first-of-its-kind artificial intelligence (AI)-based tool that can accurately identify rare groups of biologically important cells from single-cell datasets, which often contain gene or protein expression data from thousands of cells. The research was published today in Nature Computational Science.

This computational tool, called SCMER (Single-Cell Manifold presERving feature selection), can help researchers sort through the noise of complex datasets to study cells that would likely not be identifiable otherwise.

SCMER may be used broadly for many applications in oncology and beyond - including the study of minimal residual disease, drug resistance and distinct populations of immune cells - explained senior author Ken Chen, Ph.D., associate professor of Bioinformatics & Computational Biology.

"Modern techniques can generate lots of data, but it has become harder to determine which genes or proteins actually are important in those contexts," Chen said. "Small groups of cells can have important features that may play a role in drug resistance, for example, but those features may not be sufficient to distinguish them from more common cells. It's become very important in analyzing single-cell datasets to be able to detect these rare cells and their unique molecular features."

Developing methods to effectively study small or rare cell populations in cancer research is a direct response to one of the provocative questions posed by the National Cancer Institute (NCI) in 2020, which designated this an important and underexplored research area. SCMER was designed to address the issue and to enable researchers to get the most out of increasingly complex datasets.

Rather than the traditional approach of sorting cells into clusters based on all data contained in a dataset, SCMER takes an unbiased look to detect the most meaningful distinguishing features that define unique groups of cells. This allows researchers not only to detect rare cell populations, but to generate a compact set of genes or proteins that can be used to detect those cells among many others. To highlight the utility of SCMER, the research team applied it to analyze several published single-cell datasets and found it compared favorably to currently available computational approaches.
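The flavor of this approach can be conveyed with a toy greedy selector that scores candidate features by how well pairwise cell distances computed from the selected subset track the distances computed from the full matrix. This is a deliberately simplified stand-in, not the published SCMER algorithm, and the data and parameters below are synthetic.

```python
import numpy as np

# Toy manifold-preserving feature selection: greedily pick features so that
# pairwise cell distances from the selected features stay close to the
# distances from all features.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # 200 cells x 50 genes (synthetic data)

def pairwise_distances(A):
    d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    return d[np.triu_indices(len(A), k=1)]  # flattened upper triangle

target = pairwise_distances(X)  # reference structure from the full matrix
selected, remaining = [], list(range(X.shape[1]))

for _ in range(10):  # build a compact set of 10 features
    scores = []
    for j in remaining:
        cand = selected + [j]
        # structure preservation = correlation with full-space distances
        scores.append(np.corrcoef(pairwise_distances(X[:, cand]), target)[0, 1])
    best = remaining[int(np.argmax(scores))]
    selected.append(best)
    remaining.remove(best)

print("selected feature indices:", selected)
```

On real single-cell data, the selected genes would form the kind of compact marker panel described above, usable to find the same rare populations again in other datasets.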

In a reanalysis of more than 4,500 melanoma cells, SCMER was able to distinguish the cell types present using the expression of just 75 genes. The results also pointed to a number of genes involved in tumor development and drug resistance that were not identified as meaningful in the original study.

In a complex dataset of nearly 40,000 gastrointestinal immune cells, SCMER separated cells using only 250 distinct features. This analysis recovered all of the cell types detected in the original study and, in many cases, further defined subgroups of rare cells that had not previously been identified.

Finally, the research team applied SCMER to study more than 1,400 lung cancer cells taken at various points in time after drug treatment. Using just 80 genes, the tool was able to accurately distinguish cells based on treatment responses and pointed to possible novel drivers of therapeutic resistance.

"Using state-of-the-art AI techniques, we have developed an efficient and user-friendly tool capable of uncovering new biological insights from rare cell populations," Chen said. "SCMER offers researchers the ability to reduce highly dimensional, complex datasets into a compact set of actionable features with biological significance."

Credit: 
University of Texas M. D. Anderson Cancer Center

Compound commonly found in candles lights the way to grid-scale energy storage

image: PNNL scientist Ruozhu Feng created a series of molecular engineering steps to cultivate fluorenone's energy-carrying capability, part of an effort at Pacific Northwest National Laboratory to develop new energy-storage technology for the grid.

Image: 
(Photo by Andrea Starr | Pacific Northwest National Laboratory)

A compound used widely in candles offers promise for a much more modern energy challenge--storing massive amounts of energy to be fed into the electric grid as the need arises.

Scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory have shown that low-cost organic compounds hold promise for storing grid energy. Fluorenone, a common, bright yellow powder, was at first a reluctant participant, but with enough chemical persuasion it has proven to be a potent partner for energy storage in flow batteries, the large systems that store energy for the grid.

Development of such storage is critical. When the grid goes offline due to severe weather, for instance, the large batteries under development would kick in, boosting grid resilience and minimizing disruption. The batteries can also be used to store renewable energy from wind and solar, for use when the winds are quiet or the sun's not shining.

Details of the research, supported by DOE's Office of Electricity, are published in the May 21 issue of the journal Science.

"Flow battery technology is a critical part of the Department of Energy's goal to reduce the cost of grid energy storage over the next decade," said Imre Gyuk, director of Energy Storage at DOE's Office of Electricity. "Progress has been rapid, and the cost has come down significantly, but further research is needed to make grid-scale energy storage widely available."

Flow batteries for the grid: going organic

Scientists are making tremendous strides toward creating better batteries--storing more energy at lower cost and lasting longer than ever before. The results touch many aspects of our lives, translating to a more resilient electric grid, longer-lasting laptop batteries, more electric vehicles, and greater use of renewable energy from blowing wind, shining sun, or flowing water.

For grid-scale batteries, identifying the right materials and combining them to create a new recipe for energy storage is a critical step in the world's ability to harness and store renewable energy. The most widely used grid-scale batteries use lithium-ion technology, but those are difficult to customize moment to moment in ways most useful to the grid, and there are safety concerns. Redox flow batteries are a growing alternative; however, most use vanadium, which is expensive, not easily available, and prone to price fluctuations. Those traits pose barriers to widespread grid-scale energy storage.

Alternative materials for flow batteries include organic molecules, which are far more available, more environmentally friendly and less expensive than vanadium. But organics haven't held up well to the demands of flow-battery technology, usually petering out faster than required. Long-term stability of the molecules is important so they maintain their ability to perform chemical reactions for many years.

"These organic materials are made out of the most common materials available--carbon, hydrogen and oxygen," said Wei Wang, the PNNL scientist who leads the flow battery team. "They are easily available; they don't need to be mined, as substances like vanadium do. This makes them very attractive for grid-scale energy storage."

In the Science paper, Wang's team demonstrated that low-cost organic fluorenone is, surprisingly, not only a viable candidate but also a standout performer when it comes to energy storage.

In laboratory testing that mimicked real-world conditions, the PNNL battery operated continuously for 120 days, ending only when other equipment unrelated to the battery itself wore out. The battery went through 1,111 full cycles of charging and discharging--the equivalent of several years of operation under normal circumstances--and lost less than 3 percent of its energy capacity. Other organic-based flow batteries have operated for a much shorter period.

The flow battery the team created is only about 10 square centimeters, about the size of a large postage stamp, and puts out about 500 milliwatts of power, not even enough to power a cell phone camera. But the tiny structure embodies tremendous promise: Its energy density is more than twice that of the vanadium batteries in use today and its chemical components are inexpensive, long lasting and widely available.

Molecular engineering puts fluorenone into reverse

The development was made possible thanks to a team of scientists, including first author Ruozhu Feng, technical lead Xin Zhang and others.

PNNL scientists played an important role in developing the vanadium-based flow batteries used today. A few years ago the team turned its attention to organic molecules because of their broad availability and low cost. In 2018 Zhang joined the team as part of an effort to tune the material for energy storage, bringing deep knowledge of fluorenone from previous research in LEDs.

Fluorenone is also used in solar panels, in pharmaceuticals such as drugs to treat malaria, and in candles, to give them a pleasant scent. It's inexpensive and readily available as a waste product from coal tar and from the manufacture of benzoic acid, a common food additive.

Zhang focused his attention on fluorenone as the heart of an aqueous (water-based) flow battery, but there were barriers. For one, the molecule wasn't water-soluble enough. And the molecule hadn't displayed redox reversibility in aqueous solutions; that is, scientists hadn't demonstrated that it could both easily accept and donate electrons, two complementary and mandatory steps for a flow battery.

Feng created a series of complex chemical steps--what Wang calls "molecular engineering"--to transform fluorenone to a redox reversible, water-soluble compound. One part of the process has long been easy for fluorenone: to gain an electron in a process known as reduction. But it took dogged chemical persuasion by Feng to bring about the other half of the process--oxidation, the loss of an electron--to make the process reversible and suitable for energy storage.

Unexpectedly, Feng discovered that the ability of fluorenone to carry out reversible reactions is dependent on its concentration--more of the substance dissolved in the water makes the reversibility possible. Scientists hadn't witnessed the phenomenon with organic molecules before.

"This is a great demonstration of using molecular engineering to change a material from one widely considered impossible for use into something useful for energy storage," said Wang. "This opens up important new chemical space that we can explore."

The team also increased the solubility of fluorenone in water, from almost 0 with pristine fluorenone up to 1.5 moles per liter, depending on the modifications to the compound. Solubility in a water-based flow battery is vital; the more the material dissolves in water, the more it's available as a chemical partner in the swapping of electrons at the heart of the battery.
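For a rough sense of what that solubility buys, Faraday's law converts the concentration of the dissolved active species into a theoretical volumetric capacity for one electrolyte tank. The one-electron transfer assumed below is an illustrative simplification, not a claim about the fluorenone chemistry itself.

```python
# Faraday's-law sketch: Q [Ah/L] = n * F * c / 3600, with n electrons per
# molecule, F the Faraday constant (C/mol), and c the concentration (mol/L).

F = 96485.0  # Faraday constant, C/mol

def volumetric_capacity_ah_per_l(c_mol_per_l, n_electrons=1):
    return n_electrons * F * c_mol_per_l / 3600.0

# 1.5 M of a soluble fluorenone derivative, assuming one electron transferred:
print(f"{volumetric_capacity_ah_per_l(1.5):.1f} Ah/L")  # ~40.2 Ah/L per tank
```

Doubling either the concentration or the electrons transferred per molecule doubles this figure, which is why solubility engineering matters so much for flow batteries.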

PNNL is encouraging commercialization of fluorenone-based aqueous redox flow batteries and, as a first step, has filed for a patent on the innovation.

The work on flow batteries is part of a large program at PNNL to develop and test new technologies for grid-scale energy storage. PNNL was chosen earlier this year as the home of the Grid Storage Launchpad, a facility created by DOE's Office of Electricity to accelerate development and testing of large grid batteries. A chief goal is to increase the use of readily available materials and bring down the cost, making storage of renewable energy possible for longer periods.

In addition to Feng, Zhang and Wang, authors include PNNL scientists Vijayakumar Murugesan, Aaron Hollas, Ying Chen, Yuyan Shao, Eric Walter, Nadeesha Wellala, Litao Yan and Kevin Rosso. Several measurements using mass spectrometry and nuclear magnetic resonance were done at EMSL, the Environmental Molecular Sciences Laboratory, a DOE Office of Science user facility.

Credit: 
DOE/Pacific Northwest National Laboratory

Immune cells promote proinflammatory fatty liver disease

A particular type of dendritic cell is responsible for the tissue damage that occurs in non-alcoholic steatohepatitis (NASH) in mice and humans. The dendritic cells cause aggressive, proinflammatory behavior in T cells, as now discovered by researchers from the German Cancer Research Center (DKFZ) in collaboration with colleagues from Israeli research institutes. Blocking these dendritic cells alleviates symptoms in mice. This type of approach might also prevent the development of serious liver damage in NASH patients.

Obesity is extremely widespread in the Western world, and 90 percent of those affected show signs of fatty degeneration of the liver. If they maintain an unhealthy lifestyle over a long period (high-calorie diet, sedentary lifestyle), liver cell death occurs in around a fifth of these people, resulting in inflammation of the liver, referred to as non-alcoholic steatohepatitis (NASH). NASH can lead to liver fibrosis, life-threatening liver cirrhosis and liver cancer.

In addition to its well-known role in metabolism and in filtering toxins, the liver also has a strategic function as part of the immune system, acting as the primary line of defense against all microbial toxins and food contaminants that enter the body from the intestines via the portal vein. In order to perform this task, a whole army of different immune cells patrol the liver.

"We wanted to find out which immune or inflammatory cells in the liver promote NASH and the liver damage associated with it," explained Mathias Heikenwälder from the German Cancer Research Center (DKFZ). DKFZ researchers have now addressed the topic in collaboration with colleagues from the Weizmann Institute of Sciences and Sheba Medical Center in Israel. To do so, they analyzed the connection between the composition of the immune cell population in the liver and the degree of NASH-related liver damage. This enabled them to identify a particular type of immune cell that promotes progression of the disease - in both mice and humans.

Clue from laboratory mice on "junk food"

In order to investigate the immune system in NASH, the researchers fed laboratory mice a diet lacking essential nutrients but enriched with lipids and cholesterol - comparable to our "junk food" - and observed the development of NASH. They studied the liver immune cells using single-cell RNA sequencing and discovered an unusually high number of a particular kind of cell, known as type 1 dendritic cells (cDC1), in the liver of NASH mice.

This phenomenon was not limited to mice. In tissue samples taken from patients in liver biopsies, the researchers found a correlation between the number of cDC1 cells and the extent of liver damage typical of NASH.

Do the cDC1 cells actually have an effect on liver pathology? The researchers pursued two channels of investigation here. They studied genetically modified mice lacking cDC1. In addition, they blocked cDC1 in the liver using specific antibodies. In both approaches, lower cDC1 activity was associated with a decrease in liver damage.

Dendritic cells normally only survive for a few days and need to be continually replaced by the immune system. The researchers discovered that the NASH-related tissue damage modulates the hematopoietic system in the bone marrow, as a result of which the cDC1 precursors divide more often and replenish the supply more readily.

Dendritic cells induce aggressive behavior in T cells

In a normal immune response, dendritic cells screen the organs for conspicuous immunologic features and then continue on to the neighboring lymph nodes - the command centers of the immune response - to pass on this information to the T cells. In NASH subjects, the German-Israeli team has now discovered that the cDC1 induce inflammatory and more aggressive behavior in T cells in the lymph nodes responsible for the liver, causing liver damage and leading to progression of the disease. "It is only recently that we identified these autoaggressive T cells as being responsible for liver damage in NASH. Now we also understand what induces this harmful behavior," Mathias Heikenwälder remarked.

Now that the cDC1 have been shown to play a key role in the progression of NASH, targeted manipulation of these cells might offer a new way of treating inflammation of the liver and its serious repercussions. "We are increasingly recognizing that certain cells of the immune system are involved in the development of different diseases, including cancer, diabetes, and Alzheimer's disease. Medicine is thus increasingly using ways of modulating the immune system and using drugs to push it in the right direction. This kind of approach might also work to prevent serious liver damage in NASH patients," Heikenwälder explained.

Eran Elinav, also a senior author of the study and head of research groups at DKFZ and the Weizmann Institute, believes that it is highly probable that gut bacteria affect the immune cells in this disease: "We now aim to find out how the gut and its bacteria influence activation of the immune cells in the liver. By doing so, we hope to be able to develop new treatment strategies."

Credit: 
German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ)

Solar geoengineering may be effective in alleviating impacts of global warming on crops

Solar geoengineering -- putting aerosols into the atmosphere to reflect sunlight and reduce global warming -- is not a fix-all for climate change but it could be one of several tools to manage climate risks. A growing body of research has explored the ability of solar geoengineering to reduce physical climate changes. But much less is known about how solar geoengineering could affect the ecosystem and, particularly, agriculture.

Now, research from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) finds that solar geoengineering may be surprisingly effective in alleviating some of the worst impacts of global warming on crops.

The research, a collaboration with the Norwegian Research Centre and the Bjerknes Centre for Climate Research, the Norwegian University of Science and Technology, the National Center for Atmospheric Research in Boulder, Seoul National University and the Chinese Academy of Sciences, is published in Nature Food.

"Research on solar geoengineering must address whether or not it is effective at reducing human impacts of climate change," said David Keith, the Professor of Applied Physics at SEAS and Professor of Public Policy at the Harvard Kennedy School. "Our paper helps fill that gap by using the best crop model yet embedded in a climate model to examine the potential impact of solar geoengineering on agricultural yields."

The team looked at three types of solar geoengineering -- stratospheric aerosol injection, marine sky brightening, and cirrus cloud thinning -- and their impact on the global yield of maize, sugarcane, wheat, rice, soy and cotton in a business-as-usual future where emissions continue at their current levels.

In such a future, the most effective way to protect crops against the worst effects of global climate change is to reduce the surface temperature. The researchers found that all three potential solar geoengineering methods have a strong cooling effect that would benefit crop yields.

Previous research suggested that cooling temperatures brought on by stratospheric aerosol injection may also lead to less rainfall, which could result in yield loss for rainfed crops. But these studies didn't look at one of the most important ecological factors in crop transpiration and productivity -- humidity.

"Relative humidity or vapor pressure deficit has stronger control on plant water use and crop productivity than precipitation," said Yuanchao Fan, a Fellow in the Harvard Solar Geoengineering Research Program and first author of the paper. "We found that in a cooler world under multiple scenarios, except cirrus cloud thinning, there will be higher relative humidity, which will alleviate water stress for rainfed crops. Our model shows that the change in precipitation resulting from all three solar geoengineering methods would, in fact, have very little effect on crops."

The researchers compared how agricultural productivity is affected by solar geoengineering and by emissions reductions. They found that while emissions reductions have strong cooling and humidity benefits, they may benefit crop yields less than solar geoengineering that achieves the same temperature reduction, because cutting CO2 also reduces the CO2 fertilization that boosts the productivity of most crops. The finding highlights the need to combine emissions reductions with other tools, including increased use of nitrogen fertilization and changes to land use.

"Climate risks cannot be resolved with any single tool; even if emissions were eliminated tomorrow the world's most vulnerable will still suffer from climate change," said Keith. "Policymakers need to consider how emissions cuts might be supplemented by specific local adaptations to help farmers reduce the impacts of climate on agriculture, and by global actions such as carbon removal and solar geoengineering."

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Stress from 2016 US presidential election associated with increase in cardiac events

image: Stress from 2016 U.S. Presidential Election Associated with Significant Increase in Cardiac Events

Image: 
UNC School of Medicine

CHAPEL HILL, NC - American politics can be stressful and confrontational, which can lead to anger. The combination of intense stress and negative emotions can trigger potentially fatal cardiovascular events in people who are susceptible to these health issues. But the direct link between a stressful political election and an increase in cardiac events hadn't been established, until now. A new study in the Journal of the American Heart Association is the first to show that exposure to a stressful political election is strongly associated with an increase in potentially life-threatening cardiac events.

"This retrospective case-crossover study was conducted in North Carolina, which was a swing state in the 2016 U.S. presidential election," said lead author Lindsey Rosman, PhD, assistant professor of medicine in the division of cardiology at the UNC School of Medicine. "People living in North Carolina were exposed to a particularly high volume of negative political commercials, advertisements and campaign events that were very intense in rhetoric. So, their stress levels may have been especially high leading up to the 2016 election."

The study looked at data from the implanted cardiac devices of 2,500 patients during three periods: a six-week span leading up to and following the 2016 U.S. presidential election, and two control periods - a six-week span from June to July 2016 and a six-week span from October to November 2015. Rosman and her team found a 77% increase in the risk of arrhythmia - an abnormal heart rate or irregular heart rhythm - during the 2016 election period compared with the control periods.

"The increase in risk was significant, even after taking into account known risk factors for cardiovascular disease such as age, hypertension, health behaviors, and other medical conditions," Rosman said.

Researchers found a significant increase in the risk of both atrial arrhythmias, like atrial fibrillation, and potentially life-threatening ventricular arrhythmias.

"We also found a higher burden of atrial fibrillation during the election, and this is important because it can increase your risk of blood clots, stroke, and other heart-related complications," Rosman said.

The study also looked at whether registered Republicans or Democrats experienced more arrhythmias during the election period, and if political concordance - whether a person's registered affiliation matched the election results of the county they live in - had an impact on arrhythmic events.

"We were not able to conclusively show that the election was more stressful for one party over the other because of the size of our study," Rosman said. "Risk of heart events increased for people no matter their political affiliation, race or gender. But we did see that registered Democrats experienced nearly twice as many heart events as Republicans, which is a trend we would like to explore further."

When it comes to political concordance, those who were politically discordant (and may have felt socially or ideologically disconnected from their community) experienced a significant increase in arrhythmic events during the 2016 election. But, there was also an increase in risk for people who aligned as politically concordant.

The consequences of increased cardiac events due to stressful elections could be significant. With U.S. presidential elections happening every four years and midterms every two, Rosman says more research on the subject is needed to study the population-level health impact. She and her team hope to perform a similar study on a nationwide scale of stress and cardiac events during the 2020 U.S. presidential election.

Credit: 
University of North Carolina Health Care

Study finds evidence emotional support animals benefit those with chronic mental illness

image: Dr. Janet Hoy-Gerlach, professor of social work, gives a high-five to her dog, Henderson. Hoy-Gerlach is the lead author on a new study that finds emotional support animals can provide quantifiable benefits to individuals with serious mental illness.

Image: 
Daniel Miller | The University of Toledo

A team led by a social work researcher at The University of Toledo has published the first empirical evidence that emotional support animals can provide quantifiable benefits to individuals with serious mental illness who are experiencing depression, anxiety and loneliness.

The research lends credence to the many anecdotal reports of emotional support animals having positive impacts on chronic mental health issues.

"This is the first peer-reviewed, published scientific evidence that emotional support animals may benefit people's mental health," said Dr. Janet Hoy-Gerlach, a professor of social work and the lead investigator on the project. "My hope is that our pilot study catalyzes additional research in this area with more rigorous methodology."

Frequently misunderstood and often maligned, emotional support animals are neither household pets nor highly trained service animals.

Emotional support animals need no formal training or certification but are recognized in writing by a health or mental healthcare professional as therapeutically needed for a person with a health or mental health condition. The person's condition must meet the definition of a disability under the Fair Housing Act, a federal housing policy that protects against disability-related housing discrimination.

While there is a sizeable body of research on the benefits of pets that helps to inform the recommendation of emotional support animals in healthcare, there has been no previously published scientific research focusing specifically on the benefits of emotional support animals.

In the UToledo pilot study, researchers from the College of Health and Human Services followed a small group of study participants who were paired with a shelter dog or cat through the Hope and Recovery Pet Program, an innovative community partnership of UToledo, the Toledo Humane Society and ProMedica.

Participants in the study, all of whom met low-income criteria and were identified as at risk of social isolation, were referred by their mental health providers.

Hoy-Gerlach and her collaborators regularly tested participants for changes in a trio of biomarkers related to stress and bonding, and administered surveys about participants' depression, anxiety and loneliness prior to adoption and at the end of the 12-month study period.

At the conclusion of the study, they found a statistically significant decrease in participants' depression, anxiety and loneliness as measured by standardized scales.

The researchers also observed a consistent pattern of higher amounts of the bonding hormone oxytocin and lower amounts of the stress hormone cortisol after participants engaged in focused interactions with their emotional support animal for 10-minute periods.

While not a statistically significant finding, the analysis hinted that participants may have benefited from their animals at a biological level.

"The biomarker findings, along with the standardized stress, anxiety and loneliness surveys and qualitative interviews together suggest insights into how emotional support animals may help reduce symptoms and loneliness associated with chronic mental illness," Hoy-Gerlach said. "We can't make any generalizations or big sweeping claims, but the findings are pretty straightforward for this particular group of people."

Researchers observed the highest oxytocin increase at the 12-month mark, which could indicate participants' bond with their dog or cat had strengthened over time.

Qualitative research corroborated this idea: In open-ended interviews, study participants talked about feeling much more emotionally attached to their respective animals at the end of the study.

The research, published Monday in the Human-Animal Interaction Bulletin, builds on Hoy-Gerlach's previous research into the human-animal bond and could lead the way toward new thinking about how emotional support animals can be implemented as a strategy in managing chronic mental health issues.

A trained clinical social worker with extensive experience in counseling, crisis work and public mental health, Hoy-Gerlach became interested in studying how animals affect mental health after working on suicide assessments and finding that people's pets were frequently a protective factor.

She has since devoted much of her academic research to the topic. In 2017, she published the book "Human-Animal Interactions: A Social Work Guide."

While the recently published study was small, Hoy-Gerlach said it could serve as a major step toward demonstrating the value of emotional support animals for human health.

"We have seen a significant increase in social isolation because of COVID-19, particularly among those most vulnerable to its effects. While our research was initiated before the pandemic, the findings couldn't be more applicable," she said. "Now more than ever, we need to be thinking about leveraging every resource at our disposal."

Such efforts can benefit both people and animals in need. The Hope and Recovery Pet Program exemplifies this, Hoy-Gerlach said, providing emotional support animals for people with mental illness while placing homeless animals into permanent, loving homes.

"The human-animal bond is an underutilized resource for both human and animal well-being," Hoy-Gerlach said.

Hoy-Gerlach's findings also serve to push back against the idea that emotional support animals are little more than a scheme aimed at exploiting the system to give household pets special status.

"The narrative of emotional support animal fraud has unfortunately gained traction in the media and public eye, and that obscures the very real ways in which emotional support animals can benefit people," Hoy-Gerlach said. "For the individuals in our study who are living with chronic mental illness, being paired with an appropriate animal appears to have demonstrable positive effects on their well-being."

Credit: 
University of Toledo

The driving force behind tropical mudslides

image: Nicolas Pérez-Consuegra hammering into a rock outcrop to obtain a sample for thermochronology analyses from the mountains in the Putumayo region of Colombia.

Image: 
Syracuse University

In April 2017, a landslide in Mocoa, Colombia, ripped through a local town, killing more than 300 people. Nicolás Pérez-Consuegra grew up about 570 miles north in Santander, Colombia, and was shocked as he watched the devastation on television. At that time, he was an undergraduate intern at the Smithsonian Tropical Research Institute in Panama. As a budding geologist raised hiking the tropical mountains of Colombia, he wondered: what causes greater erosion in some areas of the mountains than in others? And is it tectonic forces - where Earth's tectonic plates slide against one another, leading to the formation of steep mountains - or high precipitation rates that play the more important role in causing erosion within that region?

To answer those questions would require a geological understanding of the evolution of the mountains in Colombia. During his undergraduate internship, Pérez-Consuegra studied the mountains near the towns of Sibundoy and Mocoa in the southern region of Colombia. There, he observed thick rainforests covering steep mountains and many landslide scars in the cliffs. There were also many landslides along the road, leading him to believe that the buildup and release of stress along tectonic faults was shaking the landscape, stripping rock from the surface and shedding it into the rivers.

To find out more about the forces shaping the steep terrain of that region, Pérez-Consuegra pursued a doctoral degree in the College of Arts and Sciences' Department of Earth and Environmental Sciences (EES). He says the opportunity to develop his own research ideas was one of the key reasons he chose Syracuse University. Pérez-Consuegra led the study from start to finish, proposing the research questions, hypotheses and methodology, with help from his Ph.D. advisor Gregory Hoke, associate professor and associate chair of EES, and Paul Fitzgerald, professor and director of graduate studies in EES. He also obtained research grants and support from EES and a number of outside sources, including a National Geographic Early Career Grant, which together fully funded three field expeditions to Colombia and the analytical work on rock samples collected there.

Pérez-Consuegra and Hoke conducted field research in the Eastern Cordillera portion of the Colombian Andes. During those expeditions the team hiked and traveled by both car and boat to various altitudes to collect over 50 rock samples. Rocks were then shipped to Syracuse University and processed in labs to extract the thermochronology data.

According to Pérez-Consuegra, a thermochronometer is like a stopwatch that starts ticking once a rock cools through a specific range of temperatures, keeping track of the time the rock's subsequent journey to the Earth's surface takes. The mineral apatite is the radioactive stopwatch he employs in his studies. Several kilograms of rock sample are processed to yield a few grams of apatite, which contain two types of temperature-dependent stopwatches, or thermochronometers. Researchers can then determine the long-term erosion rate - how fast rock moves toward the Earth's surface - by converting the stopwatch's starting temperature to a depth below the surface via the geothermal gradient and dividing that depth by the cooling age.
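That recipe translates into a few lines of arithmetic. In the sketch below, the ~70 °C closure temperature for the apatite (U-Th)/He system, the 15 °C surface temperature, the 25 °C/km geothermal gradient, and the 5-million-year cooling age are assumed, typical-order values, not measurements from this study.

```python
# Convert a thermochronometer's closure temperature to a depth via the
# geothermal gradient, then divide by the cooling age to get an erosion rate.

def erosion_rate_km_per_myr(closure_temp_c, age_myr,
                            surface_temp_c=15.0, gradient_c_per_km=25.0):
    depth_km = (closure_temp_c - surface_temp_c) / gradient_c_per_km
    return depth_km / age_myr

# Assumed apatite (U-Th)/He closure near 70 C and a 5-million-year age:
print(f"{erosion_rate_km_per_myr(70.0, 5.0):.2f} km/Myr")  # ~0.44 km/Myr
```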

Pérez-Consuegra's study revealed that the highest erosion rates occur near the places with the most tectonically active faults. While precipitation may act as a catalyst for erosion at the surface of the mountains, the main driver is the faults, along which rock is exhumed from deep below the Earth's surface at faster rates.

"Tectonically active faults are causing uplift of the mountains surrounding Mocoa and are also making the landscape steeper," Pérez-Consuegra says. "Steeper and taller mountains are more prone to have landslides. Rainfall, and specifically torrential rains, can trigger the landslides, but what sets the stage are the tectonic processes."

Hoke says that while geomorphologists would like to think that rainfall rates can take over as the major influence on mountain formation, Pérez-Consuegra's research proves that Earth's internal deformation is the main factor.

"While prior work within a bullseye of high-rainfall in Colombia's Eastern Cordillera initially pointed towards a strong climate control on mountain growth, Nicolás' work expanded the same types of observations to another precipitation hotspot over 250 miles away and found the rates at which rock is transported to the surface were dependent on fault activity, and not precipitation amount," Hoke says.

Pérez-Consuegra, who will start a postdoctoral fellowship in environmental sciences at MIT in the fall, notes that geological knowledge is essential for predicting what areas in a tropical mountain range are more prone to have landslides, earthquakes and volcanic eruptions, and the catastrophic consequences that these events might have in the surrounding populations.

"It is important to invest in doing better geological mapping in tropical mountains, to better understand the spatial distribution and geometries of tectonically active faults," Pérez-Consuegra says.

Read more about Pérez-Consuegra's research in the journal Tectonics: "The Case for Tectonic Control on Erosional Exhumation on the Tropical Northern Andes Based on Thermochronology Data."

Credit: 
Syracuse University

Solid-state batteries line up for better performance

image: Illustration of a conventional solid-state battery and the team's new high-performance design that contains tailored electrode-electrolyte interfaces.

Image: 
Graphic courtesy Beniamin Zahiri and Paul Braun

CHAMPAIGN, Ill. -- Solid-state batteries pack a lot of energy into a small space, but their electrodes are not good at keeping in touch with their electrolytes. Liquid electrolytes reach every nook and cranny of an electrode to spark energy, but liquids take up space without storing energy and fail over time. Researchers are now putting solid electrolytes in touch with electrodes made of strategically arranged materials - at the atomic level - and the results are helping drive better solid-state battery technologies.

A new study, led by University of Illinois Urbana-Champaign materials science and engineering professor Paul Braun, postdoctoral research associate Beniamin Zahiri, and Xerion Advanced Battery Corp. director of research and development John Cook, demonstrates how control over the atomic alignment of solid materials can improve the cathode-solid electrolyte interface and stability in solid-state batteries. The results are published in the journal Nature Materials.

"With batteries, it's not just materials that are important, but also how the atoms on the surfaces of those materials are arranged," Zahiri said. "Currently, solid-state battery electrodes contain materials with a large diversity of surface atom arrangements. This leads to a seemingly infinite number of electrode-solid electrolyte contact interface possibilities, all with different levels of chemical reactivity. We are interested in finding which arrangements lead to practical improvements in battery cycle life, energy density and power."

The researchers said an electrolyte's stability controls how many charging and discharging cycles a battery can handle before it starts to lose power. Because of this, scientists are in a race to find the most stable electrolyte materials.

"In the rush to find stable solid electrolyte materials, developers have sort of lost sight of the importance of what is happening in that very thin interface between electrolyte and electrode," Zahiri said. "But the stability of the electrolyte will not matter if the connection between it and the electrodes cannot be evaluated in an efficient way."

In the lab, the team built electrodes containing sodium and lithium ions with specific atomic arrangements. They found correlations between battery performance and interface atomic arrangement in both the lithium- and sodium-based solid-state batteries. They also discovered that minimizing the interface surface area and controlling the electrodes' atomic alignment is key to both understanding the nature of interface instabilities and improving cell performance.

"This is a new paradigm for how to evaluate all the important solid electrolytes available today," Cook said. "Before this, we were largely just guessing what electrode-solid electrolyte interface structures gave the best performance, but now we can test this and find the best combination of materials and atomic orientations."

As demonstrated by co-author mechanical science and engineering professor Elif Ertekin and her group, having this level of control gave the researchers the information needed to run atomic simulations that they hypothesize will lead to even better electrolyte materials in the future.

"We think this will teach us a lot about how to investigate emerging solid electronics," Braun said. "We are not trying to invent new solid electrolytes; the materials world is doing a great job with that already. Our methodology will allow others to precisely measure the interfacial properties of their new materials, something that has otherwise been very difficult to determine."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Airborne radar reveals groundwater beneath glacier

image: Aerial view of Hiawatha Glacier in northwest Greenland as seen from a flight during NASA's Operation IceBridge.

Image: 
NASA, Public domain, via Wikimedia Commons

Melting glaciers and polar ice sheets are among the dominant sources of sea-level rise, yet until now, the water beneath them has remained hidden from airborne ice-penetrating radar.

With the detection of groundwater beneath Hiawatha Glacier in Greenland, researchers have opened the possibility that water can be identified under other glaciers from the air at a continental scale, helping to improve sea-level rise projections. The presence of water beneath ice sheets is a critical component currently missing from glacial melt scenarios and may greatly affect how quickly seas rise - for example, by enabling big chunks of ice to calve from glaciers rather than staying intact and slowly melting. The findings, published in Geophysical Research Letters May 20, could drastically increase the magnitude and quality of information on groundwater flowing through the Earth's poles, which has historically been limited to ground-based surveys over small distances.

"If we could potentially map water underneath the ice of other glaciers using radar from the air, that's a game-changer," said senior study author Dustin Schroeder, an assistant professor of geophysics at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth).

The data was collected in 2016 as part of NASA's Operation IceBridge using a wide-bandwidth radar system, a newer technique that has only started being used in surveys in the last few years. Increasing the range of radio frequencies used for detection allowed the study authors to separate two radar echoes - from the bottom of the ice sheet and the water table - that would have been blurred together by other systems. While the team suspected groundwater existed beneath the glacier, it was still surprising to see their hunch confirmed in the analyses.
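The role of bandwidth follows from the standard range-resolution relation for radar sounding in ice: two reflectors can be separated only if they are farther apart than roughly c / (2B sqrt(eps)), where B is the bandwidth and eps the relative permittivity of ice. The bandwidths below are illustrative, not the IceBridge instrument's actual specifications.

```python
# Range resolution in ice: delta_r = c / (2 * B * sqrt(eps_ice)).

C = 299_792_458.0  # speed of light in vacuum, m/s
EPS_ICE = 3.17     # typical relative permittivity of glacial ice

def range_resolution_m(bandwidth_hz):
    return C / (2.0 * bandwidth_hz * EPS_ICE ** 0.5)

for b_hz in (30e6, 300e6):  # narrowband vs. wide-bandwidth system (assumed)
    print(f"B = {b_hz/1e6:>5.0f} MHz -> resolution ~ {range_resolution_m(b_hz):.2f} m")
```

At the wider assumed bandwidth, echoes separated by a few tens of centimeters resolve cleanly - the kind of margin needed to split the ice-bottom return from a water table just beneath it.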

"When you see these anomalies, most of the time they don't pan out," said lead study author Jonathan Bessette, a graduate student at the Massachusetts Institute of Technology who conducted the research as a SUNY Buffalo undergraduate through the Stanford Summer Undergraduate Research in Geoscience and Engineering Program (SURGE).

Based on the radar signal, the study team constructed two possible models to describe Hiawatha Glacier's geology: frozen ground with thawed, water-bearing material below it, or porous rock that enables drainage, like water flowing to the bottom of a vase filled with marbles. These hypotheses have different implications for how Hiawatha Glacier may respond to a warming climate.

Groundwater systems may play a more significant role than what researchers currently model in ice sheets for sea-level-rise projections, according to Schroeder. The researchers hope their findings will prompt further investigation of the possibility for additional groundwater detection using airborne radar, which could potentially be deployed on a grand scale to collect hundreds of miles of data per day.

"What society wants from us are predictions of sea level - not only now, but in futures with different greenhouse gas emission scenarios and different warming scenarios - and it is not practical to survey an entire continent with small ground crews," Schroeder said. "Groundwater is an important player, and we need to survey at the continental scale so that we can make continental-scale projections."

Credit: 
Stanford University

The entire genome from Peştera Muierii 1 sequenced

image: The skull of Peştera Muierii 1, whose entire genome has now been successfully sequenced.

Image: 
Mattias Jakobsson

For the first time, researchers have successfully sequenced the entire genome from the skull of Peştera Muierii 1, a woman who lived in today's Romania 35,000 years ago. Her high genetic diversity shows that the out-of-Africa migration was not the great bottleneck in human development; rather, the major loss of diversity occurred during and after the most recent Ice Age. This is the finding of a new study led by Mattias Jakobsson at Uppsala University, published in Current Biology.

"She is a bit more like modern-day Europeans than the individuals in Europe 5,000 years earlier, but the difference is much less than we had thought. We can see that she is not a direct ancestor of modern Europeans, but she is a predecessor of the hunter-gathers that lived in Europe until the end of the last Ice Age," says Mattias Jakobsson, professor at the Department of Organismal Biology at Uppsala University and the head of the study.

Very few complete genomes older than 30,000 years have been sequenced. Now that the research team can read the entire genome from Peştera Muierii 1, they can see similarities with modern humans in Europe while also seeing that she is not a direct ancestor. In previous studies, other researchers observed that the shape of her cranium has similarities with both modern humans and Neanderthals. For this reason, they assumed that she had a greater fraction of Neanderthal ancestry than other contemporaries, making her stand out from the norm. But the genetic analysis in the current study shows that she has the same low level of Neanderthal DNA as most other individuals living in her time. Compared with the remains from some individuals who lived 5,000 years earlier, such as Peştera Oase 1, she had only half as much Neanderthal ancestry.

The spread of modern humans out of Africa about 80,000 years ago is an important period in human history and is often described as a genetic bottleneck. Populations moved out of Africa and into Asia and Europe. The effects of these migrations can be seen even today: genetic diversity is lower in populations outside of Africa than in African populations. That Peştera Muierii 1 had high genetic diversity implies that the greatest loss of genetic diversity occurred during the last Ice Age (which ended about 10,000 years ago) rather than during the out-of-Africa migration.

"This is exciting since it teaches us more about the early population history of Europe. Peştera Muierii 1 has much more genetic diversity than expected for Europe at this time. This shows that genetic variation outside of Africa was considerable until the last Ice Age, and that the Ice Age caused the decrease in diversity in humans outside of Africa."

The researchers were also able to follow the genetic variation in Europe over the last 35,000 years and see a clear decrease in variations during the last Ice Age. The reduced genetic diversity has previously been linked to pathogenic variants in genomes being more common among populations outside of Africa, but this is in dispute.

"Access to advanced medical genomics has allowed us to study these ancient remains and even be able to look for genetic diseases. To our surprise, we did not find any differences during the last 35,000 years, even though some individuals alive during the Ice Age had low genetic diversity.

"Now we have accessed everything possible from these remains. Peştera Muierii 1 is important from a cultural history perspective and will certainly remain interesting for researchers within other areas, but from a genetic perspective, all the data is now available."

Credit: 
Uppsala University

Survival of migrating juvenile salmon depends on stream flow thresholds

image: Juvenile Chinook salmon were tagged with acoustic transmitters, allowing scientists to track them as they migrated down the Sacramento River to the sea.

Image: 
Photo by Alex McHuron

Juvenile salmon migrating to the sea in the Sacramento River face a gauntlet of hazards in an environment drastically modified by humans, especially with respect to historical patterns of stream flow. Many studies have shown that survival rates of juvenile salmon improve as the amount of water flowing downstream increases, but "more is better" is not a useful guideline for agencies managing competing demands for the available water.

Now fisheries scientists have identified key thresholds in the relationship between stream flow and salmon survival that can serve as actionable targets for managing water resources in the Sacramento River. The new analysis, published May 19 in Ecosphere, revealed nonlinear effects in the flow-survival relationship, meaning it changes in stepwise fashion, with significant jumps in survival rates at two key steps.

A threshold defined in the paper as the "historic mean" flow of 10,712 cubic feet per second (cfs) provides an especially important target for resource managers, said first author Cyril Michel, a project scientist in the Institute of Marine Sciences at UC Santa Cruz.

"We see a substantial increase in salmon survival above that level, so if we can increase stream flow to that level for critical periods of the year, it would really benefit the salmon populations," Michel said.

The researchers analyzed migration survival data from 2,436 juvenile Chinook salmon tagged with acoustic transmitters and tracked in years with different water flows, from 2013 to 2019. After identifying the key thresholds, the team then used historical data on river flows and salmon migration patterns to run simulations of different management actions.

"We wanted to see how much the salmon populations would benefit if we had enacted flows to match that threshold of 10,712 cfs," Michel said. "We found we could increase survival by a lot, sometimes doubling or tripling the survival rates in a given year, without having to spend too much water. It's a reasonable target that won't break the bank in most years."

Juvenile salmon migrate out to sea in the spring, which was historically a period of high flows in the Sacramento River. Now, however, dams and water diversions combined with seasonal reductions in flows from tributaries result in spring flows that tend to be the lowest of the year.

"Because of how we've plumbed the Central Valley, salmon now have to migrate out during low flows. So we're proposing to enact pulse flows in the spring to bring the river up to historical conditions for short periods of time," Michel said.

He noted that plans to implement pulse flows in the Sacramento River are now being developed under environmental permits renegotiated last year for the Central Valley Project, the massive federal water management project that includes the Sacramento River. An interagency group of scientists has agreed to use the 10,712 cfs threshold as the target for these pulse flows, Michel said.

"We're excited that this might actually happen this year or next, and we will be tracking survival rates to see how successful it is," he said. "This could be a tool that is used for many years to come, with real benefits for salmon populations."

Additional research on the optimal timing of pulse flows could improve their implementation, enabling adaptive management in response to environmental conditions, he added.

The study identified two other thresholds, defined as "minimum" (4,259 cfs) and "high" (22,872 cfs). Below the minimum threshold, only 3% of the tagged salmon survived the migration. Survival was 18.9% for flows between the minimum and historic mean thresholds, 50.8% between the historic mean and high thresholds, and 35.3% above the high threshold.
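
To make the stepwise shape of the flow-survival relationship concrete, the following minimal Python sketch encodes the reported thresholds and the survival rate observed in each flow regime as a simple lookup. It merely restates the published figures; it is not the authors' statistical model, and the function and constant names are our own:

# Illustrative lookup of the reported flow thresholds (cfs) and the
# migration survival rate observed in each flow regime.
MINIMUM_CFS = 4_259
HISTORIC_MEAN_CFS = 10_712
HIGH_CFS = 22_872

def reported_survival(flow_cfs: float) -> float:
    """Return the survival rate reported for the regime containing flow_cfs."""
    if flow_cfs < MINIMUM_CFS:
        return 0.030   # below the minimum threshold
    if flow_cfs < HISTORIC_MEAN_CFS:
        return 0.189   # between minimum and historic mean
    if flow_cfs < HIGH_CFS:
        return 0.508   # between historic mean and high
    return 0.353       # above the high threshold

# Example: raising flow past the historic mean more than doubles survival.
print(reported_survival(9_000))   # 0.189
print(reported_survival(12_000))  # 0.508

Run on any candidate flow, the lookup makes the management argument visible: lifting a 9,000 cfs flow above the 10,712 cfs historic mean more than doubles the expected survival rate.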

The results suggest that the main mechanism behind the flow thresholds relates to how fast migrating fish are able to move through the river and get past the hazards along the way. Travel times for fish during flows between the historic mean and high thresholds were significantly shorter than for fish experiencing all other flows.

"In most years, it's best for juvenile salmon to rear in the upper river where it's safer, and then move through the lower river as quickly as possible to reduce their exposure to predators and other stressors," Michel explained.

Credit: 
University of California - Santa Cruz

Opening up possibilities with open-top optofluidic device

image: Schematic of the co-planar light-actuated optoelectrowetting microfluidic device that features an integrated metal mesh grid. A droplet on the device surface is actuated and moved around the two-dimensional plane under the influence of an incident optical pattern.

Image: 
Jodi Loo et al. doi: 10.1117/1.JOM.1.3.034001.

Microfluidic technologies have seen great advances over the past few decades in addressing applications such as biochemical analysis, pharmaceutical development, and point-of-care diagnostics. Miniaturizing biochemical operations on lab-on-a-chip microfluidic platforms reduces sample, reagent, and waste volumes while increasing parallelization and automation. This allows for more cost-effective operation along with higher throughput and sensitivity, enabling faster and more efficient sample analysis and detection.

Optoelectrowetting (OEW) is a digital optofluidic technology based on the principles of light-controlled electrowetting that enables the actuation and manipulation of discrete droplets. OEW devices offer large-scale, real-time, and reconfigurable control of picoliter- to microliter-sized droplets, achieved by adjusting the number and size of the low-intensity optical patterns incident on the device. Because each droplet on an OEW device acts as its own bioreaction chamber, the platform also supports multiplexing, which is beneficial in applications such as single-cell analysis, genomics, and combinatorial libraries.
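
For context, droplet actuation by electrowetting is conventionally described by the Young-Lippmann relation (general electrowetting physics, not an equation taken from this study):

\cos\theta(V) = \cos\theta_0 + \frac{\varepsilon_0 \varepsilon_r V^2}{2 \gamma d}

where θ0 is the contact angle at zero applied voltage, ε0εr the permittivity of the dielectric layer, d its thickness, γ the droplet's surface tension, and V the applied voltage. In OEW, an incident light pattern locally switches a photoconductive layer, shifting where the voltage drop, and hence the change in wetting, occurs, which is what steers the droplet.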

Traditional OEW devices provide a flexible platform for chemical and biological assays, such as real-time isothermal polymerase chain reaction, using basic droplet manipulation techniques. In these devices, however, droplets are sandwiched between a bottom active OEW substrate and a top ground-electrode substrate, forcing any input/output fluidics to be routed through side openings. Although feasible, this constrains system integration.

Researchers from the University of California, Berkeley, created a single-sided, co-planar OEW device that allows individualized, parallel droplet actuation and offers easier droplet access from above, opening up more input/output configuration schemes. They achieved this by eliminating the top cover electrode of traditional OEW devices and instead integrating a metal mesh grid on the OEW substrate itself. Droplets can still move freely across the two-dimensional device surface and are now accessible from above thanks to the open-top design.

In their research, recently published in SPIE's new Journal of Optical Microsystems, the team also derived a theoretical model of the co-planar OEW device to better understand how the integrated metal mesh grid affects device and droplet performance. Insights from this model were used to optimize the co-planar device's structure and operation. The researchers demonstrated basic droplet manipulation, including individual droplet operations in parallel, merging of multiple droplets, and simultaneous handling of droplets with varying volumes.

The co-planar device improves on the traditional OEW device's droplet actuation performance, reaching speeds more than two times faster, up to 4.5 cm/s. These higher droplet speeds, achieved despite a marginally lower effective force than in the traditional device, can be partly attributed to the reduced friction that comes from eliminating the top cover.

In addition, the team demonstrated that co-planar OEW devices can operate at a 95% lower light intensity. To showcase the benefit of exposed droplets for a wider range of input/output configurations, a droplet-on-demand dispensing system was integrated above the co-planar OEW device to inject, collect, and position individual droplets and to form large-scale droplet arrays of up to 20 by 20, covering the whole device surface. Larger OEW devices should accommodate even more droplets on chip.

With this research, the team has developed an OEW platform for reliable droplet manipulation that can accomplish most basic biological and chemical benchtop techniques. The co-planar OEW device expands the flexibility and range of possibilities for optofluidic technologies to realize greater system integration capabilities and biological and chemical applications.

Credit: 
SPIE--International Society for Optics and Photonics

Clearing the air: A reduction-based solution to nitrogen pollution with a novel catalyst

image: A new iron catalyst helps preferentially reduce nitric oxide to hydroxylamine, opening doors to pollution control and clean energy.

Image: 
Gwangju Institute of Science and Technology (GIST)

Our reliance on fossil fuels as a primary energy source has pushed air pollution to an all-time high, resulting in numerous environmental and health concerns. Among the major pollutants, the accumulation of nitrogen oxides (NOx) can cause severe respiratory disease and imbalances in the Earth's nitrogen cycle. Reducing NOx accumulation is, therefore, an issue of utmost importance.

Recently, the conversion of NOx into harmless or even useful nitrogen products has emerged as a promising strategy. Particularly appealing to scientists is the reduction of NOx to hydroxylamine (NH2OH), which can be utilized as a renewable source of energy.

The "make-or-break" step that determines the formation of hydroxylamine is the catalytic electrochemical reduction of nitric oxide (NO), which can either yield hydroxylamine or nitrous oxide (N2O), depending on the electrolyte pH and electrode potential. Studies show that for hydroxylamine formation to dominate over N2O formation, very acidic electrolytes with a pH less than 0 are required. However, such a harshly acidic environment rapidly degrades the catalyst, limiting the reaction. "The development of a new catalyst with high activity, selectivity, and stability is the next challenge," says Prof. Chang Hyuck Choi from the Gwangju Institute of Science and Technology (GIST) in Korea where he works on the catalysis of electrochemical reactions.

In a recent study published in Nature Communications, Prof. Choi and his colleagues from Korea and France investigated NO reduction in the presence of a new iron-nitrogen-doped carbon (Fe-N-C) catalyst made of isolated FeNxCy moieties bonded to a carbonaceous substrate. The catalyst was chosen for its high selectivity for the NH2OH pathway as well as its resistance to extremely acidic conditions.

The team performed in operando (i.e., during the reaction) spectroscopy and electrochemical analysis of the catalyst to determine its catalytic site and the pH dependence of NH2OH production.

They identified the catalyst's active sites as the ferrous moieties bonded to the carbon substrate, where the rate of NH2OH formation showed a peculiar increase with decreasing pH; the team attributed this peculiarity to an uncertain oxidation state of NO. Finally, they achieved efficient (71%) NH2OH production in a prototypical NO-H2 fuel cell, establishing the catalyst's practical utility. Moreover, the catalyst showed long-term stability, with no signs of deactivation even after operating for over 50 hours.

The approach not only reduces harmful air pollutants, but also provides a useful byproduct that may find use in ushering in a renewable energy society. "Apart from the applications of hydroxylamine in the nylon industry, it can also be used as an alternative hydrogen carrier. Thus, the new catalyst will not only help reduce the amount of NOx pollutants in our atmosphere but also lead us to a renewable energy future," Prof. Choi explains.

We can breathe easy knowing that the team's findings take us a few steps closer to a pollution-free renewable energy society.

Credit: 
GIST (Gwangju Institute of Science and Technology)

Multi-story buildings made of wood sell for 9% more than other construction in Helsinki

image: A block of flats made out of wood in Viikki, a neighbourhood in the Finnish capital of Helsinki.

Image: 
Hans Koistinen/Metsä Wood

Building more homes and buildings with wood has been on the radar for years as a way to offset carbon emissions, though construction companies have been hesitant to adopt the material more broadly. A study at Aalto University in Finland is now the first to show that building with wood can be a sound investment.

The team analysed statistical data from real estate sales in the Finnish capital of Helsinki and two suburbs, from 1999 to 2018. Of these, timber-built homes made up 2.23% of cases. The findings show that multi-storied buildings made out of wood sold for an average of 8.85% more than those made from other materials.

Previous research has pointed to perceptions of higher costs in wood construction, and until now there have been no definitive results on the material's economic feasibility. Since many things can affect price, the researchers used regression analysis to control for other potential factors.

'At first glance, multi-story housing blocks made out of wood appear to be cheaper on average, but when we look more closely at the data and control for location, we see that it's economically advantageous to use wood. The results show that wood-based housing is almost 10% more expensive per square metre than concrete-based housing in the same area,' explains Seppo Junnila, professor of real estate business at Aalto University.
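
The study's exact specification is not reproduced here, but a hedonic regression of this general shape, regressing log price on a wood-construction indicator plus location controls, is the standard way to isolate such a premium. The Python sketch below runs on synthetic data with assumed variable names, purely to illustrate the approach; it is not the study's actual model or data:

# Illustrative hedonic-regression sketch (synthetic data; variable names
# are assumptions for illustration, not the study's specification).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "wood": rng.random(n) < 0.022,       # ~2.2% timber-built, as in the sample
    "district": rng.integers(0, 10, n),  # stand-in for location controls
})
district_effect = rng.normal(0.0, 0.2, 10)[df["district"].to_numpy()]

# Construct a log price per square metre with a ~9% wood premium baked in.
df["log_price_per_sqm"] = (
    8.0 + district_effect + 0.085 * df["wood"].to_numpy() + rng.normal(0.0, 0.1, n)
)

# Hedonic regression: log price on the wood indicator plus district dummies.
fit = smf.ols("log_price_per_sqm ~ wood + C(district)", data=df).fit()
print(fit.params["wood[T.True]"])  # recovers roughly the 0.085 log premium

A coefficient of about 0.085 in log-price space corresponds to roughly a 9% price premium, the same scale as the 8.85% effect reported in the study.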

Price differences between wood and other construction materials were not seen in the suburbs studied. The researchers say this may have to do with the fact that timber-based construction in Helsinki occurs in cheaper-than-average areas, where people may be more willing to pay for the eco-friendly material.

'These days many consumers value ecological choices and, at the same time, want to communicate their green preferences to others. Our results show that wooden buildings are located more often in areas with lower socio-economic status - wood is what boosts their prices,' explains doctoral student Ilmari Talvitie.

'Our previous research shows that if you buy a flat you're more concerned about its environmental footprint than if you rent. An owner typically invests more in ways to improve performance, like energy-saving options. This principle seems to hold true here: buyers are willing to pay more for an eco-friendly choice, even if they can't afford to live in the most expensive neighbourhoods of the city,' adds Junnila.

While Finland's construction industry has been hesitant to invest in timber construction, the country's government sees its potential: the Ministry of the Environment aims to have 45% of new multi-storied buildings constructed with wood by 2025.

'Legislation alone isn't enough to meet these goals. We need people to want to live in these kinds of buildings, and construction firms need to see them as a business opportunity,' says Junnila.

Construction firms hesitate because of a lack of skills, resistance to change and a concern that wooden construction will be expensive, say the researchers.

'The cost of construction is just half of the cost of a home, so if the consumer is ready to pay nearly 10% more for their wooden home, it's an extremely worthwhile investment for the builder,' Junnila emphasises.
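
As a back-of-the-envelope illustration (round figures assumed, not data from the study): if a flat sells for 500,000 euros and construction accounts for half of that, 250,000 euros, then a 9% sale-price premium adds 45,000 euros of revenue against an unchanged construction budget, an uplift of roughly 18% relative to the builder's construction cost.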

Previous research has shown that boosting wood construction in cities is an effective way of storing carbon. Deforestation, however, continues to be a hot topic worldwide.

'Wood construction is an excellent option in countries like Finland and other Nordic countries, where legislation requires that cut trees be replaced. It's also worth remembering that, globally speaking, deforestation happens for other reasons -- not wood construction. If we need a new building, wood is indisputably a good choice for our planet,' says Jussi Vimpari, a post-doctoral researcher at Aalto University.

Most importantly, in a world increasingly fixated on carbon neutrality, a shift to wood construction can help cities around the planet meet their goals.

'Building with wood is essentially the only way for cities to store carbon - by definition they don't have vast amounts of nature needed to sink carbon. The good news is that some international investment companies have already realised the potential of timber construction, and we can only expect this interest to grow,' says Junnila.

Credit: 
Aalto University