Tech

Study of past South Asian monsoons suggests stronger monsoon rainfall in the future

image: The research vessel JOIDES Resolution drilled sediment cores from the Bay of Bengal, which were used to reconstruct past monsoon rainfall. Those data were used to test predictions of future monsoon rain as the climate changes, and they suggest that future rainfall could increase as CO2 levels rise.

Image: 
Courtesy of Steven Clemens

PROVIDENCE, R.I. [Brown University] -- A new study of monsoon rainfall on the Indian subcontinent over the past million years provides vital clues about how the monsoons will respond to future climate change.

The study, published in Science Advances, found that periodic changes in the intensity of monsoon rainfall over the past 900,000 years were associated with fluctuations in atmospheric carbon dioxide (CO2), continental ice volume and moisture import from the southern hemisphere Indian Ocean. The findings bolster climate model predictions that rising CO2 and higher global temperatures will lead to stronger monsoon seasons.

"We show that over the last 900,000 years, higher CO2 levels along with associated changes in ice volume and moisture transport were associated with more intense monsoon rainfall," said Steven Clemens, a professor of geological sciences (research) at Brown University and lead author of the study. "That tells us that CO2 levels and associated warming were major players in monsoon intensity in the past, which supports what the models predict about future monsoons -- that rainfall will intensify with rising CO2 and warming global temperature."

The South Asian monsoon is arguably the single most powerful expression of Earth's hydroclimate, Clemens says, with some locations getting several meters of rain each summer. The rains are vital to the region's agriculture and economy, but can also cause flooding and crop disruption in years when they're particularly heavy. Because the monsoons play such a large role in the lives of nearly 1.4 billion people, understanding how climate change may affect them is critical.

For several years, Clemens has been working with an international team of researchers to better understand the major drivers of monsoon activity. In November 2014, the research team sailed aboard the research vessel JOIDES Resolution to the Bay of Bengal, off the coast of India, to recover sediment core samples from beneath the sea floor. Those core samples preserve a record of monsoon activity spanning millions of years.

The rainwater produced by the monsoons each summer eventually drains off the Indian subcontinent into the Bay of Bengal. The runoff creates a layer of dilute seawater in the bay that rides atop the denser, more saline water below. The surface water is a habitat for microorganisms called planktonic foraminifera, which use nutrients in the water to construct their shells, which are made of calcium carbonate (CaCO3). When the creatures die, the shells sink to the bottom and become trapped in sediment. By taking core samples of sediment and analyzing the oxygen isotopes in those fossils, scientists can divine the salinity of the water in which the creatures lived. That salinity signal can be used as an indicator of changing rainfall amounts over time.
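As a rough illustration of the proxy logic described above (not the study's actual calibration), the short sketch below converts hypothetical seawater oxygen-isotope values into salinity estimates using an assumed linear relation; every number and coefficient in it is invented for illustration.

```python
# Illustrative sketch only: lower seawater d18O in the foraminifera record implies
# fresher surface water, i.e. stronger monsoon runoff into the Bay of Bengal.
# The slope and intercept below are assumed values, not the study's calibration.

def salinity_from_d18O_sw(d18O_sw, slope=0.5, intercept=35.0):
    """Toy linear relation: fresher water -> lower d18O_sw (permil) and lower salinity (psu)."""
    return intercept + d18O_sw / slope

# Hypothetical down-core seawater d18O values, after removing temperature and
# global ice-volume effects from the shell-calcite signal.
d18O_sw_record = [-0.2, -0.8, -1.5, -0.6]

salinities = [salinity_from_d18O_sw(x) for x in d18O_sw_record]
print(salinities)  # lower-salinity intervals point to intervals of heavier monsoon rainfall
```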

Other data from the samples complement the foraminifera data. River runoff into the bay brings sediment from the continent with it, providing another indicator of rain intensity. The carbon isotopic composition of plant matter washed into the ocean and buried in sediment offers yet another rainfall-related signal that reflects changes in vegetation type. The hydrogen isotope composition of waxes on plant leaves varies in different rainfall environments, and that signature can be reconstructed from sediment cores as well.

"The idea is that we can reconstruct rainfall over time using these proxies, and then look at other paleoclimate data to see what might be the important drivers of monsoon activity," Clemens said. "That helps us to answer important questions about the factors driving the monsoons. Are they primarily driven by external factors like changes in Earth's orbit, which alter the amount of solar radiation from the sun, or are factors internal to the climate system like CO2, ice volume and moisture-transporting winds more important?"

The researchers found that periods of more intense monsoon winds and rainfall tended to follow peaks in atmospheric CO2 and low points in global ice volume. Cyclical changes in Earth's orbit that alter the amount of sunlight each hemisphere receives played a role in monsoon intensity as well, but on their own could not explain monsoon variability. Taken together, the findings suggest that monsoons are indeed sensitive to CO2-related warming, which validates climate model predictions of strengthening monsoons in relation to higher CO2.

"The models are telling us that in a warming world, there's going to be more water vapor in the atmosphere," Clemens said. "In general, regions that get a lot of rain now are going to get more rain in the future. In terms of the South Asians monsoons, that's entirely consistent with what we see in this study."

Credit: 
Brown University

Study finds lower mortality rate for men at high risk for death from prostate cancer who received early postoperative radiation therapy

In a large, international retrospective study, men at high risk for death from prostate cancer had a significant reduction in all-cause mortality if treated with radiation shortly after surgery.

Prostate cancer is one of the most common forms of cancer among men, and about 1-in-8 of them will be diagnosed with it during their lifetime. While most men are cured with available treatment, there remains a group at high risk for death. In the United States in 2020, 33,330 men died from the disease, making prostate cancer the second leading cause of cancer death for men in this country. Therefore, among those at highest risk of recurrence, metastasis, and death from prostate cancer, understanding what steps can be taken to lower these risks could save and extend lives.

Early results from three randomized, clinical trials reported no benefit to giving adjuvant radiation therapy (i.e. when the prostate-specific antigen (PSA) level is not measurable) rather than early salvage radiation therapy (i.e. when the PSA level becomes measurable, signaling recurrence). But these three studies had very small numbers of men at high risk for death from prostate cancer. A new, retrospective study focuses on men with high-grade prostate cancer that extends outside the prostate and/or has spread into the lymph nodes. For these men, who are at high risk of dying from the disease, there was a significant reduction in the risk of death with adjuvant radiation therapy (aRT) use, suggesting that it should be offered to them. Results are published in the Journal of Clinical Oncology.

"We found that men at highest risk for dying from prostate cancer may lose the chance for cure if we wait for the PSA to become measurable before delivering radiation after surgery," said corresponding and senior author Anthony D'Amico, MD, PhD, professor and chief of Genitourinary Radiation Oncology at Brigham and Women's Hospital and Dana-Farber Cancer Institute. "While three previous randomized studies have largely encompassed men at very low risk of dying from prostate cancer after surgery, men at high risk of dying from prostate cancer have the most to lose from delaying the use of early and potentially life-saving radiation therapy. We focused on these patients and on the sentinel endpoint of mortality."

To conduct their study, D'Amico and colleagues leveraged a cohort of more than 26,000 men treated between 1989 and 2016 across the U.S. and Germany. The cohort included 2,424 patients who were at high risk for dying from prostate cancer despite surgery -- men with a Gleason score of 8-10 and extension of the cancer beyond the prostate capsule and/or into pelvic lymph nodes.

The researchers found that adjuvant radiation therapy was associated with significantly lower all-cause mortality. Among men with high-grade prostate cancer that extended outside the prostate, the risk of death was reduced by two-thirds. Ten years after radical prostatectomy, the rate of all-cause mortality was 5 percent among those who received adjuvant radiation therapy, compared to 22 percent among those who had received early salvage radiation therapy (sRT). Among those whose cancer had spread to the lymph nodes, a group many consider incurable, the risk of death was reduced by one-third.

The authors note that their study is retrospective in nature, and while they took many steps to adjust and control for relevant patient- and cancer-related factors, some degree of selection bias may exist. For example, men selected for adjuvant as compared with early sRT might have been healthier. Therefore, it is possible that the risk reduction in death could overestimate the true risk reduction.

"For those men at high risk of dying from prostate cancer despite surgery, adjuvant radiation therapy rather than waiting until the PSA is measurable appears to be able to reduce all-cause mortality," said D'Amico. "If we want to make a global impact in driving down the number of people who die from prostate cancer, it's important to examine what can be done to help these men who are most at risk for dying from this disease."

Credit: 
Brigham and Women's Hospital

Ten-fold increase in carbon offset cost predicted

image: Note: "Nationally Determined Contribution (NDC) adjustment" refers to the scenario where governments take responsibility for reducing emissions in their Paris commitments, so only higher cost abatement options are available to the voluntary market.

Image: 
Trove Research

The cost of offsetting corporate carbon emissions needs to increase ten-fold to drive meaningful climate action, says a landmark report by Trove Research and UCL.

Current prices of carbon offsets are unsustainably low and need to increase significantly to encourage greater investment in new projects that remove carbon from the atmosphere.

If prices stay low, companies could be accused of greenwashing their emissions, as real emissions reductions and carbon removals cost more than today's prices.

Prices of carbon credits used by companies to offset their emissions are currently low, due to an excess of supply built up over several years, together with issues over whether payments for credits really result in additional reductions in carbon emissions.

According to the research, titled Future Demand, Supply and Prices for Voluntary Carbon Credits - Keeping the Balance, without this surplus, prices would be around $15/tCO2e higher than the $3-5/tCO2e seen today.

The research shows, however, that the surplus will not last forever, with demand for carbon credits expected to increase five to ten-fold over the next decade as more companies adopt Net Zero climate commitments.

This growth in demand should see carbon credit prices rise to $20-50/tCO2e by 2030, as more investment is required in projects that take carbon out of the atmosphere in the long-term. These prices are needed, for example, to incentivise landowners to forgo income from agriculture and instead preserve forests and plant trees.

With a further increase in demand expected by 2040 and 2050, carbon credit prices would rise to more than $50/tCO2e.

If governments successfully reduce emissions through domestic policies, fewer carbon credits will be available to businesses through the voluntary market. This would increase carbon credit prices further, potentially reaching $100/tCO2e.

If carbon credit prices remain significantly below these forecast levels, companies could be open to criticisms of greenwashing, claiming credit for emission reductions that would have been undertaken anyway.

The analysis also shows that the contribution of the voluntary market to reducing world emissions needs to be seen in perspective. Even at prices of $100/tCO2e, the technologies assessed in this study (reducing deforestation, forest restoration, CCS, BECCS and renewables in least developed countries) could deliver around 2bn tCO2e per year of emission reductions on average between now and 2050.

This is about 4% of world greenhouse gas emissions, and 10% of the gap between global "business as usual" emissions in 2030 and the pledges made under the Paris Agreement for that year, showing that the market for offsets will be modest compared to the economy-wide emissions reductions needed to reach the Paris targets and net zero by 2050.
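The "about 4%" figure follows from simple arithmetic if global greenhouse gas emissions are taken to be roughly 50 billion tCO2e per year, a commonly cited order of magnitude that is assumed here rather than drawn from the report:

```python
# Rough arithmetic behind the "about 4%" share quoted above.
voluntary_abatement = 2e9    # tCO2e per year deliverable at $100/tCO2e (figure from the report)
global_emissions = 50e9      # tCO2e per year, assumed order of magnitude

print(f"{voluntary_abatement / global_emissions:.0%}")  # -> 4%
```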

However, over the next decade the voluntary market will provide a valuable financing mechanism to support the protection of existing forests and restoring degraded habitats, providing immediate climate and biodiversity benefits while other technologies that can remove carbon from the atmosphere are scaled up.

Guy Turner, CEO of Trove Research and lead author of the study, said: "It is encouraging to see so many companies setting Net Zero and Carbon Neutral climate targets. What this new analysis shows is that these companies need to plan for substantially higher carbon credit prices and make informed trade-offs between reducing emissions internally and buying credits from outside the company's value chain."

Co-author of the study Professor Mark Maslin (UCL Geography) said: "Customers, clients, investors and employees all want companies to become more sustainable and achieve net zero carbon as soon as possible. Even with ambitious carbon reduction plans there are some company emissions that are currently unavoidable and this is where carbon offsetting is essential. But everyone wants a carbon credit system that is reliable and really does remove carbon from the atmosphere - what this groundbreaking report shows is that this will cost significantly more than companies are paying now."

Co-author of the study Professor Simon Lewis (UCL Geography) said: "The current market in carbon credits is the wild-west, where too often anything goes. A clean-up and independent regulation is required, which will increase the price of carbon credits. This is because in reality it is costly to remove carbon dioxide from the atmosphere. Overall it will be cheaper in the long-run to invest in moving to zero emissions rather than relying on offsets. But for those emissions that remain, the true price of removing carbon from the atmosphere must be paid, as the alternative is greenwash."

Credit: 
University College London

Multisensory facilitation near the body in all directions

image: A virtual sound source approached or receded from the front, rear, left, or right of the body. The participants were asked to quickly and accurately detect the tactile sensation of a vibrator attached to their chest, regardless of the sound. When the sound source was approaching from any direction and was closer to the body, the vibration was detected faster.

Image: 
COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.

Details:
Peripersonal space (PPS) is defined as the space near the body within which we can reach external objects and be reached by others. It has the special function of multisensory facilitation. A research team at Toyohashi University of Technology, in collaboration with researchers at Keio University and the University of Tokyo, investigated PPS representation in the front, rear, left, and right directions by audio-tactile multisensory integration using tactile detection with task-irrelevant approaching and receding sounds. They found that the tactile stimulus was detected faster near the body space than far from it when sound approached from any direction, but not when it receded. Thus, peripersonal representations exist with approaching sound, irrespective of the direction of approach. This study was published in Scientific Reports on May 28, 2021.

It is important for human and non-human primates to detect sensory inputs that signal a present or imminent threat within the space immediately surrounding the body, that is, PPS. However, debate remains about the shape and extent of PPS.

In the experiment, the researchers investigated the PPS representations in four directions (front, rear, left, and right), while previous studies focused on differences between the front and rear or left and right directions.

Because humans can hear sounds from any direction, the researchers used both approaching and receding task-irrelevant sounds in the experiment. Participants were asked to respond as quickly as possible when a tactile stimulus was applied to a vibrator on their chest. The timing of the tactile stimulus was varied relative to the virtual sound-source location, while the observers were blindfolded during the experiment.

The results indicated that, when sound approached, the observers responded to the tactile stimulus faster when the sound position was closer to them than when it was farther away. Thus, the participants could integrate the multisensory stimuli (tactile and task-irrelevant approaching sound stimuli) near the body. This multisensory facilitation effect in PPS diminished or weakened with the receding sound, which may be because an approaching sound can signal an imminent threat, while a receding sound poses no danger.
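A minimal sketch of that kind of comparison, using invented reaction times rather than the study's data, might look like the following:

```python
# Toy comparison of tactile reaction times (ms) by sound position; all values invented.
import statistics

rt_sound_near = [412, 405, 398, 420, 401]   # approaching sound close to the body
rt_sound_far  = [455, 448, 462, 440, 451]   # approaching sound far from the body

facilitation = statistics.mean(rt_sound_far) - statistics.mean(rt_sound_near)
print(f"Vibration detected about {facilitation:.0f} ms faster when the sound is near the body")
```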

Therefore, the PPS representation is nearly circular around the trunk of the body, which implies that humans can detect a threat from any direction quickly by utilizing approaching sound information within PPS.

Future Outlook:
The research team investigated the PPS representations in the front, rear, left, and right directions, but not in the up and down directions. PPS representations in three-dimensional (3-D) space should be investigated in future studies because audio information is available and potential threats exist in all 3-D directions.

Credit: 
Toyohashi University of Technology (TUT)

Substantial carbon dioxide emissions from northern peatlands drained for crop cultivation

image: Arctic peatland in Svalbard

Image: 
Angela Gallego-Sala

A new study shows that substantial amounts of carbon dioxide were released during the last millennium because of crop cultivation on peatlands in the Northern Hemisphere.

Only about half of the carbon released through the conversion of peat to croplands was compensated by continuous carbon absorption in natural northern peatlands.

Peatlands are a type of wetland that stores more organic carbon than any other type of land ecosystem in the world.

Due to waterlogged conditions, dead plant materials do not fully decay and carbon accumulates in peatlands over thousands of years.

Therefore, natural peatlands help to cool the climate by capturing carbon dioxide (CO2) from the atmosphere through photosynthesis and trapping carbon in soils.

However, artificial drainage of peatlands for agriculture aerates the soil and enhances the decay of organic matter, rapidly releasing carbon into the atmosphere.

Peatlands are a missing piece of the carbon cycle puzzle; little is known about how much carbon has been released due to drainage and conversion of peatland to cropland during the historical sprawl of agriculture, and about the role of cultivated peatlands versus natural peatlands.

The new international study, led by INRAE and LSCE, and including the University of Exeter, quantified CO2 fluxes in natural and cultivated peatlands between 850 and 2010.

The study provides the first detailed estimates of historical carbon losses from cultivated northern peatlands.

"We incorporated peatland hydrological and carbon processes into a process-based land surface model," said Chunjing Qiu who developed the model and designed the study, and worked at the Institut National de de recherche pour l'agriculture, l'alimentation et l'environnement (INRAE) and the Laboratory for Sciences of Climate and Environment (LSCE) in France.

"This model is one of the first to simulate natural peatland and the conversion of peatland to cropland and resultant CO2 emissions.

"We also looked at how the carbon emission rates of cultivated peatlands vary with time after conversion.

"High CO2 emissions can occur after the initial drainage of peatland, but then, the emission rates decrease with time because of depletion of labile carbon and increasing recalcitrance of the remaining material."

Professor Angela Gallego-Sala, of Exeter's Global Systems Institute, said: "This study highlights how much carbon is lost if you drain peatlands, as we have done to many peatlands in Europe, but it also reminds us how important it is to make sure we manage peatlands appropriately."

The study shows that cultivated northern peatlands emitted 72 billion tons of carbon over 850-2010, and 40 billion tons over the period 1750-2010.

According to the authors, this indicates that historical CO2 emissions caused by land-use changes are greater than previously estimated.

It also implies an underestimation of historical carbon uptake by terrestrial ecosystems if carbon emissions from cultivated peatlands are ignored.

"Carbon emissions from drainage of peatlands are a source of concern for national greenhouse gas budgets and future emission trajectories," said Philippe Ciais from LSCE, who co-lead the study with Chunjing Qiu.

"However, we have only a very few observations, and peatland drainage and cultivation are not explicitly considered by bookkeeping models and dynamic global vegetation models used to compute the annual carbon budget.

"Emissions from cultivated peatlands are omitted in previous global carbon budget assessments.

"Our study brings new and important implications for a better understanding of the global carbon budget."

Credit: 
University of Exeter

Preventing suicide among a 'hidden population' in public housing

COLUMBUS, Ohio - New research suggests that African American families living in public housing are a "hidden population" when it comes to national suicide prevention efforts.

The study showed 11% of Black teens and young adults living in a mid-Atlantic public housing development reported that in the previous 12 months, they had made a plan to die by suicide.

The finding fits with what previous research has shown: that African American youths are the fastest-growing group engaging in suicidal behavior and dying by suicide, and have had the largest increase in suicide death rate of any racial or ethnic minority group, from 2.55 per 100,000 in 2007 to 4.82 per 100,000 in 2017.
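Those figures correspond to an increase of roughly 89 percent over the decade:

```python
# Simple arithmetic on the suicide death rates quoted above.
rate_2007, rate_2017 = 2.55, 4.82                     # deaths per 100,000
print(f"{(rate_2017 - rate_2007) / rate_2007:.0%}")   # -> 89%
```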

Males were more likely than females to have come up with a suicide plan, and certain family dynamics increased the chances a youth would engage in suicidal behavior: mothers who were currently incarcerated or fathers with a history of alcohol abuse.

Researchers suggest the findings warrant expansion of the types of locations that national suicide prevention experts have targeted as the best places to deliver prevention programs. Rather than basing interventions at community hospitals or schools, the researchers argue, culturally tailored suicide-prevention interventions should be offered within public housing communities themselves as well as these other locations.

Though public housing was originally envisioned as a temporary residence for transitory families, societal changes - the collapse of manufacturing jobs, the crack cocaine epidemic and welfare-to-work mandates, among others - combined to leave most families in these developments without the means to move out.

"I call it a 'constellation of correlations.' There was no more transition and these communities were devastated, and as a result, you see a 'Lord of the Flies' type of narrative where children were unintentionally left to their own devices," said Camille R. Quinn, lead author of the study and assistant professor of social work at The Ohio State University.

"Today, even though there is not as much drug trafficking, it is still part of the tapestry in these communities and that has definitely left an imprint. And parents and their children are likely living with the aftermath," Quinn said. "If either parent is in or out of the prison system, or has charges or offenses on their record, that makes it harder for them to find employment, and that makes it difficult for them to do the best they can for their children."

The study is published online in the Journal of Racial and Ethnic Health Disparities.

This study used select data from a larger research project examining the association between neighborhood factors and health risk behaviors among residents of public housing. The sample, a subset of the participants in the initial study, included 190 African American youths and young adults between ages 15 and 24.

Quinn and colleagues analyzed results from survey questions asking the youths if they had made a plan to attempt suicide in the past 12 months, if either parent were currently or had previously been in jail, and if either parent had ever had problems with illegal substances or consuming too much alcohol.

Almost 34% of fathers and 8.4% of mothers were incarcerated at the time the survey data were collected, and more dads than moms had had drug and alcohol problems. Statistical analysis showed that a father's past alcohol problem or a mother's current incarceration had the strongest association with a youth's plan to die by suicide. Males were significantly more likely than females to have planned a suicide.
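The release does not specify the statistical model used; associations like these are commonly estimated with logistic regression, and a minimal sketch of that approach, with a hypothetical data file and column names, is shown below.

```python
# Minimal logistic-regression sketch; the CSV file and column names are hypothetical
# and the actual analysis in the paper may differ.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("public_housing_youth_survey.csv")  # hypothetical survey extract

# Outcome: made a suicide plan in the past 12 months (0/1).
# Predictors: youth sex, mother's current incarceration, father's alcohol-problem history.
model = smf.logit(
    "suicide_plan ~ C(male) + C(mother_incarcerated) + C(father_alcohol_history)",
    data=df,
).fit()

print(model.summary())  # exponentiated coefficients give odds ratios
```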

"It's significant that so many males reported a plan to die by suicide, which is really stark," said Quinn, adding that this finding matches patterns seen in previous research: Girls and women as a whole are much more likely to think about, plan and attempt suicide but survive, while young men who have decided they are going to die are more likely to follow through.

The researchers cite U.S. Census data showing that public housing constitutes almost a quarter of households in the most highly segregated and lowest-opportunity neighborhoods in the United States, and African American households represent 51% of the families living in public housing in these neighborhoods. Of those families, 29% have been contacted by child protective services - suggesting these housing communities are marked by violence and social problems, including parental substance misuse and jail time, which have been linked in previous research with youths' suicidal behavior.

The study findings imply that African American families living in public housing should be targeted for family-centered, evidence-based interventions delivered in their residential communities, the researchers say, which could lead to development of the most effective suicide prevention practices for this specific population.

The National Action Alliance for Suicide Prevention Research Prioritization Task Force published a plan in 2014 to reduce suicide attempts and deaths by 40% or more by 2024. The plan recommended reaching "boundaried" populations by delivering interventions in hospital emergency rooms, schools, correctional facilities, and mental health and substance abuse centers - systems from which families living in public housing may be isolated and therefore missed by suicide prevention outreach.

In the meantime, Quinn is investigating potential factors beyond the family that could influence - positively or negatively - suicidal behavior in African American teens and young adults living in public housing.

"What impact might school have, or peers?" she said. "In this paper, we don't even know whether or not the young people in this sample have any involvement with any system - child welfare, special education or juvenile justice. We would guess that if they were, that would have implications for what kind of considerations might be made for treatment."

Credit: 
Ohio State University

Plant competition during climate change

How plants cope with stress factors has already been broadly researched. Yet what happens when a plant is confronted with two stressors simultaneously? A research team working with Simon Haberstroh and Prof. Dr. Christiane Werner of the Chair of Ecosystem Physiology at the Institute of Forest Sciences and Natural Resources (UNR) of the University of Freiburg is investigating this. Together with colleagues from the Forest Research Center of the School of Agriculture of the University of Lisbon in Portugal and the Institute of Meteorology and Climate Research at the Karlsruhe Institute of Technology - KIT, they have published their findings in the specialist journal "New Phytologist."

The researchers set up a field study in the Park Tapada Real in the small Portuguese town of Vila Viçosa. The focus was on how cork oak (Quercus suber) handles two stressors: the first being extreme drought; and the other, the invasive plant species gum rockrose (Cistus ladanifer). The study has great relevance because both stress factors are currently clearly on the increase. At the same time, there was a gap in research on the issue. Researchers have up to now rarely looked at how different, interacting stress factors influence ecosystems.

The researchers were in part surprised by their findings. "The factors interacted more dynamically than we expected," says Haberstroh, who did the investigative work for his doctoral thesis. During wet years, the interacting stressors didn't cause any significant changes in the cork oak, while in dry conditions, the factors either amplified or buffered each other. Another surprise was that the cork oak, despite the double burden, recovered better than expected after extreme drought. The researchers observed that this happened above all when the invasive gum rockrose shrubs were seriously compromised by the drought as well. The team will continue its work in Portugal to gather more data and look at long-term trends.

"These new research findings contribute to better understanding and more expedient care of ecosystems," Haberstroh explains. "Using them we can, for example, develop rules for particularly dry years, which is a central issue in times of climate change," he says.

Credit: 
University of Freiburg

Geologist identifies new form of quasicrystal

LOWELL, Mass. - A UMass Lowell geologist is among the researchers who have discovered a new type of manmade quasicrystal created by the first test blast of an atomic bomb.

The formation holds promise as a new material that could one day help repair bone, insulate heat or convert heat to electricity, among other uses, according to UMass Lowell Prof. G. Nelson Eby, a member of the university's Environmental, Earth and Atmospheric Sciences Department.

Eby is a member of the research team that identified the quasicrystal substance inside samples of trinitite they examined that were collected from the debris of the first atomic bomb detonated by the U.S. Army on July 16, 1945 in the New Mexico desert. Also known as atomic rock, trinitite is a glassy material produced by the extreme heat and pressure unleashed by detonated atomic devices. The rock gets its name from the word "trinity," the U.S military's code term for the first nuclear test blast.

Naturally occurring quasicrystals have been found in meteorites and in structures impacted by meteorite strikes; researchers first discovered quasicrystals in an aluminum-manganese alloy in the early 1980s. While scientists have deliberately created quasicrystals in the laboratory since then, the trinitite sample represents the oldest known quasicrystal created, albeit unintentionally, by human activity, according to Eby, who lives in Burlington.

The quasicrystal the researchers discovered in the trinitite is shaped as an icosahedron, a solid, 3D structure with 20 faces. It is composed of silicon, copper, calcium and iron traceable to source materials near the bomb site, which were drawn into the explosion along with the desert sand by its enormous force.

"Quasicrystals are strange crystalline forms that do not follow the normal laws of crystal symmetry. The tremendous pressure and temperature generated by an atomic detonation can lead to new forms of quasicrystals, such as the one we identified that cannot be produced in a laboratory," Eby said.

The research team's findings were published last month in the academic journal Proceedings of the National Academy of Sciences. Along with Eby, scientists from the University of Florence in Florence, Italy; the California Institute of Technology; Los Alamos National Laboratory; Princeton University; and a researcher working independently contributed to the project.

Eby believes the growing understanding of the conditions under which various types of quasicrystals form can help scientists design them for specific purposes, such as heat insulation, converting heat into electricity, bone repair and use in prosthetics.

An understanding of trinitite, which Eby and his students study in his UMass Lowell laboratory, is also vital, according to Eby.

"Because of concerns about the proliferation and possible use of atomic weapons by rogue nations and terrorist groups, over the past decade, forensic studies of radioactive elements contained in trinitite have been conducted by a number of academic and federal laboratories and institutions, including UMass Lowell," he said.

Such study is essential should scientists be called upon to assist in investigations of atomic activity, according to Eby.

"Materials recovered from a detonated atomic device would most likely contain remnants of the bomb, and knowing the relationship between glass chemistry and radioactive elements in the materials would be useful in characterizing the device and ultimately identifying the perpetrators," he said.

Credit: 
University of Massachusetts Lowell

Vitamin D may not protect against COVID-19, as previously suggested

While previous research early in the pandemic suggested that vitamin D cuts the risk of contracting COVID-19, a new study from McGill University finds there is no genetic evidence that the vitamin works as a protective measure against the coronavirus.

"Vitamin D supplementation as a public health measure to improve outcomes is not supported by this study. Most importantly, our results suggest that investment in other therapeutic or preventative avenues should be prioritized for COVID-19 randomized clinical trials," say the authors.

To assess the relationship between vitamin D levels and COVID-19 susceptibility and severity, the researchers conducted a Mendelian randomization study using genetic variants strongly associated with increased vitamin D levels. They looked at genetic variants of 14,134 individuals with COVID-19 and over 1.2 million individuals without the disease from 11 countries.
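As a rough illustration of how Mendelian randomization combines such variants (not the authors' actual analysis), the sketch below computes a simple inverse-variance-weighted estimate from invented per-variant effect sizes.

```python
# Toy inverse-variance-weighted (IVW) Mendelian randomization estimate; all numbers invented.
import numpy as np

beta_exposure = np.array([0.10, 0.08, 0.12, 0.05])   # variant effects on vitamin D levels
beta_outcome  = np.array([0.01, -0.02, 0.00, 0.01])  # variant effects on COVID-19 risk (log-odds)
se_outcome    = np.array([0.02, 0.03, 0.02, 0.04])   # standard errors of the outcome effects

wald_ratios = beta_outcome / beta_exposure            # per-variant causal estimates
weights = (beta_exposure / se_outcome) ** 2           # inverse-variance weights
ivw = np.sum(wald_ratios * weights) / np.sum(weights)

print(f"IVW estimate (change in COVID-19 log-odds per unit vitamin D): {ivw:.3f}")
```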

In the study published in PLOS Medicine, the researchers found that among people who did develop the disease, vitamin D levels were not associated with the likelihood of being hospitalized or falling severely ill.

Studying the effects of vitamin D

Early in the pandemic, many researchers were studying the effects of vitamin D, which plays a critical role in a healthy immune system. But there is still not enough evidence that taking supplements can prevent or treat COVID-19 in the general population.

"Most vitamin D studies are very difficult to interpret since they cannot adjust for the known risk factors for severe COVID-19 such as older age or having chronic diseases, which are also predictors of low vitamin D," says co-author Guillaume Butler-Laporte, a physician and a fellow under the supervision of Professor Brent Richards at McGill University.

"Therefore, the best way to answer the question of the effect of vitamin D would be through randomized trials, but these are complex and resource intensive, and take a long time during a pandemic," he says.

By using Mendelian randomization, the researchers say they were able to decrease potential bias from these known risk factors and provide a clearer picture of the relationship between vitamin D and COVID-19.

However, researchers noted that their study had some important limitations. It did not account for truly vitamin D-deficient patients; consequently, it remains possible that such patients may benefit from supplementation for COVID-19-related protection and outcomes. Additionally, the study only analyzed genetic variants from individuals of European ancestry. Future studies are needed to explore the relationship between vitamin D and COVID-19 outcomes in other populations, say the researchers.

"In the past Mendelian randomization has consistently predicted results of large, expensive, and timely vitamin D trials. Here, this method does not show clear evidence that vitamin D supplementation would have a large effect on COVID-19 outcomes," says Butler-Laporte, who is a microbiologist and an expert in infectious diseases.

Credit: 
McGill University

UMass Amherst food scientists aim to make plant-based protein tastier and healthier

image: David Julian McClements is a Distinguished Professor of Food Science at UMass Amherst.

Image: 
UMass Amherst

As meat-eating continues to increase around the world, food scientists are focusing on ways to create healthier, better-tasting and more sustainable plant-based protein products that mimic meat, fish, milk, cheese and eggs.

It's no simple task, says renowned food scientist David Julian McClements, University of Massachusetts Amherst Distinguished Professor and lead author of a paper in the new Nature journal, Science of Food, that explores the topic.

"With Beyond Meat and Impossible Foods and other products coming on the market, there's a huge interest in plant-based foods for improved sustainability, health and ethical reasons," says McClements, a leading expert in food design and nanotechnology, and author of Future Foods: How Modern Science Is Transforming the Way We Eat.

In 2019, the plant-based food market in the U.S. alone was valued at nearly $5 billion, with 40.5% of sales in the milk category and 18.9% in plant-based meat products, the paper notes. That represented a market value growth of 29% from 2017.

"A lot of academics are starting to work in this area and are not familiar with the complexity of animal products and the physicochemical principles you need in order to assemble plant-based ingredients into these products, each with their own physical, functional, nutritional and sensory attributes," McClements says.

With funding from the USDA's National Institute of Food and Agriculture and the Good Food Institute, McClements leads a multidisciplinary team at UMass Amherst that is exploring the science behind designing better plant-based protein. Co-author Lutz Grossmann, who recently joined the UMass Amherst food science team as an assistant professor, has expertise in alternative protein sources, McClements notes.

"Our research has pivoted toward this topic," McClements says. "There's a huge amount of innovation and investment in this area, and I get contacted frequently by different startup companies who are trying to make plant-based fish or eggs or cheese, but who often don't have a background in the science of foods."

While the plant-based food sector is expanding to meet consumer demand, McClements notes in the paper that "a plant-based diet is not necessarily better than an omnivore diet from a nutritional perspective."

Plant-based products need to be fortified with micronutrients that are naturally present in animal meat, milk and eggs, including vitamin D, calcium and zinc. They also have to be digestible and provide the full complement of essential amino acids.

McClements says that many of the current generation of highly processed, plant-based meat products are unhealthy because they're full of saturated fat, salt and sugar. But he adds that ultra-processed food does not have to be unhealthy.

"We're trying to make processed food healthier," McClements says. "We aim to design them to have all the vitamins and minerals you need and have health-promoting components like dietary fiber and phytochemicals so that they taste good and they're convenient and they're cheap and you can easily incorporate them into your life. That's the goal in the future, but we're not there yet for most products."

For this reason, McClements says, the UMass Amherst team of scientists is taking a holistic, multidisciplinary approach to tackle this complex problem.

Credit: 
University of Massachusetts Amherst

Beyond synthetic biology, synthetic ecology boosts health by engineering the environment

image: In a new Nature Communications study, researchers from BU's Microbiome Initiative discovered that providing microbial communities with a broader variety of food sources didn't increase the variety of microbial species within their experiments, but more food did fuel more microbial growth. The team's ultimate goal is to learn how to direct microbiome behavior through environmental molecules like food sources.

Image: 
Image courtesy of Alan Pacheco and Daniel Segrè.

There's a lot of interest right now in how different microbiomes—like the one made up of all the bacteria in our guts—could be harnessed to boost human health and cure disease. But Daniel Segrè has set his sights on a much more ambitious vision for how the microbiome could be manipulated for good: "To help sustain our planet, not just our own health."

Segrè, director of the Boston University Microbiome Initiative, says he and other scientists in his field of synthetic and systems biology are studying microbiomes—microscopic communities of bacteria, fungi, or a combination of those that exert influence over each other and the surrounding environment. They want to know how microbiomes might be directed to carry out important tasks like absorbing more atmospheric carbon, protecting coral reefs from ocean acidification, improving the fertility and yield of agricultural lands, and supporting the growth of forests and other plants despite changing environmental conditions.

"Microbes affect us as humans through their own metabolic processes, they affect our planet through what they consume and secrete, they help create the oxygen we breathe," says Segrè, a BU College of Arts & Sciences professor of biology and bioinformatics, and a College of Engineering professor of biomedical engineering. "A long time ago microbes are what made multicellular life possible."

But, unlike many other synthetic biologists who are working to enhance or genetically engineer microbes directly, Segrè is more interested in how to direct the behavior of a microbiome by tweaking the environmental conditions it lives within—an approach he says could be better described as "synthetic ecology."

"The more traditional synthetic biology approach would be to manipulate the genomes of the microbes," Segrè says. "But we're trying to manipulate microbial ecosystems using environmental molecules."

"We know that microbial interactions with the environment are important," says Alan Pacheco, who earned his PhD in bioinformatics working in Segrè's lab. Some of those interactions benefit several microbial species, some only benefit one species in a community, and some can be harmful to certain species, he says. "But there's still so much we don't know about why these interactions happen the way that they do."

In a new study recently published in Nature Communications, Segrè, Pacheco, and their collaborator Melisa Osborne, a research scientist in Segrè's lab, explored how the presence of 32 different environmental molecules or nutrients, alone or in combination with others, would influence the growth rate of microbial communities and the mix of diverse species making up a given microbiome.

"In the back of our minds we had this idea of diet, framed by studies that have looked at differences in the gut microbiome based on Western vs hunter-gatherer diets," says Pacheco, who is now a postdoctoral fellow at ETH Zürich. Hunter-gatherer diets, opportunistic and comprising a wide range of plant-based food sources, are considered much more diverse than the Western diet, which is why the hunter-gatherer diet is thought to cultivate a healthier gut.

But the experimental results surprised the team. They expected they would see growth and diversity of microbiomes increase as the "bugs" had more access to a variety of foods—a range of carbons, including sugars, amino acids, and complex polymers—but that's not what their carefully controlled experiments revealed. Instead, they observed that competition for food between different species of microbes hampered diversification within the microbial community.

"Our results demonstrate that environmental complexity alone is not sufficient for maintaining community diversity, and provide practical guidance for designing and controlling microbial ecosystems," the authors write.

So, what are the mechanisms that control a microbiome's diversity? "It's going to take some time to figure out the cause of all these interactions," Segrè says.

Although increasing the variety of food sources didn't increase the variety of microbial species within their experiments, more food did fuel more microbial growth. "We found yield depends on the total number of carbon sources, but not on the variety of those sources," Segrè says. "It's like people at a picnic—if enough people come to a picnic, no matter what the spread of different foods, eventually everything will be eaten up. In many of our experiments, the microbial communities used up every last bit of carbon source to the fullest extent."
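A toy calculation makes the same point: if every carbon source is eventually consumed in full, total growth tracks the total amount of carbon supplied rather than the number of different sources. The conversion factor and amounts below are invented for illustration.

```python
# Toy yield model: all supplied carbon is fully consumed, so yield depends only on the total.
def total_yield(carbon_sources_mg, yield_per_mg=0.4):
    """Biomass yield with an assumed conversion efficiency (yield_per_mg)."""
    return yield_per_mg * sum(carbon_sources_mg)

one_source   = [32.0]        # all carbon supplied as a single sugar
many_sources = [1.0] * 32    # the same total carbon split across 32 different compounds

print(total_yield(one_source), total_yield(many_sources))  # identical yields
```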

Pacheco adds that if somebody can consume something, somebody else can outcompete them for it. "Our experiments showed that the crucial modulator in microbial diversity is how much these different organisms compete with one another for resources," he says. "The more organisms compete, the less diverse that community is going to be."

The team plans to do more research into additional environmental factors, investigating how nutrient access and variety changes microbial communities over time, and how the medium that the microbial community lives in affects their consumption and secretion of molecules. They are also exploring how metabolic processes amongst different microbial species could interact and interplay with each other, and how the ability of some organisms to sequentially or simultaneously consume multiple resources affects the microbiome overall.

Further unlocking and eventually harnessing all these environmental "dials and knobs" could open doors to using microbiomes to influence human metabolisms and health or disease states in people and in natural ecosystems.

Credit: 
Boston University

New form of silicon could enable next-gen electronic and energy devices

image: Visualization of the structure of 4H-Si viewed perpendicular to the hexagonal axis. A transmission electron micrograph showing the stacking sequence is displayed in the background.

Image: 
Image courtesy of Thomas Shiell and Timothy Strobel

Washington, DC--A team led by Carnegie's Thomas Shiell and Timothy Strobel developed a new method for synthesizing a novel crystalline form of silicon with a hexagonal structure that could potentially be used to create next-generation electronic and energy devices with enhanced properties that exceed those of the "normal" cubic form of silicon used today.

Their work is published in Physical Review Letters.

Silicon plays an outsized role in human life. It is the second most abundant element in the Earth's crust. When mixed with other elements, it is essential for many construction and infrastructure projects. And in pure elemental form, it is crucial enough to computing that the longstanding technological hub of the U.S.--California's Silicon Valley--was nicknamed in honor of it.

Like all elements, silicon can take different crystalline forms, called allotropes, in the same way that soft graphite and super-hard diamond are both forms of carbon. The form of silicon most commonly used in electronic devices, including computers and solar panels, has the same structure as diamond. Despite its ubiquity, this form of silicon is not actually fully optimized for next-generation applications, including high-performance transistors and some photovoltaic devices.

While many different silicon allotropes with enhanced physical properties are theoretically possible, only a handful exist in practice, because few accessible synthetic pathways to them are currently known.

Strobel's lab had previously developed a revolutionary new form of silicon, called Si24, which has an open framework composed of a series of one-dimensional channels. In this new work, Shiell and Strobel led a team that used Si24 as the starting point in a multi-stage synthesis pathway that resulted in highly oriented crystals in a form called 4H-silicon, named for its four repeating layers in a hexagonal structure.

"Interest in hexagonal silicon dates back to the 1960s, because of the possibility of tunable electronic properties, which could enhance performance beyond the cubic form" Strobel explained.

Hexagonal forms of silicon have been synthesized previously, but only through the deposition of thin films or as nanocrystals that coexist with disordered material. The newly demonstrated Si24 pathway produces the first high-quality, bulk crystals that serve as the basis for future research activities.

Using the advanced computing tool called PALLAS, which was previously developed by members of the team to predict structural transition pathways--like how water becomes steam when heated or ice when frozen--the group was able to understand the transition mechanism from Si24 to 4H-Si, and the structural relationship that allows the preservation of highly oriented product crystals.

"In addition to expanding our fundamental control over the synthesis of novel structures, the discovery of bulk 4H-silicon crystals opens the door to exciting future research prospects for tuning the optical and electronic properties through strain engineering and elemental substitution," Shiell said. "We could potentially use this method to create seed crystals to grow large volumes of the 4H structure with properties that potentially exceed those of diamond silicon."

Credit: 
Carnegie Institution for Science

Underground storage of carbon captured directly from air -- green and economical

image: Schematic image of low-purity CO2 storage with the membrane-based Direct Air Capture (DAC).

Image: 
Takeshi Tsuji

Fukuoka, Japan - The global threat of ongoing climate change has one principal cause: carbon that was buried underground in the form of fossil fuels is being removed and released into the atmosphere in the form of carbon dioxide (CO2). One promising approach to addressing this problem is carbon capture and storage: using technology to take CO2 out of the atmosphere to return it underground.

In a new study published in Greenhouse Gases Science and Technology, researchers from Kyushu University and the National Institute of Advanced Industrial Science and Technology, Japan, investigated geological storage of low-purity CO2 mixed with nitrogen (N2) and oxygen (O2), produced by direct air capture (DAC) using membrane-based technology.

Many current carbon capture projects are carried out at localized sources using concentrated CO2 emissions, such as coal-fired power plants, and require intensive purification before storage owing to the presence of hazardous compounds such as nitrogen oxide and sulfur oxide. They also have high transportation costs because viable geological storage sites are typically far from the sources of CO2. In contrast, direct air capture of CO2 can be performed anywhere, including at the storage site, and does not require intensive purification because the impurities, O2 and N2, are not hazardous. Therefore, low-purity CO2 can be captured and injected directly into geological formations, at least in theory. Understanding how the resulting mixture of CO2, O2, and N2 behaves when it is injected and stored in geological formations is necessary before underground storage of low-purity CO2 from direct air capture can be widely adopted. As the study's lead author, Professor Takeshi Tsuji, explains, "It is difficult to capture high-purity CO2 using DAC. We performed molecular dynamics simulations as a preliminary evaluation of the storage efficiency of CO2-N2-O2 mixtures at three different temperature and pressure conditions, corresponding to depths of 1,000 m, 1,500 m, and 2,500 m at the Tomakomai CO2 storage site in Japan."
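For a rough sense of the conditions at those three depths, generic hydrostatic and geothermal gradients can be used; the gradients and surface temperature below are textbook values assumed for illustration, not measurements from the Tomakomai site.

```python
# Back-of-the-envelope reservoir conditions, assuming ~10 MPa/km hydrostatic pressure,
# a ~30 C/km geothermal gradient and a 15 C surface temperature (all assumed values).
def reservoir_conditions(depth_m, p_grad_mpa_per_km=10.0, t_grad_c_per_km=30.0, t_surface_c=15.0):
    pressure_mpa = p_grad_mpa_per_km * depth_m / 1000.0
    temperature_c = t_surface_c + t_grad_c_per_km * depth_m / 1000.0
    return pressure_mpa, temperature_c

for depth in (1000, 1500, 2500):
    p, t = reservoir_conditions(depth)
    print(f"{depth} m: ~{p:.0f} MPa, ~{t:.0f} C")
```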

Although further research is still needed, such as investigations of the chemical reactions of injected O2 and N2 at great depths, the results of these simulations suggest that geological storage of CO2-N2-O2 mixtures produced by direct air capture is both environmentally safe and economically viable.

According to Professor Tsuji, "Because of the ubiquity of ambient air, direct air capture has the potential to become a ubiquitous means of carbon capture and storage that can be implemented in many remote areas, such as deserts and offshore platforms. This is important both for reducing transportation costs and ensuring social acceptance."

Credit: 
Kyushu University, I2CNER

Antarctica wasn't quite as cold during the last ice age as previously thought

image: Ice core researcher Don Voigt examines an ice core at the West Antarctic Ice Sheet Divide (WAIS Divide) project.

Image: 
Photograph by Gifford Wong

CORVALLIS, Ore. - A study of two methods for reconstructing ancient temperatures has given climate researchers a better understanding of just how cold it was in Antarctica during the last ice age around 20,000 years ago.

Antarctica, the coldest place on Earth today, was even colder during the last ice age. For decades, the leading science suggested ice age temperatures in Antarctica were on average about 9 degrees Celsius cooler than at present.

An international team of scientists, led by Oregon State University's Christo Buizert, has found that while parts of Antarctica were as cold as 10 degrees below current temperatures, temperatures over central East Antarctica were only 4 to 5 degrees cooler, about half of the previous estimates.

The findings were published this week in Science.

"This is the first conclusive and consistent answer we have for all of Antarctica," said Buizert, an Oregon State University climate change specialist. "The surprising finding is that the amount of cooling is very different depending on where you are in Antarctica. This pattern of cooling is likely due to changes in the ice sheet elevation that happened between the ice age and today."

Understanding the planet's temperature during the last ice age is critical to understanding the transition from a cold to a warm climate and to modeling what might occur as the planet warms as a result of climate change today, said Ed Brook, a paleoclimatologist at OSU and one of the paper's co-authors.

"Antarctica is particularly important in the climate system," Brook said. "We use climate models to predict the future, and those climate models have to get all kinds of things correct. One way to test these models is to make sure we get the past right."

The study's co-authors are an international team of researchers from the United States, Japan, the United Kingdom, France, Switzerland, Denmark, Italy, Korea and Russia. The study was supported in part by the National Science Foundation.

"The international collaboration was critical to answering this question because it involved so many different measurements and methods from ice cores all across Antarctica," said co-author T.J. Fudge, an assistant professor in Earth and Space Sciences at the University of Washington. "Ice cores that were recently drilled with support from the National Science Foundation allowed us to gain new insights from previously drilled cores, as well."

The last ice age represents a natural experiment for understanding the planet's sensitivity to changes in greenhouse gases such as carbon dioxide, the researchers said. Core samples taken from ice that has built up over hundreds of thousands of years help tell that story.

Researchers in the past have used water isotopes contained in the layers of ice, which essentially act like a thermometer, to reconstruct temperatures from the last ice age. In Greenland, those isotope changes can be calibrated against other methods to ensure their accuracy. But for most of Antarctica, researchers have not been able to calibrate the water isotope thermometer against other methods.

"It is as if we had a thermometer, but we could not read the scale," said Buizert, an assistant professor in OSU's College of Earth, Ocean, and Atmospheric Sciences. "One of the places where we had no calibration is East Antarctica, where the oldest continuous records of ice cores have been drilled, making it a critical location for understanding climate history."

In the new study, the researchers used two methods for reconstructing ancient temperatures, using ice cores from seven locations across Antarctica - five from East Antarctica and two from West Antarctica.

The borehole thermometry method measures temperatures throughout the thickness of an ice sheet. The Antarctic ice sheet is so thick that it keeps a memory of earlier, colder ice age temperatures that can be measured and reconstructed, said Fudge.

The second method examines the properties of the snowpack as it builds up and transforms into ice over time. In East Antarctica, that snowpack can range from 50 to 120 meters thick and has compacted over thousands of years in a process that is very sensitive to temperature changes.

The researchers found that both methods produced similar temperature reconstructions, giving them confidence in the results.

They also found that the amount of ice age cooling is related to the shape of the ice sheet. During the last ice age, some parts of the Antarctic ice sheet became thinner as the amount of snowfall declined, Buizert said. That lowered the surface elevation, and cooling in those areas was 4 to 5 degrees. In places where the ice sheet was much thicker during the ice age, temperatures cooled by more than 10 degrees.

"This relationship between elevation and temperature is well-known to mountaineers and pilots - the higher you go, the colder it gets," Buizert said.

The findings are important for improving future climate modeling, but they do not change researchers' perception of how sensitive the Earth is to carbon dioxide, the primary greenhouse gas produced through human activity, he said.

"This paper is consistent with the leading theories about sensitivity," Buizert said. "We are the same amount of worried today about climate change as we were yesterday."

Credit: 
Oregon State University

Coastal flooding increases Bay Area traffic delays and accidents

image: Model estimates of non-highway car accident rates without flooding and with a 36-inch water level.

Image: 
(Image Suckale et al.)

Almost half of the world's population currently lives in cities and that number is projected to rise significantly in the near future. This rapid urbanization is contributing to increased flood risk due to the growing concentration of people and resources in cities and the clustering of cities along coastlines.

These urban shifts also result in more complex and interconnected systems on which people depend, such as transportation networks. Disruptions to urban traffic networks from flooding or other natural disasters can have serious socioeconomic consequences. In fact, what are defined as indirect impacts from these types of events, such as commute-related employee absences, travel time delays and increase in vehicular accident rates, could ultimately outweigh the more direct physical damage to roads and infrastructure caused by severe flooding.

Stanford researchers examined traffic networks in the San Francisco Bay Area (SF Bay Area) as a case study to quantify the indirect impacts of sea-level rise and intensifying coastal flood events on urban systems. Specifically, the researchers sought to identify the effects flooding would have on traffic delays and safety, particularly as road closures rerouted vehicles into adjacent streets and residential neighborhoods not designed to handle heavy vehicle flows. The research was published in the May issue of Urban Climate.

"The goal is to highlight road safety in future climate adaptation planning," said lead study author Indraneel Kasmalkar, an engineering PhD candidate affiliated with the Stanford Institute for Computational and Mathematical Engineering (ICME).

Similar to many other regions across the country, the SF Bay Area has dense urban development concentrated along its coastline and heavily congested traffic grids. Currently, even relatively minor instances of coastal flooding have the potential to inundate major traffic corridors and increase already lengthy commute times and traffic accidents.

"I think one of the key issues about traffic in the Bay is that we're already pretty close to the limit," said senior study author Jenny Suckale, an assistant professor of geophysics at Stanford University's School of Earth, Energy & Environmental Sciences (Stanford Earth). "That's also why some of the relatively minor degrees of water level we're considering here can make quite a difference."

For coastal flooding events, three types of flood impacts were identified: impassable commutes where the origin, destination or critical road connections are flooded and impede driving; travel time delays caused by commuters rerouting to avoid flooded roadways; and increases in car and pedestrian accident rates in communities that experience high inflows of traffic as commuters reroute onto local roads.
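A minimal sketch of the rerouting logic behind the travel-time-delay impact, using a toy four-node network rather than the study's Bay Area model, is shown below.

```python
# Toy rerouting example: remove a flooded road segment and compare shortest travel times.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("home", "shoreline_hwy", 10), ("shoreline_hwy", "office", 10),  # coastal route
    ("home", "local_street", 15), ("local_street", "office", 20),    # inland detour
], weight="minutes")

baseline = nx.shortest_path_length(G, "home", "office", weight="minutes")

G_flooded = G.copy()
G_flooded.remove_edge("shoreline_hwy", "office")  # coastal segment inundated

rerouted = nx.shortest_path_length(G_flooded, "home", "office", weight="minutes")
print(f"Delay from flooding: {rerouted - baseline} minutes")  # -> 15 minutes on this toy network
```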

The study highlights the challenges of preparing the traffic network in the Bay Area for climate change. Increasing coastal flooding could lead to significant travel time delays across the entire Bay Area, including communities that do not encounter any flooding themselves. However, focusing exclusively on reducing travel time delays may be problematic as some communities will be impacted by coastal flooding primarily through an increase in accident rates.

The research is a follow-up to the team's recent findings published in Science Advances that revealed commuters living outside the areas of flooding may experience some of the largest commute delays in the Bay Area due to the nature of road networks in the region.

"The two studies provide interesting contrasts on the resilience of communities to flood impacts," Kasmalkar said.

While delays increase sharply at higher water levels, region-wide accident rates increase the most at low water levels, suggesting that accidents may be a greater concern than delays at low-to-moderate water levels. Using only the metric of travel time delay for estimating traffic resilience could impart a bias toward travel efficiency rather than road safety into planning efforts.

When flooding of highways forces commuters onto local roads that pass through residential communities, accident rates spike. This may especially impact lower-income or historically disadvantaged communities, which are more likely to be adjacent to highways and may have fewer road-safety provisions.

"Some communities might care a lot more about safety than traffic delays," Suckale said. "The interconnectedness makes governance and decision making harder, and planners are not necessarily accounting for the negative consequences on neighbors."

Credit: 
Stanford University