Tech

A solution for cleaning up PFAS, one of the world's most intractable pollutants

image: An electrochemical flow cell with a stainless steel cathode and a boron-doped diamond anode is used to treat a concentrated waste stream of GenX.

Image: 
Colorado State University

A cluster of industrial chemicals known by the shorthand term "PFAS" has infiltrated the far reaches of our planet, with consequences that scientists are only beginning to understand.

PFAS - per- and polyfluoroalkyl substances - are human-made fluorine compounds that have given us nonstick coatings, polishes, waxes, cleaning products and the firefighting foams used at airports and military bases. They are in consumer goods like carpets, wall paint, popcorn bags and water-repellent shoes, and they are essential to the aerospace, automotive, telecommunications, data storage, electronics and healthcare industries.

The carbon-fluorine chemical bond, among nature's strongest, is the reason behind the wild success of these chemicals, as well as the immense environmental challenges they have caused since the 1940s. PFAS residues have been found in some of the most pristine water sources, and in the tissue of polar bears. Science and industry are called upon to clean up these persistent chemicals, a few of which, in certain quantities, have been linked to adverse health effects for humans and animals.

Among those solving this enormously difficult problem are engineers in the Walter Scott, Jr. College of Engineering at Colorado State University. CSU is one of a limited number of institutions with the expertise and sophisticated instrumentation needed to study PFAS, teasing out their presence at vanishingly small concentrations.

Now, CSU engineers led by Jens Blotevogel, research assistant professor in the Department of Civil and Environmental Engineering, have published a new set of experiments tackling a particular PFAS compound called hexafluoropropylene oxide dimer acid, better known by its trade name, GenX. The chemical, and the polymerization processes that use similar chemistries, have been in use for about a decade. They were developed as replacements for legacy PFAS chemicals known as "C8" compounds that were - and still are - particularly persistent in water and soil, and very difficult to clean up (hence their nickname, "forever chemicals").

GenX has become a household name in the Cape Fear basin area of North Carolina, where it was discovered in the local drinking water a few years ago. The responsible company, Chemours, has committed to reducing fluorinated organic chemicals in local air emissions by 99.99%, and air and water emissions from its global operations by at least 99% by 2030. For the last several years, Chemours has also funded Blotevogel's team at CSU as they test innovative methods that would help the environment as well as assist the company's legacy cleanup obligations.

Writing in Environmental Science & Technology, Blotevogel teamed up with Tiezheng Tong, assistant professor in civil and environmental engineering, to demonstrate an effective "treatment train" that combines multiple technologies to precisely isolate and destroy GenX residues in water.

One current practice for treating GenX-contaminated water is high-temperature incineration - a process that is "excessively expensive," according to the researchers, and that recovers neither water nor energy. "It works," Blotevogel said, "but it's not sustainable."

The researchers are offering a better solution. Tong, a leading expert in membrane filtration and desalination methods for environmental hazards, employed a nanofiltration membrane with appropriate pore sizes to filter out 99.5% of dissolved GenX compounds. Once that concentrated waste stream is generated, the researchers showed that electrochemical oxidation, which Blotevogel considers one of the most viable technologies for destructive PFAS cleanup, can then break down the waste into harmless products.
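
To see why concentrating the waste stream first matters, consider a minimal mass-balance sketch of the treatment train. The 99.5% rejection figure comes from the study; the feed volume, feed concentration and water recovery below are illustrative assumptions, not values reported by the researchers.

# Minimal mass-balance sketch for a nanofiltration + oxidation "treatment train".
# The 99.5% rejection figure is from the study; all other numbers are
# illustrative assumptions, not values reported by the researchers.

def nanofiltration(feed_volume_L, feed_conc_ug_L, rejection=0.995, water_recovery=0.9):
    """Split a feed stream into clean permeate and a concentrated reject stream."""
    permeate_volume = feed_volume_L * water_recovery
    concentrate_volume = feed_volume_L - permeate_volume

    total_mass = feed_volume_L * feed_conc_ug_L          # micrograms of GenX in feed
    mass_to_concentrate = total_mass * rejection          # retained by the membrane
    mass_to_permeate = total_mass - mass_to_concentrate   # passes through

    return {
        "permeate_conc_ug_L": mass_to_permeate / permeate_volume,
        "concentrate_conc_ug_L": mass_to_concentrate / concentrate_volume,
        "concentrate_volume_L": concentrate_volume,
    }

# Assumed example: 1,000 L of water at 1 ug/L GenX, 90% recovered as clean water.
print(nanofiltration(1000, 1.0))
# The electrochemical cell then only has to treat ~100 L at roughly 10x the
# original concentration, instead of oxidizing the full 1,000 L.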

Companies can also currently use several other measures to remove PFAS from water to acceptable levels: adsorption to activated carbon, ion exchange, and reverse osmosis. While all three of these technologies can be highly effective, they do not directly destroy the PFAS compounds, Blotevogel said.

The CSU researchers' alternative, electrochemical treatment, uses electrodes to chemically change the PFAS into more benign compounds. Blotevogel's lab has demonstrated several successful pilot-scale decontamination efforts and is working to further optimize its methodologies. Combined with Tong's nanofiltration system, the waste stream would be directed and concentrated, saving companies money and lowering the entire process's carbon footprint.

The researchers hope to continue working together to refine their process, for example, by testing different types of filtration membranes to determine the optimal materials and design.

Credit: 
Colorado State University

Prenatal and early life exposure to multiple air pollutants increases odds of toddler allergies

ARLINGTON HEIGHTS, IL - (DECEMBER 5, 2019) - A new article in Annals of Allergy, Asthma and Immunology, the scientific journal of the American College of Allergy, Asthma and Immunology (ACAAI), shows a significant association between multiple prenatal and early life exposures to indoor pollutants and the degree of allergic sensitivity in 2-year-olds.

"Because most children are exposed to more than one pollutant or allergen, we examined the relationship between multiple exposures and allergic sensitizations at 2 years of age," says Mallory Gallant, MSc, lead author of the study. "We examined exposure to dogs, cats, air fresheners, candles, mold, environmental tobacco smoke (ETS) and carpet, all of which have been associated with childhood allergies. Of the exposures we measured, prenatal exposure to candles, 6-month exposure to cats and 2-year exposure to ETS significantly increased the chance of a positive skin prick test (SPT) at 2 years of age."

A total of 108 mother-child pairs were followed from birth to 2 years of age. Exposures to air fresheners, candles, mold, cats, dogs, carpet and environmental tobacco smoke (ETS) were recorded at the prenatal, 6-month, 1-year and 2-year timepoints. An SPT was performed on both the mother and the 2-year-old child to measure allergic sensitivity. Allergic sensitization means that a person has had (or, allowing for false positives, may have had) an allergic-type immune response to a substance; it does not necessarily mean that the substance causes them problems.

"The increase in the average amount of time indoors means there is an increased risk of harmful health outcomes related to exposure to indoor air pollutants," says allergist Anne K. Ellis, MD, study author, and member of the ACAAI Environmental Allergy Committee. "Additionally, children breathe more frequently per minute than adults, and mostly breathe through their mouths. These differences could allow for air pollutants to penetrate more deeply into the lungs and at higher concentrations, making children more vulnerable to air pollutants."

Another goal of the study was to evaluate the effect of multiple exposures on allergic outcomes at 2 years of age. The study found that children with a positive SPT at 2 years of age had significantly more exposures prenatally, at the 1-year and 2-year time points compared to children with a negative SPT. As the number of indoor air polluting exposures increased, the percentage of children with a positive SPT increased. Dr. Ellis says, "When considered together, the findings suggest that the effect of multiple exposures may contribute more to allergy development than one single exposure."

Credit: 
American College of Allergy, Asthma, and Immunology

Brain differences detected in children with depressed parents

The largest brain imaging study of children ever conducted in the United States has revealed structural differences in the brains of those whose parents have depression.

In Brief

Depression is a common and debilitating mental health condition that typically arises during adolescence. While the causes of depression are complex, having a parent with depression is one of the biggest known risk factors. Studies have consistently shown that adolescent children of parents with depression are two to three times more likely to develop depression than those with no parental history of depression. However, the brain mechanisms that underlie this familial risk are unclear.

A new study, led by David Pagliaccio, PhD, assistant professor of clinical neurobiology in the Department of Psychiatry at Columbia University Vagelos College of Physicians and Surgeons, found structural differences in the brains of children at high risk for depression due to parental depressive history.

The study was published in the Journal of the American Academy of Child & Adolescent Psychiatry.

What the Study Found

The researchers analyzed brain images from over 7,000 children participating in the Adolescent Brain Cognitive Development (ABCD) study, led by the NIH. About one-third of the children were in the high-risk group because they had a parent with depression.

In the high-risk children, the right putamen--a brain structure linked to reward, motivation, and the experience of pleasure--was smaller than in children with no parental history of depression.

What the Study Means

Randy P. Auerbach, PhD, associate professor of medical psychology at Columbia University Vagelos College of Physicians and Surgeons and senior author of the study, notes, "These findings highlight a potential risk factor that may lead to the development of depressive disorders during a peak period of onset. However, in our prior research, smaller putamen volumes have also been linked to anhedonia--a reduced ability to experience pleasure--which is implicated in depression, substance use, psychosis, and suicidal behaviors. Thus, it may be that smaller putamen volume is a transdiagnostic risk factor that may confer vulnerability to broad-based mental disorders."

Dr. Pagliaccio adds, "Understanding differences in the brains of children with familial risk factors for depression may help to improve early identification of those at greatest risk for developing depression themselves, and lead to improved diagnosis and treatment. As children will be followed for a 10-year period during one of the greatest periods of risk, we have a unique opportunity to determine whether reduced putamen volumes are associated with depression specifically or mental disorders more generally."

Credit: 
Columbia University Irving Medical Center

St. Michael's Hospital study examines the relationship between sugars and heart health

The impact of sugars on heart health depends on the dose and type of sugar consumed, suggests a new study led by researchers at St. Michael's Hospital.

The team, led by Dr. John Sievenpiper, a staff physician in the Division of Endocrinology and Metabolism and a scientist at the Li Ka Shing Knowledge Institute, examined the relationship between fructose-containing total and added sugars and cardiovascular disease incidence and mortality.

Fructose is a naturally occurring sugar in many fruits and vegetables and makes up about half of the sugars in added sucrose and high-fructose corn syrup.

"We tend to think that sugars irrespective of the source are all bad, but this isn't always the case," said Dr. Sievenpiper. "Sugars behave differently depending on the type, dose and food source. Different sugars in varying amounts from different sources can have different effects on our health."

Dr. Sievenpiper and his team wanted to find out whether there were harmful associations of fructose-containing sugars with heart health.

To do this, the team conducted a review of previous studies investigating the association between reported intakes of fructose-containing sugars derived from all reported sources and heart disease incidence and mortality.

The team found that different types of sugars showed different associations with cardiovascular disease. Higher intake of total sugars, fructose or added sugars was associated with increased death from cardiovascular disease, whereas higher intake of sucrose was associated with decreased death from cardiovascular disease.

The sugars that were associated with harm also showed thresholds for harm, below which increased death from cardiovascular disease was not observed, ranging from 58 grams per day for fructose to 133 grams per day for total sugars.

Given the limitation that their data is largely observational in nature, Dr. Sievenpiper stressed that the certainty of their evidence is generally low and there is still a long way to go before fully understanding the relationship between sugars and heart health.

Next, the team plans to look at whether the differences seen by the type and dose of sugars can be explained by their food sources.

"We know that there are healthy and less healthy sources of sugar out there, but we want to know if these differences in sugars are driving the differences we're seeing in the association with cardiovascular disease," said Dr. Sievenpiper. "In other words, does it matter whether sugar comes from a healthier source such as fruit, yogurt, or a high-fibre, whole grain cereal versus a sugar-sweetened beverage."

Credit: 
St. Michael's Hospital

Wind and water

Howling wind drives torrential rain sideways as tall, slender palms bow and tree limbs snap. A hurricane approaches, its gale-force winds wreaking havoc as it nears the coast. Storm surges combine with the downpour, inundating the area with water.

But according to new research out of UC Santa Barbara, the rains that come once the storm has weakened may actually be more intense than when the storm is at its strongest.

"The highest intensities of rainfall occur after the hurricanes have weakened to tropical storms, not when they first make landfall as major hurricanes," said lead author Danielle Touma, a postdoctoral scholar at the university's Bren School of Environmental Science & Management. The study appears in the journal Geophysical Research Letters.

The finding has counterintuitive implications. "If we're thinking about risks, we know that major hurricanes can drive storm surges, there's strong winds and so on. But this paper is also saying hurricanes are still dangerous even after they've weakened to tropical storms," said coauthor Samantha Stevenson, an assistant professor at Bren.

Around the time Hurricane Harvey hit Houston in 2017, Touma was developing a new method for studying areas and intensities of rainfall around tropical cyclones -- which include hurricanes and tropical storms -- based on weather station data. Many previous studies have used satellite and radar data, but these records are limited to the late 20th and early 21st centuries. In contrast, records from weather stations begin in 1900. Using the measurements from the stations, Touma could calculate the extent of land that experienced rain from a given weather system, as well as how much rain fell.
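
As a purely illustrative sketch of what such a station-based calculation of rain extent and intensity can look like (Touma's actual method, search radius and data handling are more sophisticated and are not reproduced here):

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def storm_rain_extent(stations, storm_lat, storm_lon, radius_km=500.0):
    """stations: (lat, lon, daily rain in mm). Returns rough rain area and intensity."""
    nearby = [rain for lat, lon, rain in stations
              if haversine_km(lat, lon, storm_lat, storm_lon) <= radius_km]
    wet = [rain for rain in nearby if rain > 0]
    if not nearby:
        return None
    # Fraction of nearby stations reporting rain, scaled to the search circle.
    rain_area_km2 = (len(wet) / len(nearby)) * math.pi * radius_km ** 2
    mean_rain_mm = sum(wet) / len(wet) if wet else 0.0
    return {"rain_area_km2": rain_area_km2, "mean_rain_mm": mean_rain_mm}

# Hypothetical daily records: (lat, lon, rain_mm)
stations = [(29.8, -95.4, 120.0), (30.1, -94.9, 85.0), (31.5, -97.1, 0.0)]
print(storm_rain_extent(stations, 29.7, -95.3))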

These issues were in sharp focus during the 2017 hurricane season, especially as flooding in southern Texas racked up roughly $130 billion in damages, according to the National Oceanic and Atmospheric Administration.

Analyzing decades of records, Touma discovered that the accompanying rainfall tended to be more severe after a hurricane had abated to the category of a tropical storm. In that sense, the cyclones were more dangerous after they had subsided, in spite of their slower wind speeds.

"You might think hurricanes are most dangerous when they're strongest, because that's when the winds are whipping around the fastest," Stevenson said, "but this paper actually finds that the risk due to extreme rainfall is largest after the hurricane has weakened a bit."

Tropical cyclones tend to slow down once they hit land because they are cut off from their energy source, the ocean. There's no longer warm water evaporating below them to drive the convection that fuels the system. As a result, the cyclone becomes disorganized and spins at lower speeds, causing it to spread out. "Now it's kind of parked in one spot, and it can just dump a lot of rain on a particular location," Stevenson explained.

Naturally, this effect is most pronounced for cyclones that started as major hurricanes. A smaller cyclone would follow the same pattern -- hit land, slow down and spread out -- but it wouldn't contain the sheer volume of water to cause the same degree of damage.

Scientists predict that the intensity of tropical cyclone rainfall will increase as global temperatures rise. In fact, the team has already begun to see this trend in their historical data.

"Since our analysis used longer records than previous studies, we were better able to detect long-term trends," Stevenson explained. "What this paper adds to the conversation is which types of storms we have to pay more attention to in disaster preparedness plans."

Credit: 
University of California - Santa Barbara

Next generation of CAR-T cells possible

A new approach to programming cancer-fighting immune cells called CAR-T cells can prolong their activity and increase their effectiveness against human cancer cells grown in the laboratory and in mice, according to a study by researchers at the Stanford University School of Medicine.

The ability to circumvent the exhaustion that the genetically engineered cells often experience after their initial burst of activity could lead to the development of a new generation of CAR-T cells that may be effective even against solid cancers -- a goal that has until now eluded researchers.

The studies were conducted in mice harboring human leukemia and bone cancer cells. The researchers hope to begin clinical trials in people with leukemia within the next 18 months and to eventually extend the trials to include solid cancers.

"We know that T cells are powerful enough to eradicate cancer," said Crystal Mackall, MD, professor of pediatrics and of medicine at Stanford and the associate director of the Stanford Cancer Institute. "But these same T cells have evolved to have natural brakes that tamp down the potency of their response after a period of prolonged activity. We've developed a way to mitigate this exhaustion response and improve the activity of CAR-T cells against blood and solid cancers."

Mackall, who is also the director of the Stanford Center for Cancer Cell Therapy and of the Stanford research center of the Parker Institute for Cancer Immunotherapy, treats children with blood cancers at the Bass Center for Childhood Cancer and Blood Diseases at Stanford Children's Health.

Mackall is the senior author of the study, which will be published Dec. 4 in Nature. Former postdoctoral scholar Rachel Lynn, PhD, is the lead author.

Genetically modified cells of patient

"CAR-T cell" is an abbreviation for chimeric antigen receptor T cell. Genetically modified from a patient's own T cells, CAR-T cells are designed to track down and kill cancer cells by recognizing specific proteins on the cells' surface. CAR-T cell therapy made headlines around the world in 2017 when the Food and Drug Administration fast-tracked its approval for the treatment of children with relapsed or unresponsive acute lymphoblastic leukemia. Later that year, a version of CAR-T treatment was also approved for adults with some types of lymphoma.

But although blood cancers often respond impressively to CAR-T treatment, fewer than half of treated patients experience long-term control of their disease, often because the CAR-T cells become exhausted, losing their ability to proliferate robustly and to actively attack cancer cells. Overcoming this exhaustion has been a key goal of cancer researchers for several years.

Lynn and Mackall turned to a technique co-developed in the laboratory of Howard Chang, MD, PhD, the Virginia and D.K. Ludwig Professor of Cancer Genomics and professor of genetics at Stanford, to understand more about what happens when T cells become exhausted and whether it might be possible to inhibit this exhaustion. The technique, called ATAC-Seq, pinpoints areas of the genome where regulatory circuits overexpress or underexpress genes.

"When we used this technique to compare the genomes of healthy and exhausted T cells," Mackall said, "we identified some significant differences in gene expression patterns." In particular, the researchers discovered that exhausted T cells demonstrate an imbalance in the activity of a major class of genes that regulate protein levels in the cells, leading to an increase in proteins that inhibit their activity.

When the researchers modified CAR-T cells to restore the balance by overexpressing c-Jun, a gene that increases the expression of proteins associated with T cell activation, they saw that the cells remained active and proliferated in the laboratory even under conditions that would normally result in their exhaustion. Mice injected with human leukemia cells lived longer when treated with the modified CAR-T cells than with regular CAR-T cells. In addition, the c-Jun-expressing CAR-T cells were also able to reduce the tumor burden and extend the lifespan of laboratory mice with a human bone cancer called osteosarcoma.

"Those of us in the CAR-T cell field have wondered for some time if these cells could also be used to combat solid tumors," Mackall said. "Now we've developed an approach that renders the cells exhaustion resistant and improves their activity against solid tumors in mice. Although more work needs to be done to test this in humans, we're hopeful that our findings will lead to the next generation of CAR-T cells and make a significant difference for people with many types of cancers."

Credit: 
Stanford Medicine

Physical forces affect bacteria's toxin resistance, study finds

A random conversation between two Cornell researchers at a child's birthday party led to a collaboration and new understanding of how bacteria resist toxins, which may lead to new tools in the fight against harmful infections.

Physical forces have been known to affect how cells in our body grow and survive, but little has been understood about the role these forces play in prokaryotes - single-celled organisms, including bacteria.

Christopher Hernandez, associate professor in the Sibley School of Mechanical and Aerospace Engineering, had an idea for a microfluidic device that would subject individual bacteria to known amounts of force and mechanical deformation. But he knew of few ways to measure the effects - until a chance encounter with Peng Chen, the Peter J.W. Debye Professor in the College of Arts and Sciences' Department of Chemistry and Chemical Biology.

Chen had developed a way to tag and observe a specific molecule that pumps toxins from the inner membrane of certain bacteria. By putting their ideas together, the researchers have shown conclusively that mechanical stresses can interrupt the ability of bacteria to survive exposure to toxins.

Their paper, "Mechanical Stress Compromises Multicomponent Efflux Complexes in Bacteria," published Nov. 26 in the Proceedings of the National Academy of Sciences.

Gram-negative bacteria are characterized by their dual-membrane cell envelope and have the ability to assemble molecular pumps to rid themselves of toxic substances that manage to migrate into the cell, including antibiotics.

Hernandez and Chen's research showed that when E. coli bacteria were placed into a microfluidic device and forced to flow into very tight spaces, the resulting mechanical stresses alone were enough to cause these pumps to break apart and stop working.

"This is one of the first studies to look at the mechanobiology of bacteria," Hernandez said. "Our findings provide evidence that bacteria are similar to other types of cells in that they respond to mechanical forces through molecular complexes."

"Our work shows that you can disrupt the pump complex of bacteria with mechanical means," Chen said, "and this may give us a new tool to enhance treatments of bacterial diseases."

The methodology Hernandez and Chen created can be used to examine all sorts of prokaryotic cell structures, functions and behavior.

"This creative, collaborative research effort, which exploits capabilities in single molecule biology, will provide the Army with a better fundamental understanding of not only the cellular features that keep microbes alive, but how mechanical stress at the cellular level can control bacterial viability and thus provide a novel potential means of controlling bacterial infections," said Robert Kokoska, program manager for microbiology at Army Research Office, an element of U.S. Army Combat Capabilities Development Command's Army Research Laboratory that supported the research.

Credit: 
Cornell University

As China rapidly adopts clean energy, use of traditional stoves persists

image: Families have used their traditional stoves for generations. People know how best to use them so that all energy needs are met, and so they may be reluctant to stop using them.

Image: 
Ellison Carter

Old habits are hard to break. A McGill-led study of the replacement of traditional wood- and coal-burning stoves with clean energy in China suggests that, without a better understanding of the reasons behind people's reluctance to give up traditional stoves, it will be difficult for policies in China and elsewhere in the world to succeed in encouraging this shift towards clean energy. The study was published recently in Nature Sustainability.

China is ahead of most low- and middle-income countries in its energy transition: hundreds of millions of rural homes started using clean fuels such as electricity and gas in recent decades. Despite this, many Chinese homes continue using their traditional coal and wood-burning stoves - a trend that is common in many countries.

Traditional stoves linked to premature deaths and climate change

Air pollution from traditional stoves contributed to approximately 2.8 million premature deaths in 2017 and is a major contributor to regional climate change. Efforts made by governments, NGOs, and researchers to incentivize households to switch entirely to clean fuel stoves and give up their traditional stoves - even in highly controlled randomized trials - have largely failed.

"Families have used their traditional stoves for generations. People know what foods taste best with those stoves and how to best use them so that all energy needs are met," said Jill Baumgartner, an Associate Professor in McGill's Department of Epidemiology, Biostatistics and Occupational Health and the senior author on the study by an international team of researchers. "Clean fuel stoves do not always meet all of the energy uses provided by traditional stoves, and may also pose additional costs. The desire to continue using traditional stoves is also seen in the U.S. and Canada, where many homes still use wood fireplaces for space heating despite being well-equipped to only use gas and electric heaters."

Giving up is hard to do

The researchers gathered data from over 700 homes in three provinces in China (Beijing, Shanxi and Guangxi) using a photo-based questionnaire, which allowed participants to point to each type of stove that they had ever owned. They then asked questions about the frequency and timing of use, as well as what fuel types they used with the household stoves.

"We were surprised by the number of stoves current used in different homes, sometimes up to 13 different cooking devices and 7 different heating stoves," said Ellison Carter, the first author on the paper who is an Assistant Professor in the Department of Civil and Environmental Engineering at Colorado State University. "We were also surprised to find that most homes that had first started using clean fuels well over a decade ago were still using their solid fuel stoves, again highlighting the enormous challenge of achieving exclusive use of clean fuel stoves."

The researchers found that the factors associated with adoption of clean fuels were different from those associated with the suspension of traditional coal and wood burning stoves. Those who stopped using traditional stoves tended to be younger, more educated, and in poorer self-reported health. Those who adopted clean technology tended to be younger or retired, to live in smaller households, and to have higher incomes.

Looking at uptake of clean fuels at a global level

This study focused only on China, which is home to over 500 million solid fuel stove users. But an important question for the researchers is how generalizable these results from China are to other countries with different levels of economic development. They were recently awarded funding from the U.S. NIH to develop a framework on the household and community factors that can facilitate suspension of solid fuel stoves. For this work, they obtained a number of nationally representative datasets on energy use from India, Cambodia, and a number of countries in Sub-Saharan Africa to empirically assess this same question in other settings.

"We have good evidence from many countries on the policies and program that can promote the uptake of clean fuel stoves," said Baumgartner. "We now need to better understand the individual or combinations of policies and programs that can accelerate the suspension of solid fuel stoves, particularly for the poorest and most vulnerable."

Credit: 
McGill University

NASA's OSIRIS-REx mission explains Bennu's mysterious particle events

image: This view of asteroid Bennu ejecting particles from its surface on January 6 was created by combining two images taken by the NavCam 1 imager onboard NASA's OSIRIS-REx spacecraft: a short exposure image (1.4 ms), which shows the asteroid clearly, and a long exposure image (5 sec), which shows the particles clearly. Other image processing techniques were also applied, such as cropping and adjusting the brightness and contrast of each layer.

Image: 
NASA/Goddard/University of Arizona/Lockheed Martin

Shortly after NASA's OSIRIS-REx spacecraft arrived at asteroid Bennu, an unexpected discovery by the mission's science team revealed that the asteroid could be active, or consistently discharging particles into space. The ongoing examination of Bennu - and its sample that will eventually be returned to Earth - could potentially shed light on why this intriguing phenomenon is occurring.

The OSIRIS-REx team first observed a particle ejection event in images captured by the spacecraft's navigation cameras taken on Jan. 6, just a week after the spacecraft entered its first orbit around Bennu. At first glance, the particles appeared to be stars behind the asteroid, but on closer examination, the team realized that the asteroid was ejecting material from its surface. After concluding that these particles did not compromise the spacecraft's safety, the mission began dedicated observations in order to fully document the activity.

"Among Bennu's many surprises, the particle ejections sparked our curiosity, and we've spent the last several months investigating this mystery," said Dante Lauretta, OSIRIS-REx principal investigator at the University of Arizona, Tucson. "This is a great opportunity to expand our knowledge of how asteroids behave."

After studying the results of the observations, the mission team released their findings in a Science paper published Dec. 6. The team observed the three largest particle ejection events on Jan. 6 and 19, and Feb. 11, and concluded that the events originated from different locations on Bennu's surface. The first event originated in the southern hemisphere, and the second and third events occurred near the equator. All three events took place in the late afternoon on Bennu.

The team found that, after ejection from the asteroid's surface, the particles either briefly orbited Bennu and fell back to its surface or escaped from Bennu into space. The observed particles traveled up to 10 feet (3 meters) per second, and measured from smaller than an inch up to 4 inches (10 cm) in size. Approximately 200 particles were observed during the largest event, which took place on Jan. 6.
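
A back-of-envelope comparison with Bennu's escape velocity shows why both outcomes occur. The mass and mean radius used below are approximate published values for Bennu rather than figures quoted in this article:

import math

# Back-of-envelope: Bennu's escape velocity vs. observed particle speeds.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_BENNU = 7.33e10      # kg (approximate published value)
R_BENNU = 245.0        # m, mean radius (approximate published value)

v_escape = math.sqrt(2 * G * M_BENNU / R_BENNU)
print(f"Escape velocity: {v_escape:.2f} m/s")   # roughly 0.2 m/s

# Observed ejecta moved at up to ~3 m/s, well above escape velocity, so the
# fastest particles leave Bennu entirely; slower ones (below ~0.2 m/s)
# loop back and land on the surface.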

The team investigated a wide variety of possible mechanisms that may have caused the ejection events, and narrowed the list to three candidates: meteoroid impacts, thermal stress fracturing, and the release of water vapor.

Meteoroid impacts are common in the deep space neighborhood of Bennu, and it is possible that these small fragments of space rock could be hitting Bennu where OSIRIS-REx is not observing it, shaking loose particles with the momentum of their impact.

The team also determined that thermal fracturing is another reasonable explanation. Bennu's surface temperatures vary drastically over its 4.3-hour rotation period. Although it is extremely cold during the night hours, the asteroid's surface warms significantly in the mid-afternoon, which is when the three major events occurred. As a result of this temperature change, rocks may begin to crack and break down, and eventually particles could be ejected from the surface. This cycle is known as thermal stress fracturing.

Water release may also explain the asteroid's activity. When Bennu's water-locked clays are heated, the water could begin to release and create pressure. It is possible that as pressure builds in cracks and pores in boulders where absorbed water is released, the surface could become agitated, causing particles to erupt.

But nature does not always allow for simple explanations. "It could be that more than one of these possible mechanisms are at play," said Steve Chesley, an author on the paper and Senior Research Scientist at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "For example, thermal fracturing could be chopping the surface material into small pieces, making it far easier for meteoroid impacts to launch pebbles into space."

If thermal fracturing, meteoroid impacts, or both, are in fact the causes of these ejection events, then this phenomenon is likely happening on all small asteroids, as they all experience these mechanisms. However, if water release is the cause of these ejection events, then this phenomenon would be specific to asteroids that contain water-bearing minerals, like Bennu.

Bennu's activity presents larger opportunities once a sample is collected and returned to Earth for study. Many of the ejected particles are small enough to be collected by the spacecraft's sampling mechanism, meaning that the returned sample may contain some material that was ejected and fell back to Bennu's surface. Determining that a particular particle had been ejected and returned would be a scientific feat akin to finding a needle in a haystack. Even so, the material returned from Bennu will almost certainly increase our understanding of asteroids and the ways they are both similar and different, while the particle ejection phenomenon remains a mystery whose clues will also come home in the form of data and further material for study.

Sample collection is scheduled for summer 2020, and the sample will be delivered to Earth in September 2023.

Credit: 
NASA/Goddard Space Flight Center

Developing a digital twin

image: A digital twin is a digital replica of a physical entity. Digital twins enable data-driven decisions by modeling and predicting the status of that entity.

Image: 
Karen Willcox, UT Austin

In the not too distant future, we can expect to see our skies filled with unmanned aerial vehicles (UAVs) delivering packages, maybe even people, from location to location.

In such a world, there will also be a digital twin for each UAV in the fleet: a virtual model that will follow the UAV through its existence, evolving with time.

"It's essential that UAVs monitor their structural health," said Karen Willcox, director of the Oden Institute for Computational Engineering and Sciences at The University of Texas at Austin (UT Austin) and an expert in computational aerospace engineering. "And it's essential that they make good decisions that result in good behavior."

An invited speaker at the 2019 International Conference for High Performance Computing, Networking, Storage and Analysis (SC19), Willcox shared the details of a project -- supported primarily by the U.S. Air Force program in Dynamic Data-Driven Application Systems (DDDAS) -- to develop a predictive digital twin for a custom-built UAV. The project is a collaboration between UT Austin, MIT, Akselos, and Aurora Flight Sciences.

The twin represents each component of the UAV, as well as its integrated whole, using physics-based models that capture the details of its behavior from the fine-scale to the macro level. The twin also ingests on-board sensor data from the vehicle and integrates that information with the model to create real-time predictions of the health of the vehicle.

Is the UAV in danger of crashing? Should it change its planned route to minimize risks? With a predictive digital twin, these kinds of decisions can be made on the fly, to keep UAVs flying.

Bigger than Big Data

In her talk, Willcox shared the technological and algorithmic advances that allow a predictive digital twin to function effectively. She also shared her general philosophy for how "high-consequence" problems can be addressed throughout science and engineering.

"Big decisions need more than just big data," she explained. "They need big models, too."

This combination of physics-based models and big data is frequently called "scientific machine learning." And while machine learning, by itself, has been successful in addressing some problems -- like object identification, recommendation systems, and games like Go -- more robust solutions are required for problems where getting the wrong answer may be incredibly costly, or have life-or-death consequences.

"These big problems are governed by complex multiscale, multi-physics phenomena," Willcox said. "If we change the conditions a little, we can see drastically different behavior."

In Willcox's work, computational modeling is paired with machine learning to produce predictions that are reliable, and also explainable. Black box solutions are not good enough for high-consequence applications. Researchers (or doctors or engineers) need to know why a machine learning system settled on a certain result.

In the case of the digital twin UAV, Willcox's system is able to capture and communicate the evolving changes in the health of the UAV. It can also explain which sensor readings indicate declining health and drive the predictions.

Real-Time Decision-Making at the Edge

The same pressures that require the use of physics-based models -- the use of complex, high-dimensional models; the need for uncertainty quantification; the necessity of simulating all possible scenarios -- also make the problem of creating predictive digital twins a computationally challenging one.

That's where an approach called model reduction comes into play. Using a projection-based method they developed, Willcox and her collaborators can identify approximate models that are far smaller yet still encode the most important dynamics, so that they can be used for predictions.

"This method allows the possibility of creating low-cost, physics-based models that enable predictive digital twins," she said.

Willcox had to develop another solution to model the complex physical interactions that occur on the UAV. Rather than simulate the entire vehicle as a whole, she works with Akselos, using the company's approach of breaking the model (in this case, the plane) into pieces - for example, a section of a wing - and computing the geometric parameters, material properties and other important factors for each piece independently, while also accounting for the interactions that occur when the whole plane is put together.

Each component is represented by partial differential equations. At high fidelity, finite element methods and a computational mesh are used to determine the impact of flight on each segment, generating physics-based training data that feed into a machine learning classifier.

This training is computationally intensive, and in the future Willcox's team will collaborate with the Texas Advanced Computing Center (TACC) at UT Austin to use supercomputing to generate even larger training sets that consider more complex flight scenarios. Once training is done, online classification can be done very rapidly.

Using these model reduction and decomposition methods, Willcox was able to achieve a 1,000-fold speedup -- cutting simulation times from hours or minutes to seconds -- while maintaining the accuracy needed for decision-making.

"The method is highly interpretable," she said. "I can go back and see what sensor is contributing to being classified into a state." The process naturally lends itself to sensor selection and to determining where sensors need to be placed to capture details critical to the health and safety of the UAV.

In a demonstration Willcox showed at the conference, a UAV traversing an obstacle course was able to recognize its own declining health and chart a more conservative path to assure it made it back home safely. This is a test UAVs must pass if they are to be deployed broadly in the future.

"The work presented by Dr. Karen Willcox is a great example of the application of the DDDAS paradigm, for improving modeling and instrumentation methods and creating real-time decision support systems with the accuracy of full-scale models," said Frederica Darema, former Director of the Air Force Office of Scientific Research, who supported the research.

"Dr. Willcox's work showed that the application of DDDAS creates the next generation of 'digital twin' environments and capabilities. Such advances have enormous impact for increased effectiveness of critical systems and services in the defense and civilian sectors."

Digital twins aren't the exclusive domain of UAVs; they're increasingly being developed for manufacturing, oil refineries, and Formula 1 race cars. The technology was named one of Gartner's Top 10 Strategic Technology Trends for 2017 and 2018.

"Digital twins are becoming a business imperative, covering the entire lifecycle of an asset or process and forming the foundation for connected products and services," said Thomas Kaiser, SAP Senior Vice President of IoT, in a 2017 Forbes interview. "Companies that fail to respond will be left behind."

With respect to predictive data science and the development of digital twins, Willcox says: "Learning from data through the lens of models is the only way to make intractable problems practical. It brings together the methods and the approaches from the fields of data science, machine learning, and computational science and engineering, and directs them at high-consequence applications."

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

Recycling nutrient-rich industrial waste products enhances soil, reduces carbon

image: Sprayers, like this one, can distribute many materials across crops. In this research study, the team applied inactivated spent microbial mass (SMB) and measured maize yields over two growing seasons and changes in soil carbon.

Image: 
Photo by D. O'Dell, courtesy UTIA.

KNOXVILLE, Tenn. -- Recycling biotechnology byproducts can enhance soil health while reducing carbon emissions and maintaining crop yields.

A recent paper in Agrosystems, Geosciences & Environment examines the possible benefits of a new kind of crop fertilizer. Researchers from the University of Tennessee Institute of Agriculture, along with collaborators from DuPont, USDA, MetCorps, and Oklahoma State University, studied two fields of maize (Zea mays L. var. indentata): one plot treated with heat-inactivated spent microbial mass (SMB), and one plot treated with a typical farmer fertilizer practice. SMB is a biotechnology waste byproduct that can provide nutrients contained in conventional fertilizers. Over the course of one year, researchers measured the net ecosystem exchange of carbon dioxide (net CO2 emissions) between the crop surface and atmosphere of the two plots. Researchers also measured yields of maize over two growing seasons, in addition to changes in soil carbon over 1.7 years.

"Reusing industrial biotechnology by-products has become an important component of circular bio-economies," says Deb O'Dell, lead investigator. During the research, O'Dell was a graduate research assistant in the Department of Biosystems Engineering and Soil Science, working under the guidance of co-author and soil science professor Neal Eash. "When nutrient-rich wastes are returned to agricultural land, soil fertility improves and crop productivity increases," says Eash. "Also, re-using waste streams can reduce greenhouse gas emissions and improve soil fertility, which could generate greater environmental benefits as well." James Zahn of DuPont Tate & Lyle Bio Products, LLC, adds that, "Applying the rich source of nutrients in DuPont's biotechnology waste to agriculture has potential not only to replace mineral fertilizers but to enhance the soil and improve agricultural production."

According to the research findings, the addition of SMB provided similar crop yields to that of typical farmer fertilization practices; however, the SMB had to be applied at greater rates. The team also found the annual net ecosystem exchange of carbon dioxide was greater for the SMB application than for the farmer practice plot, although some excess emissions appear to be recycled back into the ecosystem. "The greater application of SMB shows the potential to enrich ecosystem productivity and environmental sustainability through the conversion of waste nutrients into greater yields, greater plant biomass and increased soil carbon," the paper's authors suggest.

Overall, the research found that utilizing carbon-rich waste nutrients increases soil organic matter, improves the physical and chemical properties of the soil, and creates a reservoir of plant nutrients, providing environmental and agricultural benefits that extend beyond the immediate application and harvest yield.

Credit: 
University of Tennessee Institute of Agriculture

Cellphone distraction linked to increase in head injuries

Head and neck injuries incurred while driving or walking with a cellphone are on the rise - and the increase correlates with the launch of the iPhone in 2007 and the release of Pokémon Go in 2016, a Rutgers study found.

The study, published in JAMA Otolaryngology-Head & Neck Surgery, reviewed 2,501 emergency department patients who sustained head and neck injuries resulting from cellphone use between 1998 and 2017. The researchers found a steady increase in injuries over that time, along with the notable spikes. Pokémon Go is an augmented reality-based game that requires players to track animated characters on their phones in real locations.

The injuries included cuts, bruises, abrasions and internal injuries, especially around the eye and nose. More than 41 percent occurred at home and were minor, requiring little or no treatment. About 50 percent resulted from distracted driving and one-third from distracted walking.

Children under 13 were significantly more likely to suffer a mechanical injury, such as a cellphone battery exploding, a parent accidentally dropping a phone on a child, or a child hitting themselves in the face with the phone.

"Injuries from cell phone use have mainly been reported from incidences during driving, but other types of injuries have gone largely underreported," said study author Boris Paskhover, a surgeon and assistant professor at Rutgers New Jersey Medical School.

"We hypothesize that distractions caused by cell phones were the biggest reason for injury and mainly affected people aged 13 to 29," Paskhover said. "The findings suggest a need for education about the risks of cell phone use and distracted behavior during other activities as well as driving and walking."

Credit: 
Rutgers University

Carbon emissions from volcanic rocks can create global warming -- study

Greenhouse gas emissions released directly from the movement of volcanic rocks are capable of creating massive global warming effects - a discovery which could transform the way scientists predict climate change, a new study reveals.

Scientists' calculations, based on how carbon-based greenhouse gas levels link to movements of magma just below Earth's surface, suggest that such geological change caused the largest temporary global warming of the past 65 million years.

Large Igneous Provinces (LIPs) are extremely large accumulations of igneous rocks which occur when magma travels through the crust towards the surface.

Geologists at the University of Birmingham have created the first mechanistic model of carbon emissions changes during the Paleocene-Eocene Thermal Maximum (PETM) - a short interval of maximum temperature lasting around 100,000 years some 55 million years ago.

They published their findings in Nature Communications, after calculating carbon-based greenhouse gas fluxes associated with the North Atlantic Igneous Province (NAIP) - one of Earth's largest LIPs that spans Britain, Ireland, Norway and Greenland.

Dr Stephen Jones, Senior Lecturer in Earth Systems at the University of Birmingham, commented: "Large Igneous Provinces are linked to spikes of change in global climate, ecosystems and the carbon cycle throughout Mesozoic time - coinciding with the Earth's most devastating mass extinctions and oceans becoming strongly depleted of oxygen.

"We calculated carbon-based greenhouse gas fluxes associated with the NAIP - linking measurements of the process that generated magma with observations of the individual geological structures that controlled gas emissions. These calculations suggest the NAIP caused the largest transient global warming of the past 65 million years.

"More geological measurements are required to reduce the uncertainty range of our solid Earth emissions model, but we believe clari?cation of this carbon cycle behaviour will impact modelling and management of future climate change."

The researchers' simulations predict peak emissions flux of 0.2-0.5 PgC yr-1 and show that the NAIP could have initiated PETM climate change. Their work is the first predictive model of carbon emissions flux from any proposed PETM carbon source directly constrained by observations of the geological structures that controlled the emissions.

Associations between LIPs and changes in global climate, ecosystems and the carbon cycle during the Mesozoic period imply that greenhouse gases released directly by LIPs can initiate global change that persists over 10,000 to 100,000 years.

The PETM is the largest natural climate change event of Cenozoic time and an important yardstick for theories explaining today's long-term increase in the average temperature of Earth's atmosphere as an effect of human industry and agriculture.

During PETM initiation, release of 0.3-1.1 PgC yr-1 of carbon as greenhouse gases to the ocean-atmosphere system drove 4-5°C of global warming over less than 20,000 years - a relatively short period of time.

Credit: 
University of Birmingham

Changing wildfires in California's Sierra Nevada may threaten northern goshawks

Amsterdam, December 5, 2019 - Wildfire is a natural process in the forests of the western US, and many species have evolved to tolerate, if not benefit from it. But wildfire is changing. Research in the journal Biological Conservation, published by Elsevier, suggests fire, as it becomes more frequent and severe, poses a substantial risk to goshawks in the Sierra Nevada region.

How Northern Goshawks respond to fire is not well understood. The single study to date examined the effects of fire on nest placement and found that the birds avoided nesting in areas burned at high severity. The effects of fire on the birds' roosting and foraging habitat, however, may be more complex, because prey populations may temporarily increase in burned areas, improving their quality as foraging habitat.

"To effectively manage and conserve wildlife, we need to understand how animals use the landscape across their life cycle," noted corresponding author Dr. Rachel Blakey at The Institute for Bird Populations and UCLA La Kretz Center for California Conservation Science.

Dr. Blakey and her colleagues at the institute wanted to better understand the habitat preferences of Northern Goshawks. In collaboration with scientists at the US Forest Service and the US Geological Survey Missouri Cooperative Fish and Wildlife Research Unit at the University of Missouri, the research team looked specifically at how goshawks use burned areas in the Plumas National Forest, California.

Twenty goshawks were fitted with solar-powered global positioning system (GPS) tracking devices that monitored the habitats the goshawks chose for foraging and night-time roosting. Goshawks preferred forest stands with larger, more mature trees and higher canopy cover - also called "late seral" forest - for both roosting and foraging.

"While there was individual and sex-based variability in selection of habitat at the finest scales, at the larger spatial scales that are arguably most important for management, goshawks consistently selected for late-seral forest," added Dr. Blakey.

Unfortunately, late-seral forest is already in short supply in the western US, and the attributes that make it attractive to Northern Goshawks also put it at high risk of large and severe wildfires. Further analysis of the study area showed that 80 percent of foraging habitat and 87 percent of roost sites were designated "High Wildfire Potential Hazard" areas by the US Forest Service.

Rodney Siegel, Executive Director of The Institute for Bird Populations and co-author of the study, said, "A lot of work by our organization and others over the past decade has shown that some wildlife species are quite resilient to forest fire and can even thrive in recently burned forests.

"But habitat selection by the Northern Goshawks we studied suggests that these birds, with their strong preference for late seral forest attributes like big trees and closed forest canopy, are jeopardized by changing fire patterns that reduce forest cover," added Dr. Siegel.

Dr. Siegel also notes that reducing wildfire risk in goshawk habitat will be a major challenge for forest managers. "The treatments to reduce risk of high-severity fire, including forest thinning and prescribed fire, may also reduce goshawk foraging and roosting habitat quality if they decrease canopy cover and fragment late-seral forest," said Dr. Siegel.

Dr. Blakey expects that the foraging and roosting habitat preferences seen in goshawks in this study are probably common to goshawks throughout the Sierra Nevada region, and perhaps western montane forests in general. Likewise, this preferred habitat is likely at risk of high severity fire across the region as well.

"Given that fire regimes are changing across the range of the Northern Goshawk, both in the US and across the species' distribution globally, the use of burned habitats by this species should also be investigated more broadly," concluded Dr. Blakey.

Credit: 
Elsevier

A robot and software make it easier to create advanced materials

image: A Rutgers-led team adapted advanced liquid-handling robotics to perform the chemistry required to synthesize polymers. This new automated approach enables the rapid exploration of new materials valuable in industry and medicine.

Image: 
Matthew Tamasi

A Rutgers-led team of engineers has developed an automated way to produce polymers, making it much easier to create advanced materials aimed at improving human health.

The innovation is a critical step in pushing the limits for researchers who want to explore large libraries of polymers, including plastics and fibers, for chemical and biological applications such as drugs and regenerative medicine through tissue engineering.

While a human researcher may be able to make a few polymers a day, the new automated system - featuring custom software and a liquid-handling robot - can create up to 384 different polymers at once, a huge increase over current methods.

Synthetic polymers are widely used in advanced materials with special properties, and their continued development is crucial to new technologies, according to a study in the journal Advanced Intelligent Systems. Such technologies include diagnostics, medical devices, electronics, sensors, robots and lighting.

"Typically, researchers synthesize polymers in highly controlled environments, limiting the development of large libraries of complex materials," said senior author Adam J. Gormley, an assistant professor in the Department of Biomedical Engineering in the School of Engineering at Rutgers University-New Brunswick. "By automating polymer synthesis and using a robotic platform, it is now possible to rapidly create a multitude of unique materials."

Robotics has automated many ways to make materials as well as discover and develop drugs. But synthesizing polymers remains challenging because most chemical reactions are extremely sensitive to oxygen and can't be done without removing it during production. The Gormley lab's open-air robotics platform carries out polymer synthesis reactions that tolerate oxygen.

The group developed custom software that allows a liquid handling robot to interpret polymer designs made on a computer and carry out every step of the chemical reaction. One benefit: the new automated system makes it easier for non-experts to create polymers.
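
As a loose illustration of what software like this might do, the following hypothetical sketch turns a polymer "design" into pipetting steps. The data structure, monomer names and robot instructions here are invented for illustration; the Gormley lab's actual software and robot interface are not described in this article.

from dataclasses import dataclass

@dataclass
class PolymerDesign:
    monomers: dict          # monomer name -> molar fraction (hypothetical)
    target_dp: int          # target degree of polymerization
    well: str               # destination well on the plate, e.g. "A1"

def design_to_steps(design: PolymerDesign, total_volume_uL: float = 200.0):
    """Translate a design into simple pipetting instructions (illustrative only)."""
    steps = []
    for monomer, fraction in design.monomers.items():
        steps.append(f"dispense {fraction * total_volume_uL:.1f} uL of "
                     f"{monomer} stock into well {design.well}")
    steps.append(f"add catalyst and initiator to well {design.well}")
    steps.append(f"mix and incubate well {design.well} (target DP {design.target_dp})")
    return steps

# Hypothetical two-monomer design; a 384-well plate would hold 384 such designs.
design = PolymerDesign({"monomer_A": 0.7, "monomer_B": 0.3}, target_dp=100, well="A1")
for step in design_to_steps(design):
    print(step)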

Credit: 
Rutgers University