
Children's fingertip injuries could signal abuse

Children with a documented history of abuse or neglect are more likely to suffer fingertip injuries, according to a Rutgers study. The researchers found that these children were 23 percent more likely to suffer a fingertip injury before age 12 than children with no such history.

The study, published in the Journal of Hand Surgery Global Online, is the first to examine the link between children's fingertip injuries and abuse or neglect.

The researchers used a New York state database of medical discharge records to identify 79,108 children, from infancy to 12 years old, who sought emergency treatment between 2004 and 2013 for fingertip injuries such as amputation, tissue damage or crushing, out of a total of 4,870,299 children in the database. They then analyzed the children's medical records for documentation of abuse.

"We found that children who had been coded at some point with physical abuse were more likely to have also been brought in for treatment of a fingertip injury," said lead author Alice Chu, an associate professor of orthopedic surgery and chief of the division of pediatric orthopedics at Rutgers New Jersey Medical School.

Fingertip injuries can occur during abuse when a child is treated roughly or when the abuser slams a door on or steps on the child's hands. "There is no one injury type that is 100 percent predictive of child abuse, but all the small risk factors can add up. Since fingertip injuries are mostly inflicted by someone else -- whether intentional or accidental -- it should be a signal to physicians to look deeper into the child's medical history for signs of neglect or physical abuse," said Chu.

Doctors may suspect abuse if parents provide a vague history with contradictory statements, if they delay seeking treatment or if the child's developmental stage is inconsistent with the type of injury, she noted.

"Currently, pediatric fingertip injuries typically are not considered an injury of abuse but one of accidental trauma or a clumsy child who gets his finger caught in a door," she said. "Doctors need to see these instances as a possible injury from abuse or neglect so they can be on higher alert during the evaluation."

Credit: 
Rutgers University

Biodiversity offsetting is contentious -- here's an alternative

image: The spotted-tailed quoll still treads an ecological tightrope.

Image: 
Gerhard Körtner

A new approach to compensate for the impact of development may be an effective alternative to biodiversity offsetting - and help nations achieve international biodiversity targets.

University of Queensland scientists say target-based ecological compensation provides greater certainty and clarity, while ensuring the management of impacts from projects like new mines, roads or housing estates directly contributes to broader conservation goals.

UQ's Dr Jeremy Simmonds said most countries in the world have, or are developing, policies on biodiversity offsetting.

"Biodiversity offsetting is a form of compensation that typically aims to achieve an outcome in which there is 'no net loss' of biodiversity as a result of a particular development," he said.

"Calculating how much and what type of offset is required to compensate for biodiversity losses caused by a project is notoriously complex and confusing.

"What's more, as currently designed and implemented, most offsets result in an overall decline in biodiversity, which is at odds with stated goals of no net loss."

Dr Simmonds said target-based ecological compensation resolves much of this uncertainty by explicitly linking compensatory requirements to biodiversity targets.

"Let's say a country has committed to doubling the area of habitat for a particular threatened species," he said.

"Under target-based ecological compensation, a project that causes a loss of 100 hectares of that species' habitat would need to restore or recreate 200 hectares of that same species' habitat.

"The project has created twice as much habitat as it destroyed, and therefore contributes to the jurisdiction's target of doubling habitat availability for that species - it's that simple.

"Most nations already have explicit targets for nature conservation, including under international agreements like the Convention on Biological Diversity.

"In fact, the new draft set of targets under the Convention on Biological Diversity would require no net loss of natural ecosystems.

"Our approach suggests a way to achieve that, while recognising development projects that damage biodiversity are sometimes necessary.

"This approach harnesses the compensation that proponents of development are increasingly compelled to provide, generally at great cost and effort, towards the achievement of broader nature conservation goals like internationally agreed targets."
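The worked example Dr Simmonds gives (100 hectares lost, 200 hectares restored under a doubling target) can be sketched as a small function. This is an illustrative sketch of the arithmetic only; the function name and validation rules are not from the study.

```python
def required_compensation(habitat_lost_ha: float, target_multiplier: float) -> float:
    """Hectares to restore or recreate so a project's compensation
    contributes to a jurisdiction-level target (e.g. 2.0 where the
    target is to double the area of a species' habitat)."""
    if habitat_lost_ha < 0 or target_multiplier <= 0:
        raise ValueError("hectares must be non-negative and multiplier positive")
    return habitat_lost_ha * target_multiplier

# The article's example: 100 ha lost under a 'double the habitat' target
print(required_compensation(100, 2.0))  # 200.0
```

The key design point of target-based compensation is that the multiplier comes from the jurisdiction's stated conservation target, not from a project-by-project "no net loss" calculation.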

UQ's Professor Martine Maron said the new proposal provided an opportunity to protect nature in the face of ongoing development.

"Offsetting is not yet delivering on its promise," she said.

"Future development must be strictly managed, so that it does not proceed at the expense of our precious biodiversity.

"Target-based ecological compensation is a tool that can reconcile nature conservation and development, resulting in better outcomes for the planet and for people."

Credit: 
University of Queensland

Polar bears in Baffin Bay skinnier, having fewer cubs due to less sea ice

image: A polar bear in Baffin Bay, West Greenland in 2012 seen from the air.

Image: 
Kristin Laidre/University of Washington

Polar bears are spending more time on land than they did in the 1990s due to reduced sea ice, new University of Washington-led research shows. Bears in Baffin Bay are getting thinner and adult females are having fewer cubs than when sea ice was more available.

The new study, recently published in Ecological Applications, includes satellite tracking and visual monitoring of polar bears in the 1990s compared with more recent years.

"Climate-induced changes in the Arctic are clearly affecting polar bears," said lead author Kristin Laidre, a UW associate professor of aquatic and fishery sciences. "They are an icon of climate change, but they're also an early indicator of climate change because they are so dependent on sea ice."

The international research team focused on a subpopulation of polar bears around Baffin Bay, the large expanse of ocean between northeastern Canada and Greenland. The team tracked adult female polar bears' movements and assessed litter sizes and the general health of this subpopulation between the 1990s and the period from 2009 to 2015.

Polar bears' movements generally follow the annual growth and retreat of sea ice. In early fall, when sea ice is at its minimum, these bears end up on Baffin Island, on the west side of the bay. They wait on land until winter when they can venture out again onto the sea ice.

When Baffin Bay is covered in ice, the bears use the solid surface as a platform for hunting seals, their preferred prey, to travel and even to create snow dens for their young.

"These bears inhabit a seasonal ice zone, meaning the sea ice clears out completely in summer and it's open water," Laidre said. "Bears in this area give us a good basis for understanding the implications of sea ice loss."

Satellite tags that tracked the bears' movements show that polar bears spent an average of 30 more days on land in recent years than in the 1990s. The average in the 1990s was 60 days, generally between late August and mid-October, compared with 90 days spent on land in the 2000s. That's because Baffin Bay sea ice retreats earlier in the summer and the ice edge is closer to shore, with more recent summers having more open water.

"When the bears are on land, they don't hunt seals and instead rely on fat stores," said Laidre. "They have the ability to fast for extended periods, but over time they get thinner."

To assess the females' health, the researchers quantified the condition of bears by assessing their level of fatness after sedating them, or inspecting them visually from the air. Researchers classified fatness on a scale of 1 to 5. The results showed the bears' body condition was linked with sea ice availability in the current and previous year -- following years with more open water, the polar bears were thinner.

The body condition of the mothers and sea ice availability also affected how many cubs were born in a litter. The researchers found larger litter sizes when the mothers were in a good body condition and when spring breakup occurred later in the year -- meaning bears had more time on the sea ice in spring to find food.

The authors also used mathematical models to forecast the future of the Baffin Bay polar bears. The models took into account the relationship between sea ice availability and the bears' body fat and variable litter sizes. The normal litter size may decrease within the next three polar bear generations, they found, mainly due to a projected continuing sea ice decline during that 37-year period.

"We show that two-cub litters -- usually the norm for a healthy adult female -- are likely to disappear in Baffin Bay in the next few decades if sea ice loss continues," Laidre said. "This has not been documented before."

Laidre studies how climate change is affecting polar bears and other marine mammals in the Arctic. She led a 2016 study showing that polar bears across the Arctic have less access to sea ice than they did 40 years ago, meaning less access to their main food source and their preferred den sites. The new study uses direct observations to link the loss of sea ice to the bears' health and reproductive success.

"This work just adds to the growing body of evidence that loss of sea ice has serious, long-term conservation concerns for this species," Laidre said. "Only human action on climate change can do anything to turn this around."

Credit: 
University of Washington

Invasive species that threaten biodiversity on the Antarctic Peninsula are identified

image: Research team

Image: 
Universidad de Córdoba

Invasive species are non-native species that are introduced into a new habitat and adapt to it, displacing indigenous species or driving them extinct. The threat is heightened by the constant global movement of people and goods, which is one of the main causes of biodiversity loss worldwide. Though it is uninhabited, Antarctica is not free from this problem. Owing to scientific activity and growing tourism in Antarctica, especially on the Antarctic Peninsula, there is a high risk of invasive species entering this habitat and killing off indigenous species in the area.

An international research team, including University of Cordoba researcher Pablo González Moreno, identified 13 invasive species that are the likeliest to threaten biodiversity in the Antarctic. "The species were assessed using three main criteria: their risk of coming to the Antarctic Peninsula, the risk of surviving and reproducing and the risk of causing a negative impact on the biodiversity and ecosystems of the region", explains González Moreno.

Among them, the most troubling are the Chilean mussel, the Mediterranean mussel, an edible seaweed also known as wakame, some kinds of crabs, mites and insects, as well as land plants such as Leptinella scariosa and Leptinella plumosa.

These non-native species can be transported in different ways. Visitors can carry seeds on their clothes or on the soles of their shoes, and these can end up taking root in the new soil. Boats may carry species such as mussels stuck to their hulls, and others inside, especially in fresh food supplies, where various plants and insects can hide. Rats and mice are also a threat: some Antarctic islands, such as Marion Island and South Georgia Island, have already been invaded by these rodents, though they are not yet expected to establish on the Antarctic Peninsula.

These species and many others require mitigation measures to reduce their impact on fragile biological communities in the Antarctic, in land habitats as well as marine ones. Some non-native species are already established near research stations and tourist areas. Eradicating invasive species is possible but difficult and costly.

"The only way to prevent this threat is by implementing a solid biosecurity system that can minimize the risk of entry of invasive species, as well as an early warning system that would monitor the region and identify new invasive species getting established", explains González Moreno. Only then will it be possible to reduce the risks and protect vulnerable Antarctic ecosystems against the threat of non-native species.

Credit: 
University of Córdoba

Foot-and-mouth-disease virus could help target the deadliest cancer

image: Professor John Marshall, Queen Mary University of London

Image: 
Cancer Research UK

The foot-and-mouth-disease virus is helping scientists to tackle a common cancer with the worst survival rate - pancreatic cancer.

Researchers at Queen Mary University of London have identified a peptide, or protein fragment, taken from the foot-and-mouth-disease virus that targets another protein, called αvβ6 (alpha-v-beta-6). This protein is found at high levels on the surface of the majority of pancreatic cancer cells.

Working jointly with Spirogen (now part of AstraZeneca) and ADC Therapeutics, the team used the peptide to carry a highly potent drug, called tesirine, to the pancreatic cancer cells. When mice with pancreatic cancer tumours were treated with the drug and peptide combination, the tumours were completely eradicated.

The study, published in Theranostics, was funded by the UK medical research charity Pancreatic Cancer Research Fund.

Lead researcher Professor John Marshall, from the Cancer Research UK Barts Centre, explains: "Foot-and-mouth-disease virus uses αvβ6 as a route to infect cattle, as the virus binds to this protein on a cow's tongue. By testing pieces of the viral protein that attaches to αvβ6, we've developed a route to deliver a drug specifically to pancreatic cancers. Our previous research had shown that 84 per cent of pancreatic cancer patients have high levels of αvβ6 on their cancers."

The team tested the peptide/tesirine combination both in cells in the laboratory and in mice. They used genetically identical human cancer cells, some with αvβ6 on their surface and some without. Both types of cells were exposed to the peptide and drug combination. The cells with αvβ6 were most affected, while the αvβ6-negative cells needed much higher doses of the drug to be killed.

The tests in mice gave the most impressive results. Mice with αvβ6-positive tumours were given a tiny dose of the peptide-drug combination three times a week, and this stopped tumour growth completely. But when the dose was increased and given just twice a week, all tumours in the αvβ6-positive mice were completely eradicated.

"These very exciting results, which are the result of many years of laboratory testing, offer a completely new way of treating pancreatic cancer," says Professor Marshall. "One advantage of targeting αvβ6 is that it is very specific to the cancer, because most normal human tissues have little or none of this protein. So we're hopeful that, if we can develop this into an effective treatment for pancreatic cancer, it would have limited side effects."

The team now plans to test the peptide and drug combination further in more complex mouse models, to determine whether it can also act on pancreatic cancer metastases, before moving to clinical trials.

Dr Emily Farthing, senior research information manager at Cancer Research UK, said: "Although we have made great progress in treating many types of cancer, survival remains stubbornly low for people with pancreatic cancer and there is an urgent need for more effective treatments. This early-stage research has developed a promising new drug that reduces the growth of pancreatic tumours in the lab. And with further research to see if it's safe and effective for patients, we hope that this could one day offer new hope for people with this disease."

Credit: 
Pancreatic Cancer Research Fund

Sitting still linked to increased risk of depression in adolescents

Too much time sitting still - sedentary behaviour - is linked to an increased risk of depressive symptoms in adolescents, finds a new UCL-led study.

The Lancet Psychiatry study found that an additional 60 minutes of light activity (such as walking or doing chores) daily at age 12 was associated with a 10% reduction in depressive symptoms at age 18.

"Our findings show that young people who are inactive for large proportions of the day throughout adolescence face a greater risk of depression by age 18. We found that it's not just more intense forms of activity that are good for our mental health, but any degree of physical activity that can reduce the time we spend sitting down is likely to be beneficial," said the study's lead author, PhD student Aaron Kandola (UCL Psychiatry).

"We should be encouraging people of all ages to move more, and to sit less, as it's good for both our physical and mental health."

The research team used data from 4,257 adolescents, who have been participating in longitudinal research from birth as part of the University of Bristol's Children of the 90s cohort study. The children wore accelerometers to track their movement for at least 10 hours over at least three days, at ages 12, 14 and 16.

The accelerometers recorded whether the child was engaging in light activity (which could include walking or hobbies such as playing an instrument or painting), engaging in moderate-to-vigorous physical activity (such as running or cycling), or being sedentary. Accelerometers provide more reliable data than the self-reported activity used in previous studies, which has yielded inconsistent results.

Depressive symptoms, such as low mood, loss of pleasure and poor concentration, were measured with a clinical questionnaire. The questionnaire measures depressive symptoms and their severity on a spectrum, rather than providing a clinical diagnosis.

Between the ages of 12 and 16, total physical activity declined across the cohort, which was mainly due to a decrease in light activity (from an average of five hours, 26 minutes to four hours, five minutes) and an increase in sedentary behaviour (from an average of seven hours and 10 minutes to eight hours and 43 minutes).

The researchers found that every additional 60 minutes of sedentary behaviour per day at age 12, 14 and 16 was associated with an increase in depression score of 11.1%, 8% and 10.5%, respectively, by age 18. Those with consistently high amounts of sedentary time at all three ages had depression scores that were 28.2% higher by age 18.

Every additional hour of light physical activity per day at age 12, 14 and 16 was associated with depression scores at age 18 that were 9.6%, 7.8% and 11.1% lower, respectively.
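The associations reported above can be laid out in a short sketch. The percentage figures are the article's; how they are applied here (scaling a baseline score, compounding per extra hour) is purely an illustrative assumption, not the study's statistical model.

```python
# Reported associations: % change in age-18 depression score per additional
# daily hour at each age (positive = sedentary time, negative = light activity)
SEDENTARY = {12: 11.1, 14: 8.0, 16: 10.5}
LIGHT = {12: -9.6, 14: -7.8, 16: -11.1}

def adjusted_score(baseline: float, age: int, extra_hours: float, table: dict) -> float:
    """Apply one age's reported percentage association to a baseline score,
    assuming (for illustration only) it compounds per additional hour."""
    return baseline * (1 + table[age] / 100.0) ** extra_hours

# One extra sedentary hour per day at age 12, on an arbitrary baseline of 10:
print(round(adjusted_score(10, 12, 1, SEDENTARY), 2))  # 11.11
```

Note the asymmetry in the figures themselves: at age 16, an extra sedentary hour is associated with a 10.5% higher score, while an extra hour of light activity is associated with an 11.1% lower one.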

The researchers found some associations between moderate-to-vigorous activity at earlier ages and reduced depressive symptoms, although they caution that their data was weaker due to low levels of activity of such intensity in the cohort (averaging around 20 minutes per day), so the findings do not clarify whether moderate-to-vigorous activity is any less beneficial than light activity.

While the researchers cannot confirm that activity levels caused the changes in depressive symptoms, they accounted for potentially confounding factors such as socioeconomic status, parental history of mental health problems and time spent wearing the accelerometer, and they reduced the possibility of reverse causation by adjusting their analysis for depressive symptoms at the study's outset.

"Worryingly, the amount of time that young people spend inactive has been steadily rising for years, but there has been a surprising lack of high quality research into how this could affect mental health. The number of young people with depression also appears to be growing and our study suggests that these two trends may be linked," Kandola added.

The study's senior author, Dr Joseph Hayes (UCL Psychiatry and Camden and Islington NHS Foundation Trust), said: "A lot of initiatives promote exercise in young people, but our findings suggest that light activity should be given more attention as well."

"Light activity could be particularly useful because it doesn't require much effort and it's easy to fit into the daily routines of most young people. Schools could integrate light activity into their pupils' days, such as with standing or active lessons. Small changes to our environments could make it easier for all of us to be a little bit less sedentary," he added.

Credit: 
University College London

Ancient Antarctic ice melt increased sea levels by 3+ meters -- and it could happen again

image: Fine layers of ancient volcanic ash in the ice helped the team pinpoint when the mass melting took place.

Image: 
AntarcticScience.com

Mass melting of the West Antarctic Ice Sheet was a major cause of high sea levels during a period known as the Last Interglacial (129,000-116,000 years ago), an international team of scientists led by UNSW's Chris Turney has found. The research was published today in Proceedings of the National Academy of Sciences (PNAS).
 
The extreme ice loss caused a multi-metre rise in global mean sea levels - and it took less than 2°C of ocean warming for it to occur.  

"Not only did we lose a lot of the West Antarctic Ice Sheet, but this happened very early during the Last Interglacial," says Chris Turney, Professor in Earth and Climate Science at UNSW Sydney and lead author of the study.  

Fine layers of ancient volcanic ash in the ice helped the team pinpoint when the mass melting took place. Alarmingly, the results indicated that most ice loss occurred within the first millennium, showing how sensitive the Antarctic is to higher temperatures.

"The melting was likely caused by less than 2°C ocean warming - and that's something that has major implications for the future, given the ocean temperature increase and West Antarctic melting that's happening today," Professor Turney says.  

During the Last Interglacial, polar ocean temperatures were likely less than 2°C warmer than today, making it a useful period to study how future global warming might affect ice dynamics and sea levels.  

"This study shows that we would lose most of the West Antarctic Ice Sheet in a warmer world," says Professor Turney.  

In contrast to the East Antarctic Ice Sheet - which mostly sits on high ground - the West Antarctic sheet rests on the seabed. It's fringed by large areas of floating ice, called ice shelves, that protect the central part of the sheet.  

As warmer ocean water travels into cavities beneath the ice shelves, ice melts from below, thinning the shelves and making the central ice sheet highly vulnerable to warming ocean temperatures.  

Going back in time 

To undertake their research, Professor Turney and his team travelled to the Patriot Hills Blue Ice Area, a site located at the periphery of the West Antarctic Ice Sheet, with support from Antarctic Logistics and Expeditions (or ALE).  

Blue ice areas are the perfect laboratory for scientists due to their unique topography - they are created by fierce, high-density katabatic winds. When these winds blow over mountains, they remove the top layer of snow and erode the exposed ice. As the ice is removed, ancient ice flows up to the surface, offering an insight into the ice sheet's history. 

While most Antarctic researchers drill down into the ice core to extract their samples, this team used a different method - horizontal ice core analysis.   

"Instead of drilling kilometres into the ice, we can simply walk across a blue ice area and travel back through millennia. By taking samples of ice from the surface we are able to reconstruct what happened to this precious environment in the past," Professor Turney says. 

Through isotope measurements, the team discovered a gap in the ice sheet record immediately prior to the Last Interglacial. This period of missing ice coincides with the extreme sea level increase, suggesting rapid ice loss from the West Antarctic Ice Sheet. The volcanic ash, trace gas samples and ancient DNA from bacteria trapped in the ice all support this finding. 

Learning from the Last Interglacial  

Ice age cycles occur approximately every 100,000 years due to subtle changes in Earth's orbit around the Sun. These ice ages are separated by warm interglacial periods. The Last Interglacial is the warm period immediately preceding our current interglacial, the Holocene.

While human contribution to global warming makes the Holocene unique, the Last Interglacial remains a useful research point to understand how the planet responds to extreme change.  

"The future is heading far beyond the range of anything we've observed in the scientific instrumental record of the last 150 years," says Professor Turney. "We have to look further into the past if we're going to manage future changes."

During the Last Interglacial, global mean sea levels were between 6m and 9m higher than present day, although some scientists suspect this could have reached 11m.  

The sea level rise in the Last Interglacial can't be fully explained by the Greenland Ice Sheet melt, which accounted for a 2m increase, or ocean expansion from warmer temperatures and melting mountain glaciers, which are thought to have caused less than a 1m increase. 
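The budget described in the two paragraphs above amounts to simple subtraction. The figures are the article's; the attribution of the full residual to West Antarctic melt is the study's hypothesis, laid out here only as arithmetic.

```python
# Last Interglacial global mean sea-level budget (metres above present)
total_rise_low, total_rise_high = 6.0, 9.0  # observed range in the article
greenland = 2.0                             # Greenland Ice Sheet melt
expansion_and_glaciers = 1.0                # upper bound: warming + mountain glaciers

# The unexplained residual that West Antarctic melt would need to cover
residual_low = total_rise_low - greenland - expansion_and_glaciers
residual_high = total_rise_high - greenland - expansion_and_glaciers
print(residual_low, residual_high)  # 3.0 6.0
```

A residual of at least 3 metres is consistent with the 3.8 m modelled contribution reported later in the article.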

"We now have some of the first major evidence that West Antarctica melted and drove a large part of this sea level rise," says Professor Turney. 

An urgent need to minimise future warming 

The severity of the ice loss suggests that the West Antarctic Ice Sheet is highly sensitive to future ocean warming. 

"The West Antarctic Ice Sheet is sitting in water, and today this water is getting warmer and warmer," says Professor Turney, who is also a Chief Investigator of the ARC Centre of Excellence for Australian Biodiversity and Heritage (CABAH).  

Using data gained from their fieldwork, the team ran model simulations to investigate how warming might affect the floating ice shelves. These shelves currently buttress the ice sheets and help slow the flow of ice off the continent.  

The results suggest a 3.8m sea level rise during the first thousand years of a 2°C warmer ocean. Most of the modelled sea level rise occurred after the loss of the ice shelves, which collapsed within the first two hundred years of higher temperatures.  

The researchers are concerned that persistent high sea surface temperatures would prompt the East Antarctic Ice Sheet to melt, driving global sea levels even higher. 

"The positive feedbacks between a warming ocean, ice shelf collapse, and ice sheet melt suggests that the West Antarctic may be vulnerable to passing a tipping point," stressed Dr Zoë Thomas, co-author and ARC Discovery Early Career Research Award (DECRA) Fellow at UNSW. 

"As it reaches the tipping point, only a small increase in temperature could trigger abrupt ice sheet melt and a multi-metre rise in global sea level." 

At present, the consensus of the Intergovernmental Panel on Climate Change (IPCC) 2013 report suggests that global sea level will rise between 40cm and 80cm over the next century, with Antarctica only contributing around 5cm of this.  

The researchers are concerned that Antarctica's contribution could be much greater than this. 

"Recent projections suggest that the Antarctic contribution may be up to ten times higher than the IPCC forecast, which is deeply worrying," says Professor Christopher Fogwill, co-author and Director of The Institute for Sustainable Futures at the UK University of Keele. 

"Our study highlights that the Antarctic Ice Sheet may lie close to a tipping point, which once passed may commit us to rapid sea level rise for millennia to come. This underlines the urgent need to reduce and control greenhouse gas emissions that are driving warming today."  

Notably, the researchers warn that this tipping point may be closer than we think. 

"The Paris Climate Agreement commits to restricting global warming to 2°C, ideally 1.5°C, this century," says Professor Turney.    

"Our findings show that we don't want to get close to 2°C warming."  

Professor Turney and his team hope to expand the research to confirm just how quickly the West Antarctic Ice Sheet responded to warming and which areas were first affected. 

"We only tested one location, so we don't know whether it was the first sector of Antarctica that melted, or whether it melted relatively late. How these changes in Antarctica impacted the rest of the world remains a huge unknown as the planet warms into the future," he says.

"Testing other locations will give us a better idea for the areas we really need to monitor as the planet continues to warm."  

Credit: 
University of New South Wales

Inquiry-based labs give physics students experimental edge

video: Natasha Holmes, the Ann S. Bowers Assistant Professor in the College of Arts and Sciences, speaks about how her research shows that traditional physics labs, which strive to reinforce the concepts students learn in lecture courses, can actually have a negative impact on student learning, while nontraditional, inquiry-based labs that encourage experimentation can improve student performance and engagement without lowering exam scores.

Image: 
Cornell University

ITHACA, N.Y. - New Cornell University research shows that traditional physics labs, which strive to reinforce the concepts students learn in lecture courses, can actually have a negative impact on students. At the same time, nontraditional, inquiry-based labs that encourage experimentation can improve student performance and engagement without lowering exam scores.

"Typical physics lab courses are designed to help students see or observe the physics phenomena that we typically teach in a lecture course," said senior author Natasha Holmes, the Ann S. Bowers Assistant Professor in the College of Arts and Sciences at Cornell University. "In our previous work, we had this idea that these labs weren't effective. But we were pretty sure that we could restructure the labs to get students engaging and really learning what it means to do experimental physics."

The researchers created a controlled study in which students were divided into five lab sections for the same introductory, calculus-based physics course, focusing on mechanics and special relativity. Students in all five lab sections went to the same lectures and had identical problem sets, homework and exams. However, three lab sections followed the traditional model; the remaining two sections were inquiry-based labs, with students making their own decisions about gathering and analyzing data.

"The students in the new labs are much more active," Holmes said. "They are talking to each other, making decisions, negotiating. Compared to the traditional lab, where everyone's really doing the same thing and just following instructions, we now have all of the students doing something completely different. They're starting to be creative."

The exam scores were the same for students in the traditional and inquiry-based labs. However, the traditional lab model negatively impacted student attitudes toward experimentation and failed to engage students with high-level scientific thinking, the researchers found.

Another telling distinction: Students in traditional labs completed their tasks as quickly as possible, often breezing through the instructions and finishing the two-hour session in 30 minutes, then leaving. Students in the inquiry-based labs tended to work for the full two hours.

"We think it's teaching them to have ownership over their experiments, and they're continuing to investigate," Holmes said. "We actually had trouble kicking them out of class - which I think is a pretty good problem to have."

Holmes believes the inquiry-based lab model is applicable to other disciplines, although physics has distinct advantages over chemistry or biology, where trial-and-error experimentation could result in wasted chemicals, materials and time.

Credit: 
Cornell University

Local genetic adaption helps sorghum crop hide from witchweed

image: The parasitic plant witchweed, which has bright flowers, has a variety of hosts, including the important cereal crop sorghum. A new study reveals that sorghum plants where witchweed is most prevalent are locally adapted to deal with the parasite by having a mutation in the LGS1 gene.

Image: 
S.M. Runo, Kenyatta University

Sorghum crops in areas where the agricultural parasite striga, also known as witchweed, is common are more likely to have genetic adaptations to help them resist the parasite, according to new research led by Penn State scientists. Changes to the LGS1 gene affect some of the crop's hormones, making the crop harder for parasites to find in the soil, at least in some regions. The changes, however, may come at a cost, affecting photosynthesis-related systems and perhaps growth. The new study by an international team of researchers appears online February 11, 2020, in the journal Proceedings of the National Academy of Sciences and may eventually inform strategies for managing the parasite.

Witchweed is one of the greatest threats to food security in Africa, causing billions of dollars in crop losses annually. It has a variety of hosts, including sorghum, the world's fifth most important cereal crop.

"We wanted to know if sorghum plants in areas with high parasite prevalence were locally adapted by having LGS1 mutations," said Jesse Lasky, assistant professor of biology at Penn State and senior author of the paper. "We often think about local adaptation of agricultural crops with regard to factors like temperature, drought, or salinity. For example, if plants in a particularly dry region were locally adapted to have genes associated with drought-tolerance, we could potentially breed plants with those genes to resist drought. We wanted to know if you could see this same kind of local adaptation to something biotic, like a parasite."

The researchers modeled the prevalence of witchweed across Africa and compared the presence of LGS1 mutations thought to confer some resistance in sorghum. They found that these mutations were more common in areas with high parasite prevalence, suggesting that sorghum plants in those areas may be locally adapted to deal with the parasite.
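The core comparison described above can be sketched as a simple frequency contrast: group sorghum accessions by local parasite prevalence and compare how often the LGS1 mutation appears in each group. All data below are invented for illustration (the study used real georeferenced genotypes and prevalence models), and the 0.5 threshold is an arbitrary assumption.

```python
# Hypothetical (prevalence, has_LGS1_mutation) pairs, one per accession.
# These values are invented purely to illustrate the comparison.
accessions = [
    (0.9, True), (0.8, True), (0.85, False), (0.7, True),
    (0.1, False), (0.2, False), (0.15, False), (0.3, True),
]

THRESHOLD = 0.5  # assumed cutoff splitting high- vs low-prevalence areas

def mutation_frequency(samples):
    """Fraction of accessions carrying the LGS1 mutation."""
    return sum(mut for _, mut in samples) / len(samples)

high = [a for a in accessions if a[0] >= THRESHOLD]
low = [a for a in accessions if a[0] < THRESHOLD]

freq_high = mutation_frequency(high)  # 0.75 in this toy data
freq_low = mutation_frequency(low)    # 0.25 in this toy data
print(f"LGS1 frequency: high-striga {freq_high:.2f}, low-striga {freq_low:.2f}")
```

A higher mutation frequency in high-prevalence areas is the pattern the authors interpret as a signal of local adaptation.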

"The LGS1 mutations were widespread across Africa where parasites were most common, which suggests they are beneficial," said Emily Bellis, postdoctoral researcher at Penn State at the time of the study and first author of the paper. Bellis is currently an assistant professor of bioinformatics at Arkansas State University. "But even in those areas the mutations occurred at low frequency, and they were nearly absent outside of parasite-prone regions. This indicates that there may also be a cost, or tradeoff, to having these mutations."

To better understand the effects of the LGS1 mutations, members of the research team at Corteva Agriscience used CRISPR-Cas9 gene-editing technology to replicate the mutations in the lab. The loss of LGS1 function did appear to confer resistance to witchweed in their experiments, as parasites had germination rates that were low or even zero, suggesting the parasites were not as successful at finding the crop to reproduce. But parasites collected from different geographic locations in Africa were affected in different ways.

"Germination of parasites from a population in West Africa was effectively shut down in both nutrient-rich and nutrient-poor conditions, but we still saw germination up to about 10% for a population in East Africa when nutrients were limited," said Bellis. "That is definitely an improvement, but there can be thousands of parasites in the soil, so even 10% germination can be problematic, especially in the smallholder farms where these crops are predominantly grown."

LGS1 mutations are known to affect strigolactone hormones that sorghum releases from its roots. Because the parasite uses these hormones to find sorghum, altering the hormones makes the plant mostly invisible to the parasite. But strigolactones are also important for communication with mycorrhizal fungi, which play an important role in the plant's acquisition of nutrients. The new study found that loss of LGS1 function in the modified plants also affected systems related to photosynthesis and subtly affected growth.

"It may be that plants with LGS1 mutations are better at hiding from the parasites, but are less productive," said Lasky. "This potential tradeoff might explain the relatively low prevalence of these mutations in sorghum across Africa."

The researchers also identified several mutations in other genes that are related to parasite prevalence, which might reflect local adaptation. They plan to investigate these genes--some of which are involved in cell-wall strengthening--to see if they may also confer resistance to the parasite.

"We eventually would like to look at other agriculturally important host plants of striga in Africa to ask similar questions," said Lasky. "If we do indeed see local adaptation to the parasite and find genes that confer resistance with few tradeoffs, we may be able to capitalize on that from a management perspective."

Credit: 
Penn State

Replacing animal testing with synthetic cell scaffolds

video: Biomedical engineers at Michigan Tech use electrospun synthetic polymers to build scaffolds for cancer tumor research, removing the need for animal testing.

Image: 
Ben Jaszczak/Michigan Tech

In the field of cancer research, the idea that scientists can disrupt cancer growth by changing the environment in which cancerous cells divide is growing in popularity. The primary way researchers have tested this theory is to conduct experiments using animals.

Smitha Rao's cell scaffolding research aims to replace animal testing in cancer research with electrospun synthetics.

Rao, assistant professor of biomedical engineering at Michigan Technological University, recently published "Engineered three-dimensional scaffolds modulating fate of breast cancer cells using stiffness and morphology related cell adhesion" in the journal IEEE Open Journal of Engineering in Medicine and Biology.

Rao's coauthors are doctoral student Samerender Hanumantharao, master's student Carolynn Que and undergraduate student Brennan Vogl, all Michigan Tech biomedical engineering students.

Standardizing with Synthetics

When cells grow inside the body, they require something known as an extracellular matrix (ECM) on which to grow, just like a well-built house requires a strong foundation. To study how cells grow on ECMs, researchers need to source the matrices from somewhere.

"Synthetic ECMs are created by electrospinning matrices from polymers such as polycaprolactone and are more consistent for research than using cells from different kinds of animals," Hanumantharao said.

"In my lab the focus has been on standardizing the process and using synthetic materials to keep the same chemical formulation of a scaffold, but change the physical structure of the fibers that are produced," Rao said, noting that changing the type of polymer or adding solvents to polymers introduces too many variables, which could affect the way cells grow on the scaffolds. Rao and her fellow researchers, therefore, can compare separate cell lines with different scaffold alignments by changing just one aspect of the experiment: voltage.

By changing the voltage at which the polymer is spun, the researchers can alter the shape of the scaffolds, whether honeycombed, mesh or aligned. Rao's team recently published in the Royal Society of Chemistry journal RSC Advances about manipulating electric fields to achieve different scaffold patterns. Rao's team is working with Dipole Materials to explore scaling up the process.

Rao and her fellow researchers used four different cell lines to test the efficacy of the electrospun scaffolds: 184B5, which is normal breast tissue, as a control; MCF-7, a breast adenocarcinoma; MCF10AneoT, a premalignant cell line; and MDA-MB-231, a metastatic triple-negative adenocarcinoma, a very difficult-to-detect cancer.

"We can study why and how cancer cells metastasize," Rao said. "We can understand in a true 3D system why pre-metastatic cells become metastatic, and provide tools to other researchers to study signaling pathways that change between pre-malignant and malignant cells."

Avenues for Future Research

In addition, the research has uncovered information for another area of study: In what type of cellular environment do malignant cancer cells grow best? Rao's group discovered that the triple-negative breast cancer cells preferred honeycomb scaffolds while adenocarcinoma cells favored mesh scaffolds and premalignant cells preferred the aligned scaffolds. In the future, scientists may be able to engineer cell scaffolding--stiffness, structure and shape--to make the area around a tumor in a person's body a far less hospitable place for cancer cells to grow.

Credit: 
Michigan Technological University

Hot climates to see more variability in tree leafing as temperatures rise

image: Data scientists at Oak Ridge National Laboratory have completed a study of long-term trends in the relationship between the timing of tree leafing and rising temperatures in the United States. The information is being incorporated into DOE's Energy Exascale Earth System Model.

Image: 
Oak Ridge National Laboratory, U.S. Dept. of Energy

A team of scientists led by Oak Ridge National Laboratory found that while all regions of the country can expect an earlier start to the growing season as temperatures rise, the trend is likely to become more variable year-over-year in hotter regions.

The researchers examined satellite imagery, air temperature data and phenology (plant life cycle) models for 85 large cities and their surrounding rural areas from 2001 through 2014 to better understand changes in tree leaf emergence, also called budburst, on a broad scale across the United States. The study can help scientists improve their modeling of the potential impacts of future warming.

The results are detailed in an article in the Proceedings of the National Academy of Sciences of the United States of America.

In all areas, whether with a cold, medium or hot climate to begin with, tree budburst happened earlier as temperatures trended higher. The analysis found that the link between early budburst and temperature was most pronounced in cities with intense urban heat islands--areas characterized by significantly higher air temperatures vs. adjacent regions.

"Even in varied environments, the results for these 85 cities were largely consistent, providing robust evidence of urban warming's impacts on spring phenology," said Jiafu Mao, who led the project at ORNL.

During the period under study, the scientists found several interesting trends and potential implications depending on whether the climate was cold or hot at the start.

"The key finding is that if you start in a cold climate, you end up with more stability from year to year," said Peter Thornton, leader of the Terrestrial Systems Modeling Group in ORNL's Environmental Sciences Division. Conversely, in warmer climates the start-of-season tended to be more variable year over year, he noted.

"Seasonal temperatures aren't consistent every year; they vary even as the trend is for warming overall," Thornton said. "So one of the things we looked at is how sensitive plant phenology is to those up-and-down, yearly variations. Does the start-of-season change a lot each year as the temperature changes, or does it change a little bit?"
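The two quantities Thornton describes can be sketched numerically: the year-to-year variability of the start of season (a standard deviation of budburst dates) and its sensitivity to temperature (a regression slope of budburst day on spring temperature). The city data below are entirely invented to illustrate the contrast the study reports, with the hot city showing larger interannual swings.

```python
# Toy comparison of start-of-season variability in a cold vs a hot city.
# All temperature and budburst numbers are invented for illustration.
import statistics

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# (mean spring temperature in degC, budburst day of year), one pair per year
cold_city = [(4, 130), (5, 127), (6, 124), (5, 128), (4, 131)]
hot_city = [(16, 70), (18, 58), (17, 66), (19, 52), (16, 72)]

for name, data in [("cold", cold_city), ("hot", hot_city)]:
    temps, days = zip(*data)
    print(name,
          "interannual sd =", round(statistics.stdev(days), 1), "days;",
          "sensitivity =", round(slope(temps, days), 1), "days per degC")
```

In this toy data the hot city's budburst dates swing far more from year to year than the cold city's, mirroring the study's key finding; a negative slope means warmer springs bring earlier leafing.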

There may be other factors at play in long-term phenology changes, the researchers said, including precipitation and humidity levels, whether trees are getting the "chilling" they need in the winter to end their dormancy as expected, and even the impact of nighttime light pollution in large cities.

Refining Earth system models

"The contrasting results of spring phenology in cold vs. warm regions was very interesting and surprising," Mao said. "It suggests that phenological changes under future warming will depend on geographical location and background climate, and thus different modeling strategies should be considered."

The latest research fills the need for more detailed diagnostics that can be plugged into Earth system models in order to help scientists better understand and predict environmental changes. The results of the phenology study are being fed into DOE's Energy Exascale Earth System Model (E3SM) to refine and develop next-generation models of phenology-climate feedbacks.

"Our current implementation in the Earth system model of how these phenological changes in the start of the season respond to warming is very simplistic," Thornton noted. "Everything kind of behaves the same. But these data suggest that the real world is a lot more complicated than that. It makes a difference whether you start out in a warm climate or a cold climate.

"We shouldn't expect all parts of the country or all parts of the world to respond similarly, and we need to incorporate that complexity within our model to get the best possible predictions for the future," Thornton said.

The project builds on ORNL's research in this area, which previously found, for instance, a positive link between human activity and enhanced vegetation growth on a continental scale in the Northern Hemisphere.

Credit: 
DOE/Oak Ridge National Laboratory

Using sound and light to generate ultra-fast data transfer

image: This is the terahertz quantum cascade laser on its mounting. A pair of tweezers shows how small the device is.

Image: 
University of Leeds

Researchers have made a breakthrough in the control of terahertz quantum cascade lasers, which could lead to the transmission of data at the rate of 100 gigabits per second - around one thousand times quicker than Fast Ethernet operating at 100 megabits a second.

What distinguishes terahertz quantum cascade lasers from other lasers is the fact that they emit light in the terahertz range of the electromagnetic spectrum. They have applications in the field of spectroscopy where they are used in chemical analysis.

The lasers could also eventually provide ultra-fast, short-hop wireless links where large datasets have to be transferred across hospital campuses or between research facilities at universities - or in satellite communications.

To be able to send data at these increased speeds, the lasers need to be modulated very rapidly: switching on and off or pulsing around 100 billion times every second.

Engineers and scientists have so far failed to develop a way of achieving this.

A research team from the University of Leeds and the University of Nottingham believe they have found a way of delivering ultra-fast modulation by combining the power of acoustic and light waves. They have published their findings today (February 11th) in Nature Communications.

John Cunningham, Professor of Nanoelectronics at Leeds, said: "This is exciting research. At the moment, the system for modulating a quantum cascade laser is electrically driven - but that system has limitations.

"Ironically, the same electronics that delivers the modulation usually puts a brake on the speed of the modulation. The mechanism we are developing relies instead on acoustic waves."

A quantum cascade laser is very efficient. As an electron passes through the optical component of the laser, it goes through a series of 'quantum wells' where the energy level of the electron drops and a photon or pulse of light energy is emitted.

One electron is capable of emitting multiple photons. It is this process that is controlled during the modulation.

Instead of using external electronics, the teams of researchers at Leeds and Nottingham Universities used acoustic waves to vibrate the quantum wells inside the quantum cascade laser.

The acoustic waves were generated by the impact of a pulse from another laser onto an aluminium film. This caused the film to expand and contract, sending a mechanical wave through the quantum cascade laser.

Tony Kent, Professor of Physics at Nottingham, said: "Essentially, what we did was use the acoustic wave to shake the intricate electronic states inside the quantum cascade laser. We could then see that its terahertz light output was being altered by the acoustic wave."

Professor Cunningham added: "We did not reach a situation where we could stop and start the flow completely, but we were able to control the light output by a few percent, which is a great start.

"We believe that with further refinement, we will be able to develop a new mechanism for complete control of the photon emissions from the laser, and perhaps even integrate structures generating sound with the terahertz laser, so that no external sound source is needed."

Professor Kent said: "This result opens a new area for physics and engineering to come together in the exploration of the interaction of terahertz sound and light waves, which could have real technological applications."

Credit: 
University of Leeds

DNA misfolding in white blood cells increases risk for type 1 diabetes

PHILADELPHIA -- It's known that genetics, or an inherited genome, is a major determinant of one's risk for autoimmune diseases, like Type 1 diabetes. In human cells, a person's genome--about six feet of DNA--is compressed into the micrometer space of the nucleus via a three-dimensional folding process. Specialized proteins decode the genetic information, reading instructions from our genome in a sequence-specific manner. But what happens when a sequence variation leads to the misinterpretation of instructions, causing pathogenic misfolding of DNA inside the nucleus? Can the different folding patterns make us more susceptible to autoimmune diseases?

Now, in a first-of-its-kind study, researchers at Penn Medicine found, in mice, that changes in DNA sequence can trigger the chromosomes to misfold in a way that puts one at a heightened risk for Type 1 diabetes. The study, published today in Immunity, revealed that differences in DNA sequences dramatically changed how the DNA was folded inside the nucleus, ultimately affecting the regulation--the induction or repression--of genes linked to the development of Type 1 diabetes.

"While we know that people who inherit certain genes have a heightened risk of developing Type 1 diabetes, there has been little information about the underlying molecular factors that contribute to the link between genetics and autoimmunity," said the study's senior author Golnaz Vahedi, PhD, an assistant professor of Genetics in the Perelman School of Medicine (PSOM) at the University of Pennsylvania and a member of the Institute for Immunology and the Penn Epigenetics Institute. "Our research, for the first time, demonstrates how DNA misfolding--caused by sequence variation--contributes to the development of Type 1 diabetes. With a deeper understanding, we hope to form a foundation to develop strategies to reverse DNA misfolding and change the course of Type 1 diabetes."

Autoimmune diseases, which affect as many as 23.5 million Americans, occur when the body's immune system attacks and destroys healthy organs, tissues and cells. There are more than 80 types of autoimmune diseases, including rheumatoid arthritis, inflammatory bowel disease, and Type 1 diabetes. In Type 1 diabetes, the pancreas stops producing insulin, the hormone that controls blood sugar levels. White blood cells called T lymphocytes play a significant role in the destruction of insulin-producing pancreatic beta cells.

Until now, little has been known about the extent to which sequence variation could cause unusual chromatin folding and, ultimately, affect gene expression. In this study, Penn Medicine researchers generated ultra-high resolution genomic maps to measure the three-dimensional DNA folding in T lymphocytes in two strains of mice: a diabetes-susceptible and diabetes-resistant mouse strain. The two strains of mice have six million differences in their genomic DNA, which is similar to the number of differences in the genetic code between any two humans.

The Penn team, led by Vahedi and co-first authors Maria Fasolino, PhD, a postdoctoral fellow in Immunology, and Naomi Goldman, a graduate student in the PSOM, found that previously defined insulin-diabetes-associated regions were also the most hyperfolded regions in the T cells of diabetic mice. Researchers then used a high-resolution imaging technique to corroborate the genome misfolding in diabetes-susceptible mice. Importantly, they found the change in folding patterns occurred before the mouse was diabetic. Researchers suggest that the observation could serve as a diagnostic tool in the future if investigators are able to identify such hyperfolded regions in the T cells of humans.

After establishing where the chromatin is misfolded in the T cells of mice, the researchers sought to study gene expression in humans. Through a collaboration with the Human Pancreas Analysis Program, they discovered that homologous genes in humans also demonstrated increased expression levels in immune cells infiltrating the human pancreas.

"While much more work is needed, our findings push us closer to a more mechanistic understanding of the link between genetics and autoimmune diseases--an important step in identifying factors that influence our risk for developing conditions, like Type 1 diabetes," Vahedi said.

Credit: 
University of Pennsylvania School of Medicine

NYUAD researchers find new method to allow corals to rapidly respond to climate change

image: Ras Ghanada reef in Abu Dhabi

Image: 
NYU Abu Dhabi

Fast Facts:

Discovery by NYU Abu Dhabi and KAUST researchers points to new avenue for corals to rapidly respond to climate change

The research, published in Nature Climate Change, demonstrates that epigenetic modifications in reef-building corals can be transmitted from 'parent' corals to their offspring

The findings suggest that generating pre-adapted coral colonies and larvae via epigenetic conditioning could help create seeding populations that naturally repopulate dying reefs

Epigenetics means 'above' or 'on top of' genetics. It refers to external modifications to DNA that turn genes 'on' or 'off.' These modifications do not change the DNA sequence, but instead, they affect how cells 'read' genes.

Abu Dhabi, UAE (February 10, 2020) - For the first time, a team of marine biology and environmental genomics researchers at NYU Abu Dhabi (NYUAD) and KAUST (King Abdullah University of Science and Technology) have demonstrated that epigenetic modifications in reef-building corals can be transmitted from parents to their offspring. This discovery, reported in a new study in the journal Nature Climate Change, not only enhances the biological understanding of corals but also opens up new approaches to stem the loss of this foundation species of marine ecosystems. The findings suggest that generating pre-adapted coral colonies and larvae via epigenetic conditioning would enable the creation of seeding populations that can naturally repopulate dying reefs.

Climate change is causing significant declines in coral reefs worldwide. The long generation times of corals have inspired fears that corals may not be able to genetically adapt in time to overcome the rapid pace of climate change. However, in the study "Intergenerational epigenetic inheritance in reef-building corals," the NYU Abu Dhabi team led by NYUAD Assistant Professor of Biology Youssef Idaghdour; NYUAD Associate Professor of Biology John Burt; former NYUAD postdoctoral associate Emily Howells; and colleagues from King Abdullah University of Science and Technology found that a species of stony coral, Platygyra daedalea, commonly referred to as 'brain coral', has the ability to pass to its offspring epigenetic marks that help them quickly adapt to adverse environmental changes.

Epigenetics allows the modification of the "context" of the genome without altering the actual genetic code. Instead, it alters how it is used - such as when, and to what degree, a gene will be turned on or off.

The researchers collected colony fragments of the stony coral from two different ocean environments (Abu Dhabi and Fujairah) and sampled adults, sperm, and larvae from reciprocal crosses. Epigenetic profiles of the corals were determined by whole-genome sequencing. The team identified a strong environmental signature in the epigenome of the coral, as well as epigenetic factors strongly associated with the extremely hot and saline environment, and highlighted their potential effect on coral fitness.

"What we are finding is a surprising amount of potential for both male and female corals to transmit their epigenome to their offspring," said Idaghdour. "This research shows the capacity of coral parents to positively impact the resilience of their offspring in environments that are changing too quickly. Our learnings in this exciting field hold great potential for the preservation of these unique ecosystems for future generations."

"Climate change represents the most pressing threat to coral reefs globally, with most research estimating widespread loss of corals within the next century. Given the long generation times of corals, it has commonly been accepted that the rapid rate of climate change is outpacing the capacity for corals to genetically adapt to cope with increasing temperatures," said Burt. "This study demonstrates that epigenetic changes that enhance thermal tolerance can be passed to offspring, dramatically enhancing corals' capacity to rapidly respond to environmental change."

Credit: 
New York University

Blasting 'forever' chemicals out of water with a vortex of cold plasma

image: Gliding arc plasmatron, developed at Drexel University, is being used to eliminate PFAS contamination from drinking water.

Image: 
Drexel University

Researchers from Drexel University have found a way to destroy stubbornly resilient toxic compounds, ominously dubbed "forever chemicals," that have contaminated the drinking water of millions across the United States.

These chemicals, commonly called PFAS - short for per- and polyfluoroalkyl substances - were used for some 60 years as coatings for nonstick pans and waterproof clothing and in fire-fighting foams. In the last two decades, concerns about health risks associated with exposure to PFAS - from cancer and thyroid problems to low birthweights and high blood pressure - have led to federal bans, monitoring mandates and massive remediation efforts.

But eliminating these chemicals - originally designed to resist the high temperatures of fuel-driven fires - from drinking water has proven to be nearly impossible.

"The carbon-fluoride chemical bonds in PFAS compounds are extremely stable, so it's impossible to break down these compounds using standard treatment methods," said Christopher Sales, PhD, an associate professor of environmental engineering at Drexel who studies degradation of environmental contaminants.

Sales is part of a team from Drexel's College of Engineering and the C. & J. Nyheim Plasma Institute exploring how a blast of charged gas, called cold plasma, can be used to eliminate PFAS from water. His group recently published its research in the journal Environmental Science: Water Research & Technology.

"This has become an urgent issue because we are seeing PFAS turn up everywhere, not just in the water and soil near airports that used it in fire-fighting foam, but also in many consumer goods like stain-resistant fabrics and food packaging designed to repel liquids and grease," Sales said. "Because these chemicals do not readily biodegrade, PFAS is leaching into ground and surface water from products that have been sitting in landfills for decades."

Though precise exposure routes have not yet been identified, studies suggest that PFAS has been detected in the bloodstream of as much as 98 percent of the U.S. population through some combination of direct exposure, drinking water contamination and bioaccumulation.

The Department of Defense is currently spending billions of dollars to clean up contaminated soil and water supplies around hundreds of military bases where PFAS fire-fighting foam had been used. But the best efforts are only a stopgap, according to Sales.

"The current standard for dealing with PFAS-contaminated water is activated carbon filters," Sales said. "But the problem with filtration is that it only collects the PFAS; it doesn't destroy it. So, unless the filters are incinerated at high temperatures, the spent filters become a new source of PFAS that can make its way back into the environment through landfill runoff and seepage."

To truly eliminate the chemicals, treatment needs to split apart the carbon-fluoride bond that is key to the compounds' staying power. Different types of PFAS, which now number in the thousands, are composed of carbon-fluoride chains of different lengths. The primary goal of proposed decontamination methods is to break the chain into smaller pieces to render it inert. A secondary, and more challenging, target is completely removing the fluoride atoms from the compounds - an achievement called defluorination.

One way to remove PFAS from water is by elevating its temperature, which raises the activity of its atoms enough to stretch the carbon-fluoride bond to its breaking point. Unfortunately, it takes a lot more than boiling to get rid of PFAS, according to Sales.

"To break the extremely stable bond between carbon and fluoride in PFAS, you need to raise the temperature of the compounds to at least 1,000 Celsius - so 10 times the temperature of boiling water," Sales said. "But that's clearly not feasible for water treatment operations, due to the massive amount of energy it would consume."

The Drexel team is proposing the use of highly energized gas, or plasma, as a way to activate the PFAS atoms without heating the water.

In non-equilibrium, or "cold" plasma, an electromagnetic field is used to excite the electrons in a gas without raising its overall temperature. In a common example of cold plasma, a fluorescent light, electrons are excited to the point where they emit visible light while the gas itself remains at room temperature.

Researchers at the Nyheim Plasma Institute have been harnessing non-equilibrium plasma technology for decontamination and sterilization of produce and meat and in health care settings.

Turning it loose on PFAS is the chemical equivalent of using a blender to make a smoothie.

First, a device called a gliding arc plasmatron creates a rotating electromagnetic field that activates electrons of gas bubbles in the water. The high-energy electrons split apart chemical species in the water and begin to emit ultraviolet radiation. Ultimately, the spinning vortex of atoms, ions and radiation reaches a level of activity high enough to sever the carbon-fluoride bond in the PFAS compounds - all without raising the temperature of the water.

In just an hour of treatment that uses little more energy than it takes to boil a tea kettle, the gliding arc plasma treatment can eliminate more than 90 percent of PFAS from the water and defluorinate about a quarter of the compounds, according to the group's recently published study.
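The tea-kettle comparison can be put on a rough numerical footing. A back-of-envelope sketch, assuming a typical 1.5 L kettle heated from 20 degC to boiling (both figures are assumptions for illustration, not from the study):

```python
# Back-of-envelope energy to boil a kettle of water, as a scale reference
# for the plasma treatment's energy use. Kettle volume and starting
# temperature are assumed values, not figures from the Drexel study.
SPECIFIC_HEAT_WATER = 4186  # J/(kg*K)
mass_kg = 1.5               # ~1.5 L of water (assumed)
delta_t = 100 - 20          # degC rise from tap temperature to boiling

energy_j = mass_kg * SPECIFIC_HEAT_WATER * delta_t
energy_kwh = energy_j / 3.6e6  # convert joules to kilowatt-hours
print(f"~{energy_j / 1000:.0f} kJ, or ~{energy_kwh:.2f} kWh")
```

That works out to roughly half a megajoule, on the order of a tenth of a kilowatt-hour, which illustrates why the reported energy budget is modest compared with heating water to 1,000 degC.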

"This is just one example of how effectively and energy efficiently cold plasma technology can be used to address difficult chemical contamination problems," said Alexander Fridman, PhD, director of the Nyheim Plasma Institute. "Cold plasma has the potential to help us eliminate a variety of chemical toxins that threaten our food and drinking water supplies."

While plasma treatment methods for PFAS removal have been tested by other groups, thus far none of them lend themselves to being easily scaled up for use at large treatment facilities, according to the Drexel researchers.

"Now that we have proven the effectiveness of cold plasma water treatment, our next step is to continue testing it on a larger scale," Fridman said. "We also believe the technology could be adjusted to treat contaminated soil and to achieve near-complete defluorination of PFAS compounds."

Credit: 
Drexel University