Tech

An alternate savanna

When civil war broke out in Mozambique more than 40 years ago, it largely spelled doom for animals in Gorongosa National Park, a 1,500-square-mile reserve on the floor of the southern end of the Great African Rift Valley, in the heart of the country. As the decades-long fighting spilled over into the reserve, many of the creatures became casualties of the conflict.

Throughout the war and even for some time after, food insecurity drove people to kill the animals to feed themselves. The hunting and poaching were hardest on the large mammals.

"More than 90% of the large mammals in the park were wiped out," said UC Santa Barbara ecologist Kaitlyn Gaynor, a postdoctoral researcher at UCSB's National Center for Ecological Analysis and Synthesis (NCEAS). After the war, a massive recovery effort was launched to repair and restore the park in the hopes that the animals would make a comeback.

With the park now three decades post-war, it appears the animal populations have recovered. While some have been reintroduced, most have simply rebounded from remnant post-war populations thanks to ongoing conservation efforts.

But for all the growing abundance of animals in the park, questions about the ecological consequences of the war remained for Gaynor and her team, which included coauthors Josh Daskin (U.S. Fish and Wildlife Service), Lindsey Rich (California Department of Fish and Wildlife) and senior author Justin Brashares (UC Berkeley).

"How similar is this new system to pre-war conditions, or to African savannas that haven't seen this major shock?" These were the questions the researchers sought to address, using an array of 60 camera traps to document the comings and goings of the animals of Gorongosa.

Their results are published in the journal Animal Conservation.

Animal Crossing

"There are few places in the world that have had such a dramatic reset, where animals have been pretty much wiped out and then have come back," Gaynor said. "It looks a lot like it did before the war, if you look at just the numbers of total animals, or the number of species present throughout the landscape."

The researchers identified 38 species during the three months of their survey, which puts Gorongosa's biodiversity on par with other African savannas, such as the Moremi Game Reserve in Botswana's Okavango Delta and Serengeti National Park in Tanzania.

But that's where the similarity ends.

"When you take a closer look at the distribution of species, it's a bit out of whack," Gaynor said. The large herbivores that were dominant before the war -- iconic African animals like zebra, wildebeest and hippopotamus -- were rare. Large carnivores were rarer still, with only lions remaining after the war. The savanna now belonged to baboons, warthogs, bushbuck and especially waterbuck, which dominated the survey.

"Waterbuck have been reproducing exponentially," Gaynor said, adding that it remained to be seen whether the unchecked population might crash and stabilize, or if their dominance signaled a "new normal" for the park.

Additionally, in the first systematic study to focus on smaller predators in the park, the researchers also found a high diversity of mesopredators -- housecat-sized animals such as civets, mongooses and genets -- which were widespread throughout the park.

"There may have been a 'mesopredator release,' where in the absence of apex predators, the smaller predators' populations can grow because they're not facing competition, or they're not being preyed upon by the larger carnivores," Gaynor explained.

All of this is happening against a backdrop of environmental change: Tree cover increased while the herbivores (especially elephants) were absent, but with their return and increased feeding pressure, the landscape might shift again, potentially influencing which species flourish. According to the researchers, a variety of tree cover is important for promoting the diversity of the animals.

Time will tell whether the distribution of species in this park will return to pre-war levels, or if they will level off at some other stable state. Since the study was conducted, African wild dogs and leopards have been reintroduced in an effort to rebalance the ecosystem. The slow return of large carnivores is bound to shape the dynamics of Gorongosa's animal community, and the researchers are hoping to document those and other developments in future studies.

"Our study represents the first data point in what will hopefully be a long-term, ongoing camera trap monitoring effort," Gaynor said. "Gorongosa has been a really remarkable conservation success story, but I think it's also pretty interesting how the system has recovered asymmetrically. There remain questions about the causes and consequences of that asymmetry, and how the wildlife community is going to change in the future, given ongoing transformations to the landscape."

Credit: 
University of California - Santa Barbara

LED lights found to kill coronavirus: Global first in fight against COVID-19

Researchers from Tel Aviv University (TAU) have proven that the coronavirus can be killed efficiently, quickly, and cheaply using ultraviolet (UV) light-emitting diodes (UV-LEDs). They believe that the UV-LED technology will soon be available for private and commercial use.

This is the first study conducted on the disinfection efficiency of UV-LED irradiation at different wavelengths or frequencies on a virus from the family of coronaviruses. The study was led by Professor Hadas Mamane, Head of the Environmental Engineering Program at TAU's School of Mechanical Engineering, Iby and Aladar Fleischman Faculty of Engineering. The article was published in the November 2020 issue of the Journal of Photochemistry and Photobiology B: Biology.

"The entire world is currently looking for effective solutions to disinfect the coronavirus," said Professor Mamane. "The problem is that in order to disinfect a bus, train, sports hall, or plane by chemical spraying, you need physical manpower, and in order for the spraying to be effective, you have to give the chemical time to act on the surface. Disinfection systems based on LED bulbs, however, can be installed in the ventilation system and air conditioner, for example, and sterilize the air sucked in and then emitted into the room.

"We discovered that it is quite simple to kill the coronavirus using LED bulbs that radiate ultraviolet light," she explained. "We killed the viruses using cheaper and more readily available LED bulbs, which consume little energy and do not contain mercury like regular bulbs. Our research has commercial and societal implications, given the possibility of using such LED bulbs in all areas of our lives, safely and quickly."

The researchers tested the optimal wavelength for killing the coronavirus and found that a wavelength of 285 nanometers (nm) was almost as efficient at disinfecting the virus as a wavelength of 265 nm, requiring less than half a minute to destroy more than 99.9% of the coronaviruses. This result is significant because 285 nm LED bulbs cost much less than 265 nm bulbs and are also more readily available.
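As a quick sanity check on those figures (a back-of-the-envelope sketch, not taken from the paper), a 99.9% reduction corresponds to a 3-log10 reduction, and under an assumed first-order inactivation model the required exposure time scales inversely with an assumed rate constant.

```python
import math

def log_reduction(surviving_fraction: float) -> float:
    """Convert a surviving fraction of virus into a log10 reduction value."""
    return -math.log10(surviving_fraction)

# ">99.9% destroyed" means a surviving fraction below 0.001, i.e. at least a 3-log reduction
print(log_reduction(0.001))  # 3.0

# Hypothetical first-order decay under UV exposure: N(t) = N0 * exp(-k * t),
# so the time to reach a given log reduction is t = reduction * ln(10) / k.
k = 0.25  # assumed inactivation rate constant in 1/s, for illustration only
print(3.0 * math.log(10) / k)  # ~27.6 s for this assumed k
```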

Eventually, as the science develops, the industry will be able to make the necessary adjustments and install the bulbs in robotic systems or air conditioning, vacuum, and water systems, and thereby be able to efficiently disinfect large surfaces and spaces. Professor Mamane believes that the technology will be available for use in the near future.

It is important to note that it is very dangerous to try to use this method to disinfect surfaces inside homes. To be fully effective, a system must be designed so that a person is not directly exposed to the light.

In the future, the researchers will test their unique combination of integrated damage mechanisms, along with other recently developed ideas for combining efficient direct and indirect damage to bacteria and viruses on different surfaces, in air and in water.

Credit: 
American Friends of Tel Aviv University

'Windows of opportunity' crucial for cutting Chesapeake nutrient, sediment loads

image: Researchers analyzed eight years of data from 108 sites in the Chesapeake Bay Program's nontidal monitoring network, looking at daily-scale records of flow and corresponding loads of nitrogen, phosphorus and suspended sediment.

Image: 
Heather Preisendanz research group/Penn State

The vast majority of nutrients and sediment washed into streams flowing into the Chesapeake Bay are picked up by deluges from severe storms that occur on relatively few days of the year. That is the conclusion of a new study led by Penn State researchers, who say it offers clues for cleaning up the impaired estuary.

"A small percentage of locations and events contribute to the vast majority of total annual pollution loads entering the bay," said Heather Preisendanz, associate professor of agricultural and biological engineering, College of Agricultural Sciences. "These findings stress the importance of concentrating our efforts on 'hot moments' -- not just 'hot spots' -- across impaired watersheds to achieve water-quality-restoration goals."

Researchers analyzed eight years of data from 108 sites in the Chesapeake Bay Program's nontidal monitoring network. They looked at daily-scale records of flow and corresponding loads of total nitrogen, total phosphorus and total suspended sediment at each gauging station from 2010 through 2018. Then, in an innovative move, they applied a formula normally used in economics to the data to determine the distribution of pollution loads throughout the years.

"Using Lorenz Inequality and the Gini Coefficient, both of which first were used in the early 1900s to quantify inequity in wealth distribution, we were able to measure the degree of inequality of nutrient and sediment loads across the study years," said Preisendanz. "This approach allowed us to identify periods of time and corresponding flow conditions that must be targeted to achieve needed load-reduction goals across the watershed."

Recently published in Environmental Research Letters, the study's conclusions make a strong case for watershed planners and managers to use a temporal framework to develop low- and high-flow targets for nitrogen, phosphorus and sediment loads specific to each watershed within the bay's 64,000-square-mile basin. The seven states in the Chesapeake's watershed have been federally mandated since 2010 to continually reduce nutrient and sediment loads reaching the bay.

The study portends a change for the bay, Preisendanz contends, because until now, processes for determining how to reduce total annual pollution loads at a watershed scale have targeted spatial, but not temporal, components of inequality -- hot spots but not hot moments.

"I think this offers some insight as to why we haven't met goals for restoring the bay's water quality," she said. "There's been a lot of frustration around how much time and money has been spent and the number of best-management practices that have been adopted. We're still significantly behind where we need to be -- especially in Pennsylvania."

Preisendanz also suggested the research may inject a badly needed dose of pragmatism into the debate about whether and how the Chesapeake can be cleaned up, using best-management practices such as properly functioning streamside buffers, streamside fencing, nutrient management plans for farms, continuous no-till crop cultivation and cover crops.

"Now that we know the dynamics of nutrient and sediment transport across the bay watershed, we may need to think differently about how we approach our goals," she said. "If the reality is that we can't deal with the highest flows from severe storms -- which, by the way, are becoming more intense due to climate change -- then we need to design a system that is more efficient at achieving load-reduction goals during low flows."

One of the team's hopes is that because its analysis was based on an approach that is easily understandable to a broad audience -- using the analogy of temporal inequality to income inequality -- the temporal targeting framework will be widely adopted by regulators, Preisendanz explained. People understand that inequality is ubiquitous not only in human-impacted systems, but also in natural systems, she pointed out.

"I like to ask people how unequally the number of strawberries produced in their garden was spread across their strawberry plants, because everyone has a super-productive and super-unproductive plant," she said "That analogy is exactly transferable to understanding hot spots and hot moments, and why it is so important to manage them both. The team embraces the notion that in every challenge, there lies an opportunity."

These findings show what successfully reducing pollution loads in waterways feeding the bay would look like in the real world, Preisendanz noted. "Rather than an 'everything, everywhere, all-the-time' approach, focusing on hot spots and hot moments reduces the problem to finding 'the right solutions in the right places that work at the right time.'"

Credit: 
Penn State

Compound derived from thunder god vine could help pancreatic cancer patients

PHOENIX, Ariz. -- Dec. 14, 2020 -- The results of a pre-clinical study led by researchers at the Translational Genomics Research Institute (TGen), an affiliate of City of Hope, suggest how a compound derived from the thunder god vine -- an herb used in China for centuries to treat joint pain, swelling and fever -- is able to kill cancer cells and potentially improve clinical outcomes for patients with pancreatic cancer.

The medicinal plant's key ingredient, triptolide, is the basis of a water-soluble prodrug called Minnelide, which appears to attack pancreatic cancer cells and the cocoon of stroma surrounding the tumor that shields it from the body's immune system. Investigators recently published the study results in the journal Oncogenesis.

The study found that the compound's mechanism of action is the ability of triptolide (Minnelide) to disrupt what are known as super-enhancers, strings of DNA needed to maintain the genetic stability of pancreatic cancer cells and the cancer-associated-fibroblasts that help make up the stroma surrounding the cancer.

"The cancer cells rely on super-enhancers for their growth and survival," said Dr. Haiyong Han, a Professor in TGen's Molecular Medicine Division and one of the study's senior authors.

"We found that by disrupting these super-enhancers triptolide not only attacks the cancer cells, but also the stroma, which helps accelerate cancer cell death.

"While triptolide has been known to be a general transcriptional inhibitor and a potent antitumor agent, we are the first to report its role in modulating super-enhancers to regulate the expression of genes, especially cancer-causing genes," said Dr. Han, who also is head of the basic research unit in TGen's Pancreatic Cancer Program.

Pancreatic cancer is the third leading cause of cancer-related death in the U.S., annually killing more than 47,000 Americans.

"There is an urgent need to identify and develop treatment strategies that not only target the tumor cells, but can also modulate the stromal cells," said Dr. Daniel Von Hoff, TGen Distinguished Professor and another senior author of the study.

"Based on our findings, using modulating compounds such as triptolide to reprogram super-enhancers may provide means for effective treatment options for pancreas cancer patients," said Dr. Von Hoff, considered one of the nation's leading authorities on pancreatic cancer.

Thunder god vine (Tripterygium wilfordii), also known as léi gōng téng, is native to China, Japan and Korea. Traditional Chinese medicine has used the vine for more than 2,000 years as a treatment for everything from fever to inflammation and autoimmune diseases, such as multiple sclerosis and rheumatoid arthritis. The chemical compound triptolide is among the more than 100 bioactive ingredients derived from the thunder god vine.

Credit: 
The Translational Genomics Research Institute

Virtual reality applied to rehabilitation for stroke and neurodegenerative disease patients

image: fMRI scans of a stroke patient before and after rehabilitation using non-immersive virtual reality software

Image: 
Raphael Casseb/UNICAMP

By José Tadeu Arantes | Agência FAPESP – Virtual reality-based rehabilitation programs are becoming an important complement to conventional motor therapy for stroke patients and individuals with neurodegenerative diseases. Immersion in virtual environments stimulates several sensory systems, especially sight and hearing, and intensifies central nervous system information input and output.

“The technology is expected to increase brain connectivity by stimulating the new neural connections needed to repair the losses caused by injury or by the patient’s clinical condition,” Alexandre Brandão, a researcher at the University of Campinas’s Physics Institute (IFGW-UNICAMP), told Agência FAPESP. Brandão is also affiliated with the Brazilian Research Institute for Neuroscience and Neurotechnology (BRAINN), one of the Research, Innovation and Dissemination Centers (RIDCs) supported by FAPESP.

Brandão is the lead author of the paper “Biomechanics Sensor Node for Virtual Reality: A Wearable Device Applied to Gait Recovery for Neurofunctional Rehabilitation”, distinguished with a Best Paper award in the Virtual Reality (VR) category at the 20th International Conference on Computational Science and its Applications (ICCSA 2020). Originally set to take place at the University of Cagliari in Italy, the conference was held online because of the pandemic.

The study described in the paper resulted in the development of a wearable device called Biomechanics Sensor Node (BSN) that captures user data and controls virtual environments, as well as a new software solution integrating the BSN with Unity Editor, one of the most widely used game engines and virtual world-building programs. “Integration of the wearable with the Unity software means patients undergoing motor rehabilitation can interact with VR environments while the therapist views data for the movements performed during the session,” Brandão explained.

The BSN consists of an inertial sensor that is placed on the user’s ankle, detects motion relative to stationary gait, and tracks the body in the three planes of movement. The signals generated are processed and transmitted to a smartphone, which is used to control an avatar that interacts with the virtual environment. “The patient’s actual movements may be very limited or small, but in the virtual context the captured and processed data generates complete movements by the avatar,” Brandão said. “The visual information gives patients the impression they’re able to perform these complete movements, and this can potentially activate more neural networks than conventional mechanical therapy.”
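The core idea, amplifying a restricted real movement into a complete virtual one, can be pictured as a simple rescaling. The sketch below is illustrative only; the class and parameter names are assumptions, not the actual BSN firmware or the Unity integration.

```python
from dataclasses import dataclass

@dataclass
class ImuSample:
    """One reading from an ankle-mounted inertial sensor (fields are illustrative)."""
    pitch_deg: float    # leg angle in the sagittal plane
    timestamp_s: float

def avatar_leg_angle(sample: ImuSample,
                     patient_range_deg: float = 10.0,
                     avatar_range_deg: float = 40.0) -> float:
    """Rescale a small, restricted patient movement onto the avatar's full gait range,
    so a limited real movement drives a complete virtual step."""
    fraction = max(-1.0, min(1.0, sample.pitch_deg / patient_range_deg))
    return fraction * avatar_range_deg

print(avatar_leg_angle(ImuSample(pitch_deg=4.0, timestamp_s=0.5)))  # 16.0 degrees for the avatar
```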

Functional magnetic resonance imaging (fMRI) scans show that the procedure activates specific brain regions associated with these fictitious movements. A next step entails performing clinical tests to measure the functional gains in the patient’s motor recovery. Another expected development will be to have the avatar engage in everyday activities, practice sports, or interact with other people in a multi-user virtual environment.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Robotic exoskeleton training improves walking in adolescents with acquired brain injury

image: Dr. Karunakaran is an associate research scientist in the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation and in the Center for Wearable Robots at New Jersey Institute of Technology, and Research Associate Professor in the Department of Physical Medicine and Rehabilitation at Rutgers New Jersey Medical School.

Image: 
Kessler Foundation/Jody Banks

East Hanover, NJ. December 14, 2020. A team of New Jersey researchers has shown that gait training using robotic exoskeletons improved motor function in adolescents and young adults with acquired brain injury. The article, "Kinetic gait changes after robotic exoskeleton training in adolescents and young adults with acquired brain injury" (doi: 10.1155/2020/8845772), was published October 28, 2020 in Applied Bionics and Biomechanics. It is available open access at: https://www.hindawi.com/journals/abb/2020/8845772/

The authors are Kiran Karunakaran, PhD, Naphtaly Ehrenberg, MS, and Karen Nolan, PhD, from the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation, and JenFu Cheng, MD, and Katherine Bentley, MD, from Children's Specialized Hospital. Drs. Karunakaran, Nolan, Cheng, and Bentley are also affiliated with the Department of Physical Medicine and Rehabilitation at Rutgers New Jersey Medical School.

Acquired brain injury often results in hemiparesis, causing significant deficits in balance and gait that adversely affect functional ambulation and participation in activities of daily living. Gait training using robotic exoskeletons offers an option for motor rehabilitation in individuals with hemiparesis, but few studies have been conducted in adolescents and young adults. Findings from a preliminary study in this age group show promise for this intervention, according to Drs. Karunakaran and Nolan.

Participants included seven individuals (aged 13 to 28 years) with acquired brain injury (ABI) and hemiparesis and one healthy control. The ABI group included individuals with brain injuries due to anoxia, trauma, and stroke. All participants received twelve 45-minute sessions of high-dose, repetitive gait training in a robotic exoskeleton (EksoGT, Ekso Bionics, Inc.) over a 4-week period. The gait training was administered by a licensed physical therapist supervised by a member of the research team.

"At the end of the 4-week training, participants had progressed to a more normal gait pattern," said Dr. Karunakaran, "including improved loading, a longer step length and faster walking speed" Although results are promising, Dr. Nolan acknowledged the limitations of the study, including small sample size and lack of a control group: "Further study is needed to confirm the training effect in this age group with ABI, optimal dosing for the training protocol, and the durability of functional improvements."

Credit: 
Kessler Foundation

Marine pollution: How do plastic additives dilute in water and how risky are they?

image: Plastic waste can contaminate water supply as it breaks down into microplastics and its additives leach out. These additives sometimes contain harmful chemicals which accumulate in the environment and in the food chain, leading to various ecological and public health risks

Image: 
Seung-Kyu Kim from Incheon National University

Plastic pollution has been at the center of environmental debate for decades. While it is well-known that plastic in the environment can break down into microplastics, be ingested by humans and other organisms, transfer up the food chain, and cause harm, this is only one part of the picture. Plastics are almost always enriched with additives, which makes them easier to process, more resistant, or more performant. This poses a second problem: when the polymer material is left in an environment for long durations, these additives can easily leach out and contaminate the environment.

This is the case with styrene oligomers (SOs), a type of plastic additive commonly found in polystyrene, which have been causing growing concern due to their effects on hormonal disruption and thyroid function. Authorities usually rely on scientists' risk assessments to evaluate such public hazards and determine the appropriate action to minimize their impact. But scientists struggle to accurately measure the proportion of leachable plastic additives (i.e., the bioavailable fraction), as it is difficult to discriminate between leached compounds and those still bound to the source plastic material. Adding to the problem is the fact that these additives can diffuse into the environment at different rates.

Now, in a new study, Prof. Seung-Kyu Kim from Incheon National University, Korea, and his team have come up with an assessment method that could change the game. Their findings are published in the Journal of Hazardous Materials.

Prof. Kim and his team collected surface sediments from an artificial lake connected to the Yellow Sea, with several potential sources of SO pollution from the surrounding land area and from marine buoys. "We were hoping that the distribution of SO contaminants in the lake's sediments would help identify their most likely source and measure the leachable amount from the source material," Prof. Kim explains. The scientists also examined one of these potential sources by dissecting a locally-used polystyrene buoy, measuring the concentration of SOs in it and how much leached out of it.

A key finding from their investigation was that SO dimers (SDs) and trimers (STs) dilute in water at different rates, so their composition in coastal sediments is very different from what can be observed in the buoys and other potential sources. This was especially true for STs: heavy, hydrophobic molecules that tended to remain in the source microplastics and moved at a slower rate in the lake. The lighter SD molecules leached out much more readily and traveled further. This meant that the SD to ST ratio would increase further away from the source of the contaminant.

Based on this dynamic, the researchers suggest using this ratio as a "reference index" to identify the source of SOs and to estimate the bioavailable fraction of SOs in a given sample. In Prof. Kim's words, this would "be critically important to the assessment of ecological and human risk caused by plastic additives", enabling more accurate risk assessments for potential exposure, and perhaps, formulating policies for disallowing certain more leachable, and therefore more hazardous, additives.
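As a toy illustration of how such a reference index behaves (hypothetical concentrations, not the study's measurements):

```python
# Hypothetical sediment samples at increasing distance from a suspected polystyrene source.
# Concentrations of styrene dimers (SD) and trimers (ST) are in arbitrary units.
samples = [
    {"distance_km": 0.1, "sd": 12.0, "st": 30.0},
    {"distance_km": 1.0, "sd": 10.0, "st": 12.0},
    {"distance_km": 5.0, "sd": 8.0, "st": 4.0},
]

for s in samples:
    ratio = s["sd"] / s["st"]
    print(f'{s["distance_km"]:>4} km from source: SD/ST = {ratio:.2f}')

# Because the lighter dimers leach and disperse more readily than the heavy, hydrophobic
# trimers, the SD/ST ratio is expected to rise with distance from the source material.
```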

Credit: 
Incheon National University

Exploring the relationship between nitrogen and carbon dioxide in greenhouse gas emissions

A University of Oklahoma-led interdisciplinary study of a decade-long experiment (1997-2009) at the University of Minnesota found that lower nitrogen levels in soil promoted the release of carbon dioxide from soils under high levels of atmospheric carbon dioxide, and could therefore contribute to rising atmospheric greenhouse gas concentrations and climate change.

"Soil microorganisms help extract carbon from non-living sources and make the carbon available to living organisms and play an important role in influencing future climate and carbon cycle feedbacks," said Jizhong Zhou, the OU director for the Institute for Environmental Genomics, a George Lynn Cross Research Professor in the College of Arts and Sciences, and an adjunct professor in the Gallogly College of Engineering.

Zhou and the international research team sought to better understand how regional differences in soil nitrogen levels resulting from pollution or natural soil variation could be affecting how soils release carbon dioxide and impact atmospheric carbon dioxide levels.

"The interactive effects of nitrogen and carbon dioxide on soil respiration, a measure of carbon dioxide released from decomposition in the soil, is particularly important for our future climate, but are not all well understood, due to the lack of long-term manipulative experiments of these two elements together," said Peter Reich, a Distinguished McKnight University Professor at the University of Minnesota.

In the study, published in the journal Proceedings of the National Academy of Sciences of the United States of America, the researchers found that in the last four years of the experiment, elevated carbon dioxide levels stimulated soil respiration twice as much under low as under high nitrogen supply, an effect not observed in earlier years.

"Our study highlights that low nitrogen supply gradually accelerates the amount of carbon dioxide released to the atmosphere through decomposition of soil detritus," said Sarah Hobbie, a Distinguished McKnight University Professor at the University of Minnesota. "Considering the worldwide nitrogen limitation in natural environments, heightened release of CO2 back to the atmosphere from soil may be pervasive under those conditions of persistent nitrogen limitation."

Credit: 
University of Oklahoma

Nanoengineered cement shows promise for sealing leaky gas wells

image: Pictured is a natural gas well in Pennsylvania. When wells become damaged or degraded, methane can potentially escape into the environment. Penn State researchers developed a new nanomaterial cement mixture to address this issue.

Image: 
Pa Department of Environmental Protection

Leaking natural gas wells are considered a potential source of methane emissions, and a new nanomaterial cement mixture could provide an effective, affordable solution for sealing these wells, according to a team of Penn State scientists.

"We have invented a very flexible cement that is more resistant to cracking," said Arash Dahi Taleghani, associate professor of petroleum engineering at Penn State. "That's important because there are millions of orphaned and abandoned wells around the world, and cracks in the casings can allow methane to escape into the environment."

When natural gas wells are drilled, cement is used to secure the pipe, or casing, to the surrounding rock, creating a seal that prevents methane from migrating into the shallow subsurface, where it could enter waterways, or the atmosphere, where it is a potent greenhouse gas, the scientists said.

Wells can extend miles underground, and over time, changing temperatures and pressures can degrade the cement, causing cracks to form. The scientists said repairs involve injecting cement into very narrow areas between the casing and rock, which requires special cement.

"In construction, you may just mix cement and pour it, but to seal these wells you are cementing an area that has the thickness of less than a millimeter, or that of a piece of tape," Dahi Taleghani said. "Being able to better pump cement through these very narrow spaces that methane molecules can escape from is the beauty of this work."

Adding graphite nanoplatelets -- a nearly two-dimensional form of graphite -- created a cement mixture that better filled these narrow spaces and was also stronger and more resilient, the scientists found. They recently reported their findings in the International Journal of Greenhouse Gas Control. Maryam Tabatabaei, a postdoctoral scholar in the John and Willie Leone Family Department of Energy and Mineral Engineering, also contributed to this research.

The scientists developed a multi-step process to uniformly distribute sheets of the nanomaterial into a cement slurry. By treating the graphite first with chemicals, the scientists were able to change its surface properties so the material would dissolve in water instead of repelling it.

"If we just pour this material in the water and mix it, these small particles have a tendency to stick together and form a conglomerate," Dahi Taleghani said. "If they are not dispersing evenly then the graphite is not as strong inside the cement."

The cement mixture can be used in active unconventional wells like those found in the Marcellus Shale gas play, or to seal orphaned and abandoned gas wells, the scientists said. It also shows promise for use in carbon dioxide capture and storage technology.

Graphite is more affordable than other nanomaterials previously used to bolster cement performance. In addition, very little of the material is needed to strengthen the cement, the scientists said.

"Considering the low cost of the amount of graphite nanoplatelets required for this test, this technology may provide an economic solution for industry to address possible cementing problems in the field," Dahi Taleghani said.

Credit: 
Penn State

UMaine-led research group finds that trees are out of equilibrium with climate

Forecasts of where plants and animals will live over time rely primarily on information about the species' current climate associations, but climate plays only a partial role.

Under climate change, there's growing interest in assessing whether trees and other species can keep pace with changing temperatures and rainfall, shifting where they are found, also known as their ranges, to track their suitable climates. To test this, a University of Maine-led research team studied the current ranges of hundreds of North American trees and shrubs, assessing the degree to which species are growing in all of the places that are climatically suitable. Researchers found evidence of widespread "underfilling" of these potential climatic habitats -- only 50% on average -- which could mean that trees are already at a disadvantage as the world continues to warm.

Benjamin Seliger, then a UMaine Ph.D. student with the Climate Change Institute, spearheaded the study with his doctoral adviser, Jacquelyn Gill, a UMaine associate professor of paleoecology and plant ecology. Brian McGill, a UMaine professor of biological sciences, and Jens-Christian Svenning, a macroecologist and biogeographer from Aarhus University in Denmark, also contributed.

The team used species distribution models to assess the degree to which 447 North American tree and shrub species "fill" their potential climatic ranges by comparing regions that are climatically suitable, known as potential ranges, against where the trees are actually found, or their realized ranges.

The team's research paper was published in the Journal of Biogeography.

Seliger, now a postdoctoral researcher at the Center for Geospatial Analytics at North Carolina State University, and co-authors discovered a significant difference between where the trees they studied could grow and where they actually grow, a measure known as range filling. The average range-filling value across all 447 species equaled 48.6%, indicating that, on average, trees are not found in about half of the areas that are climatically suitable for them, according to researchers.
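For readers who want the arithmetic spelled out, range filling is simply the share of climatically suitable area a species actually occupies. Below is a minimal sketch on toy gridded data (an assumed layout, not the study's SDM output):

```python
import numpy as np

def range_filling(realized: np.ndarray, potential: np.ndarray) -> float:
    """Fraction of climatically suitable grid cells a species actually occupies.

    Both inputs are boolean grids: `potential` marks cells a species distribution
    model predicts as climatically suitable, `realized` marks cells where the
    species is actually observed.
    """
    potential = potential.astype(bool)
    realized = realized.astype(bool)
    return (realized & potential).sum() / potential.sum()

# Toy example: a species occupies only part of its suitable range
potential = np.array([[1, 1, 1, 1], [1, 1, 1, 1]])
realized = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])
print(range_filling(realized, potential))  # 0.5, i.e. 50% range filling
```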

"We found tree ranges are more limited by non-climatic factors than expected, suggesting trees may not simply track warming climates." Seliger says.

Species distribution models (SDMs) are a common tool to predict how climate change will affect biodiversity and the future ranges of plants and animals. Various studies, including the one from the UMaine-led group, however, caution that because this tool assumes that species live in all areas that are climatically suitable, known as experiencing climatic equilibrium, it may not provide an accurate prediction of where species will be found in the future.

An SDM relies on what has been considered a foundational principle, "that geographic ranges generally appear to be in equilibrium with contemporary climate," according to researchers. Growing evidence suggests otherwise for many species, which experience climatic disequilibrium.

Seliger and his team found that North American trees and shrubs with large ranges tended to show much stronger evidence of climatic equilibrium, meaning they had high range filling. Small-ranged species, however, had much lower range filling overall, performing worse than predicted by a null model. According to researchers, that means small-ranged tree species, including many rare trees and species the International Union for the Conservation of Nature (IUCN) lists as vulnerable, will face additional challenges as they try to track their climates into the future.

The group also found that small-range species may be more limited by non-climatic influences, such as soils or pathogens. Conservation efforts for these plants and animals, therefore, should "account for a complex interplay of factors in addition to climate when preparing for the next century of global change," according to researchers.

Their findings support a growing body of evidence for climatic disequilibrium among various flora. According to researchers, the disequilibrium could be due to two factors: dispersal lags dating back to when glaciers covered large portions of North America 21,000 years ago, or non-climatic factors that may influence ranges more than previously appreciated, such as soil, competition with other plants, or symbiosis.

"It's been thought that if you zoom out to the scale of North America, climate was the most important factor in determining where species would be found. This study reveals some striking gaps in our knowledge; even at the scale of an entire continent, soils or other plants and animals may be playing an important role too. We used to think those were more important at the more local scale -- think of how the trees might change across two areas of your favorite park," Gill says. "All of this means that when it comes to plants, our predictive tools need to get a lot more sophisticated, if they're going to be useful for conservation."

Credit: 
University of Maine

Survivors of child abuse twice as likely to die young

image: Child maltreatment more than doubles the risk of dying during late adolescence and early adulthood, according to a world-first study.

Image: 
University of South Australia

Globally, the statistics are daunting. Across countries and communities, between 15 and 50 per cent of children are subject to serious abuse and neglect within their own families, most often at the hands of a parent.

Known as familial child maltreatment, this abuse has been linked by a large body of research to a raft of disturbing longer-term consequences for its victims - such as poor mental and physical health, diminished engagement with education, early substance misuse, involvement in crime, relationship instability and long-term under- or unemployment.

These distressing consequences reflect the direct impacts of child abuse and neglect; but also changes in how the brain develops and other physiological and relational responses to trauma.

What had not been studied, until now, is the impact of child maltreatment on the risk of death during late adolescence and early adulthood, a time when mental illness, suicide and substance use escalate.

In a world-first study from the University of South Australia, researchers have tracked the connection between abuse and neglect in childhood and risk of death in adolescence and early adulthood.

UniSA chief investigator, Professor Leonie Segal says the results are alarming.

"Young people who had a child abuse or neglect report to child protection, that met thresholds for serious concern, were more than twice as likely to die young than children who had never come to the attention of child protection services," Professor Segal says.

"Those who had been placed in out of home care, reflecting the highest and most urgent child maltreatment concerns, had more than four times the risk of dying before age of 33, if their first placement in care was after they turned three years old.

"These results demand our attention. We must ask - what we are doing, or failing to do in supporting children exposed to child abuse or profound neglect and their distressed families? How can we do better, from early childhood to stop this progression to an untimely death."

The research is part of a linked-data study on the impacts of child abuse and neglect (the iCAN project), drawing on a cohort of South Australians born since 1986 to estimate the health, mortality, education and intergenerational outcomes of child abuse and neglect.

In this mortality study researchers looked at the lives of 331,000 young South Australians, of whom one in five had some contact with Child Protective Services. From that group, 980 persons had died between the ages of 16 and 33 years.

Deaths from poisonings, alcohol and other substances, or where mental illness was a factor, were nearly five times as likely in those young people who had come to the attention of Child Protection Services, compared with those who had not, and death from suicide was nearly three times as likely.

In short, death rates among young people who have been victims of child abuse or neglect are high.

But, according to Prof Segal, that also means there is an opportunity to disrupt these tragic outcomes.

"Children exposed to child maltreatment are coming to the attention of early childhood workers, paediatricians, GPs, psychologists, psychiatrists, teachers and other service providers often presenting with challenging behaviours, disturbed emotional responses, learning difficulties and psychological distress, from an early age," Prof Segal says.

"All these children were known to child protection services. Yet not enough was, or still is being done to help distressed children, at the very time when we could change life trajectories.

"Recent studies have found a massive under-provision of mental health services for infants, children and their families in SA and more widely.

"We need to offer better-designed trauma and attachment-based services to all vulnerable parents and children, in friendly and welcoming settings. Mainstream platforms such as children's centres (early childhood), and maternal and child health services, could provide safe spaces for vulnerable families to access the high-quality inter-disciplinary therapeutic support they need.

"We need to upskill people working across the human services working with children, so that these services can support, rather than re-traumatise, troubled children and families, who present with a range of complex issues that require a highly expert response.

"The challenge is to better protect vulnerable children, not just from current harms, but from the potential extreme consequences of child abuse and neglect, into adulthood.

"Waiting until a first suicide attempt in a teenager or young adult, before intervening is a huge missed opportunity. We know which infants, children and adolescents are at risk; surely, we can offer the necessary intensive family-based support before even more traumatised young people are lost through early death.

"As a society we need to acknowledge our part in this tragedy."

Credit: 
University of South Australia

Quantum mysteries: Probing an unusual state in the superconductor-insulator transition

Scientists at Tokyo Institute of Technology (Tokyo Tech) have tackled the two-decade-old mystery of why an anomalous metallic state appears in the superconductor-insulator transition in 2D superconductors. Through experimental measurements of a thermoelectric effect, they found that a "quantum liquid state" of quantum vortices causes the anomalous metallic state. The results clarify the nature of the transition and could help in the design of superconducting devices for quantum computers.

The superconducting state, in which current flows with zero electrical resistance, has fascinated physicists since its discovery in 1911. It has been extensively studied not only because of its potential applications but also to gain a better understanding of quantum phenomena. Though scientists know much more about this peculiar state now than in the 20th century, there seems to be no end to the mysteries that superconductors hold.

A famous, technologically relevant example is the superconductor-insulator transition (SIT) in two-dimensional (2D) materials. If one cools down thin films of certain materials to near absolute-zero temperature and applies an external magnetic field, the effects of thermal fluctuations are suppressed enough so that purely quantum phenomena (such as superconductivity) dominate macroscopically. Although quantum mechanics predicts that the SIT is a direct transition from one state to the other, multiple experiments have shown the existence of an anomalous metallic state intervening between both phases.

So far, the origin of this mysterious intermediate state has eluded scientists for over two decades. That's why a team of scientists from the Department of Physics at Tokyo Tech, Japan, recently set out to find an answer to the question in a study published in Physical Review Letters. Assistant Professor Koichiro Ienaga, who led the study, explains their motivation, "There are theories that try to explain the origin of dissipative resistance at zero temperature in 2D superconductors, but no definitive experimental demonstrations using resistance measurements have been made to unambiguously clarify why the SIT differs from the expected quantum phase transition models."

The scientists employed an amorphous molybdenum-germanium (MoGe) thin film cooled to an extremely low temperature of 0.1 K and applied an external magnetic field. They measured a transverse thermoelectric effect through the film called the "Nernst effect," which can sensitively and selectively probe superconducting fluctuations caused by mobile magnetic flux. The results revealed something important: a "quantum liquid state" of quantum vortices causes the anomalous metallic state. The quantum liquid state is a peculiar state in which the particles are not frozen even at zero temperature because of quantum fluctuations.

Most importantly, the experiments uncovered that the anomalous metallic state emerges from quantum criticality; the peculiar broadened quantum critical region at zero temperature corresponds to the anomalous metallic state. This is in sharp contrast to the quantum critical "point" at zero temperature in the ordinary SIT. Phase transitions mediated by purely quantum fluctuations (quantum critical points) have been long-standing puzzles in physics, and this study puts us one step closer to understanding the SIT for 2D superconductors. Excited about the overall results, Ienaga remarks, "Detecting superconducting fluctuations with precision in a purely quantum regime, as we have done in this study, opens a new way to next-generation superconducting devices, including q-bits for quantum computers."

Now that this study has shed light on the two-decade-old SIT mystery, further research will be required to gain a more precise understanding of the contributions of the quantum vortices in the anomalous metallic state. Let us hope that the immense power of superconductivity will soon be at hand!

Credit: 
Tokyo Institute of Technology

UMBC team reveals possibilities of new one-atom-thick materials

image: Left to right: Fatih Ersan, Can Ataca, Gracie Chaney, Jaron Kropp, and Daniel Wines, all members of Ataca's research group, discuss their work on 2D materials. These materials are one-atom-thick sheets that can have useful properties for applications from computers to solar cells and wearable electronics.

Image: 
Marlayna Demond for UMBC

New 2D materials have the potential to transform technologies, with applications from solar cells to smartphones and wearable electronics, explains UMBC's Can Ataca, assistant professor of physics. These materials consist of a single layer of atoms bound together in a crystal structure. In fact, they're so thin that a stack of 10 million of them would only be 1 millimeter thick. And sometimes, Ataca says, less is more. Some 2D materials are more effective and efficient than similar materials that are much thicker.

Despite their advantages, however, 2D materials are currently difficult and expensive to make. That means the scientists trying to create them need to make careful choices about how they invest their time, energy, and funds in development.

New research by Daniel Wines, Ph.D. candidate in physics, and Ataca gives those scientists the information they need to pursue high-impact research in this field. Their theoretical work provides reliable information about which new materials might have desirable properties for a range of applications and could exist in a stable form in nature. In a recent paper published in ACS Applied Materials and Interfaces, they used cutting-edge computer modeling techniques to predict the properties of 2D materials that haven't yet been made in real life.

"We usually are trying to stay five or so years ahead of experimentalists," says Wines. That way, they can avoid going down expensive dead ends. "That's time, effort, and money that they can focus on other things."

The perfect mix

The new paper focuses on the stability and properties of 2D materials called group III nitrides. These are mixtures of nitrogen and an element from group III on the periodic table, which includes aluminum, gallium, indium, and boron.

Scientists have already made some of these 2D materials in small quantities. Instead of looking at mixtures of one of the group III elements with nitrogen, however, Wines and Ataca modeled alloys--mixtures including nitrogen and two different group III elements. For example, they predicted the properties of materials made of mostly aluminum, but with some gallium added, or mostly gallium, but with some indium added.

These "in-between" materials might have intermediate properties that could be useful in certain applications. "By doing this alloying, we can say, I have orange light, but I have materials that can absorb red light and yellow light," Ataca says. "So how can I mix that so that it can absorb the orange light?" Tuning the light absorption capabilities of these materials could improve the efficiency of solar energy systems, for example.

Alloys of the future

Ataca and Wines also looked at the electric and thermoelectric properties of materials. A material has thermoelectric capability if it can generate electricity when one side is cold and the other is hot. The basic group III nitrides have thermoelectric properties, "but at certain concentrations, the thermoelectric properties of alloys are better than the basic group III nitrides," Ataca says.

Wines adds, "That's the main motivation of doing the alloying--the tunability of the properties."

They also showed that not all of the alloys would be stable in real life. For example, mixtures of aluminum and boron at any concentrations were not stable. However, five different ratios of gallium-aluminum mixtures were stable.

Once production of the basic group III nitrides becomes more reliable and is scaled up, Wines and Ataca expect scientists to work on engineering the materials for specific applications using their results as a guide.

Back to basics...with supercomputers

Wines and Ataca modeled the materials' properties using supercomputers. Rather than using experimental data as input for their models, "We are using the basics of quantum mechanics to create these properties. So the good part is we don't have any experimental biases," Ataca says. "We're working on stuff that doesn't have any experimental evidence before. So this is a trustable approach."

To get the most accurate results requires huge amounts of computing power and takes a long time. Running their models at the highest accuracy level can take several days.

"It's kind of like telling a story," Wines says. "We go through the most basic level to screen the materials," which only takes about an hour. "And then we go to the highest levels of accuracy, using the most powerful computers, to find the most accurate parameters possible."

"I think the beautiful part of these studies is that we started at the basics and we literally went up to the most accurate level in our field," Ataca adds. "But we can always ask for more."

A new frontier

They have continued to move forward into uncharted scientific territory. In a different paper, published within a week of the first in ACS Applied Materials and Interfaces, Theodosia Gougousi, professor of physics; Jaron Kropp, Ph.D. '20, physics; and Ataca demonstrated a way to integrate 2D materials into real devices.

2D materials often need to attach to an electronic circuit within a device. An in-between layer is required to make that connection--and the team found one that works. "We have a molecule that can do this, that can make a connection to the material, in order to use it for external circuit applications," Ataca says.

This result is a big deal for the implementation of 2D materials. "This work combines fundamental experimental research on the processes that occur on the surface of 2D atomic crystals with detailed computational evaluation of the system," Gougousi says. "It provides guidance to the device community so they can successfully integrate novel materials into traditional device architectures."

Collaboration across disciplines

The theoretical analyses for this work happened in Ataca's lab, and the experiments happened in Gougousi's lab. Kropp worked in both groups.

"The project exemplifies the synergy that is required for science and technology development and advancement," Gougousi says. "It is also a great example of the opportunities that our graduate students have to work on problems of great technological interest, and to develop a broad knowledge basis and a unique set of technical skills."

Kropp, who is first author on the second paper, is thrilled to have had this research experience.

"2D semiconductors are exciting because they have the potential for applications in non-traditional electronic devices, like wearable or flexible electronics, since they are so thin," he says. "I was fortunate to have two excellent advisors, because this allowed me to combine the experimental and theoretical work seamlessly. I hope that the results of this work can help other researchers to develop new devices based on 2D materials."

Credit: 
University of Maryland Baltimore County

Rectal cancer patients who "watch and wait" may only need a few years of stringent follow-up

An international team of scientists, including doctors from the Champalimaud Clinical Centre in Lisbon, has just published results in the prestigious journal The Lancet Oncology suggesting that, in the not so distant future, the majority of rectal cancer patients may be able to replace aggressive colorectal surgery with a course of radiochemotherapy and a few years of close surveillance. All this with a very low probability of the tumour regrowing locally or of distant metastases developing later on, provided they survive the first years after treatment without signs of the tumour reappearing.

Specifically, the team showed that almost 70% of some 800 rectal cancer patients who, from 1991 to 2015, opted for a non-invasive alternative to surgery, the "Watch-and-Wait" protocol, and who remained free of new tumours and metastases during the following couple of years, could then follow less strict medical surveillance and perhaps even do away with additional oncological treatment.

For many years, the only available treatment for patients with rectal cancer was a radical surgical procedure that often ended with a definitive colostomy, which meant the patient had to be fitted for life with a bag for stool collection connected to their gut through an incision in the abdomen.

The Watch-and-Wait approach for rectal cancer was pioneered some 20 years ago by Brazilian surgeon Angelita Habr-Gama at the University of São Paulo - who led the new study together with colleagues from Brazil, the UK, the Netherlands and Portugal.

It so happens that patients with low rectal cancer (that is, whose tumour is very close to the anus) need to undergo radio and chemotherapy so as to reduce the tumour prior to surgery, to avoid potentially severe postoperative complications. What Habr-Gama observed was that, in a number of those patients, the analysis (biopsy) of the tissue that was harvested during the surgery often showed absolutely no trace of cancer cells. This led her to wonder whether rectal surgery, with its cohort of potential complications and life-long impact on patients' quality of life, had actually been necessary in those cases.

The Watch-and-Wait approach has been increasingly used since the mid-2000s, when surgeons in the Netherlands (also co-authors of the new study) decided to propose it to their eligible patients.

The protocol consists of performing, eight to ten weeks after the radiochemotherapy course, a series of diagnostic tests before deciding if surgery is warranted. To make this decision, three exams are then performed: a digital rectal examination, an endoscopy and magnetic resonance imaging. And if the patient's clinical response is "complete" - that is, if the tumour is not detectable in any one of these exams -, the patient is then given the choice to enter the Watch-and-Wait protocol.
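For concreteness, the yes/no structure of that decision can be sketched in a few lines; the field names and data structure below are illustrative assumptions, not part of the study or any clinical software.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Results of the three exams performed 8-10 weeks after radiochemotherapy
    (field names are illustrative, not a clinical data standard)."""
    digital_rectal_exam_clear: bool
    endoscopy_clear: bool
    mri_clear: bool

def complete_clinical_response(a: Assessment) -> bool:
    """'Complete' means no detectable tumour on any of the three exams."""
    return a.digital_rectal_exam_clear and a.endoscopy_clear and a.mri_clear

def recommend(a: Assessment) -> str:
    # Eligible patients are offered the choice of Watch-and-Wait surveillance;
    # otherwise surgery remains the standard path.
    return "offer Watch-and-Wait surveillance" if complete_clinical_response(a) else "proceed to surgery"

print(recommend(Assessment(True, True, True)))
```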

Today, a large corpus of data has already been compiled into the International Watch-and-Wait Database, a large-scale registry of rectal cancer patients from 47 centres in 15 countries whose initial radiochemotherapy treatment led to a "complete clinical response". This is the repository that has now been used for the study.

Critics of the protocol have claimed that it presents at least two drawbacks. The first is that, in case of a subsequent local regrowth of the tumour (estimated to happen in one patient out of four), the approach could prove deleterious to the patient by delaying potentially life-saving surgery. The second is that the cancer could have time to metastasise if it is not removed right away. "This strategy carries a potential risk for the local reappearance of the tumour, or local regrowth", agrees Laura Fernández, who is currently working at the Champalimaud Foundation and is the first author of the study. "It is estimated that one in four patients who achieve a complete clinical response suffers a local regrowth, especially during the first years of follow-up", she adds. "That is why patients are kept under a very strict surveillance programme after the radiochemotherapy."

However, in a paper published in February 2020 in the journal Annals of Surgery, digestive surgeon Nuno Figueiredo - a co-author of the new study who heads the Champalimaud Surgical Centre - and colleagues, together with the same Dutch team, showed that the Watch-and-Wait period did not compromise the outcome for patients who experienced local regrowth: the final result was the same as they would have obtained if surgery had been performed immediately. Moreover, although that study was not designed to address the issue of metastatic spread, the authors noted, based on the medical literature, that whereas 25% of patients with rectal cancer who undergo surgery go on to develop metastases, only 8.2% of patients on the Watch-and-Wait protocol do.

Could one healthy year be enough to relax surveillance?

In the new study, the question the team wanted to answer was: How intensive should the follow-up be, and how long should it last, to ensure the safety and efficacy of the Watch-and-Wait approach? "Our field of research needs 'real world' data to support what kind of active surveillance is necessary, and for how long these patients should be observed", Fernández points out. "And despite our knowledge that one out of four patients will develop a regrowth, we still do not know whether this risk changes over time, once patients achieve an additional cancer-free year. In this paper, we have now provided crucial information to guide clinicians counselling their rectal cancer patients."
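In plain terms, the quantity at stake is a conditional risk: given that a patient has already remained regrowth-free for the first year, how likely is a regrowth during the following years? Expressed with a standard conditional-survival formula (shown here only to illustrate the concept; it is not a description of the statistical model used in the paper):

```latex
% S(t): probability of remaining free of local regrowth t years after a
% sustained complete clinical response.
P\bigl(\text{regrowth in } (1, 3] \mid \text{regrowth-free at year } 1\bigr)
  = \frac{S(1) - S(3)}{S(1)}
```

The finding reported below is that, once the first regrowth-free year has been achieved, this conditional risk is considerably lower than the overall one-in-four figure.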

The analysis led to several conclusions. First: it suggested "that for patients who survived the first year without recurrence, the risks for local regrowth and distant metastases during the two subsequent years were considerably lower, making it unnecessary to maintain such an intensive surveillance after three years", says Fernández.

Perhaps more surprisingly, she adds, "once patients have achieved and sustained a complete clinical response [recovery] for one year, known risk factors for local regrowth (such as the stage of the disease before any treatment and the dose of radiation received by the patient) seem to become irrelevant".

This may mean, according to Fernández, that additional treatment of all patients who have undergone the Watch-and-Wait protocol and who remain free of tumour recurrence after the first year could also become unnecessary. "Our results suggest that achieving a complete clinical recovery and sustaining it for one year is the most relevant protective factor in patients with rectal cancer, placing them in an excellent prognostic stage", she concludes.

Credit: 
Champalimaud Centre for the Unknown

Evolution of tropical biodiversity hotspots

For decades, scientists have worked to understand the intricacies of biological diversity--from genetic and species diversity to ecological diversity.

As they began to comprehend the depths of diversity across the planet, they noticed an interesting pattern. The number of species increases drastically from the poles to the equator. This phenomenon, known as the latitudinal gradient of species diversity, has helped define the tropics as home to most of the world's biodiversity. Scientists estimate that tropical forests contain more than half the species on earth, from plants and insects to birds, amphibians, and mammals.

These biologically rich areas are known as biodiversity hotspots. To qualify as a hotspot, a region must have at least 1,500 vascular plant species occurring nowhere else and have 30 percent or less of its original natural vegetation remaining. In other words, it must be both irreplaceable and threatened.

While scientists agree that most biological diversity originated in the tropics, the jury is still out on how tropical species diversity formed and how it is maintained. A new study published in Science addresses these long-standing questions.

In "The evolution of tropical biodiversity hotspots," researchers argue that tropical species form faster in harsh species-poor areas but accumulate in climatically moderate areas to form hotspots of species diversity. Drawing on decades of expeditions and research in the tropics and the scientists' own knowledge and sampling of tropical bird diversity, the research team assembled a large and complete phylogenomic dataset for a detailed investigation of tropical diversification.

"This is our magnum opus," said Elizabeth Derryberry, associate professor in the University of Tennessee, Knoxville's Department of Ecology and Evolutionary Biology (EEB) and a senior author of the study. "This research is the product of a decades-long international collaboration to produce a completely sampled evolutionary history of a massive tropical radiation--the 1,306 species of suboscine passerine birds."

Roughly one in three Neotropical bird species is a suboscine, making suboscines the predominant avian group in Neotropical terrestrial habitats--which range from the Andean snow line to the Amazon lowlands--and a perfect group for shedding light on the origins of tropical biodiversity.

"The tropics are a natural laboratory for speciation research," said Michael Harvey, a recent EEB postdoctoral student and lead author of the study. "Many high-profile studies over the years sought answers to fundamental questions concerning species formation and maintenance." These earlier projects, he added, sampled only a minority of the existing species within the group being studied. In addition, said Derryberry, data analysis limitations in nearly all of the previous studies left them open to estimation errors.

For this study, Derryberry, Harvey, EEB Professor Brian O'Meara, and fellow researchers used a time-calibrated phylogenomic tree to provide information needed for estimating the dynamics of suboscine diversification across time, lineages, and geography. They also used the tree to test links between the dynamics and potential drivers of tropical diversity.
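To give a sense of what "estimating diversification dynamics" from a dated tree involves, the toy sketch below computes the simplest possible quantity: a clade-level net diversification rate under a pure-birth (Yule) model, from a stem age and a species count. The analyses in the study are far more sophisticated, modelling rate variation across lineages, through time and across geography; the code and numbers here are purely illustrative:

```python
# Illustrative only: the simplest pure-birth (Yule) estimate of a clade's net
# diversification rate, r = ln(N) / t, where N is the number of extant species
# and t is the clade's stem age in millions of years. This is NOT the method
# used in the study, which models how rates vary across lineages and time.
import math


def pure_birth_rate(n_species: int, stem_age_my: float) -> float:
    """Net diversification rate in species per lineage per million years."""
    if n_species < 1 or stem_age_my <= 0:
        raise ValueError("need at least one species and a positive stem age")
    return math.log(n_species) / stem_age_my


if __name__ == "__main__":
    # Hypothetical example: a clade of 100 species whose stem lineage
    # originated 20 million years ago.
    print(f"{pure_birth_rate(100, 20.0):.3f} lineages per lineage per My")
```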

"We took no shortcuts in this study," Derryberry said. "We leveraged this unparalleled sampling of tropical diversity to illustrate the tempo and geography of evolution in the tropics. It is the first study to demonstrate conclusively that tropical biodiversity hotspots are linked to climates that are both moderate and stable."

The team discovered that species-rich regions in the tropics contain diversity accumulated during a protracted evolutionary period. A key result of their study is that the best predictor of elevated speciation rates in North and South American suboscines is low species diversity. In other words, new species form at higher rates in areas containing relatively few species.

"The qualities that nurture diversity, lower extinction, and promote the gradual accumulation of species are, paradoxically, not the ones that support biodiversity hotspots," Harvey said. "The hotspots are seeded by species born outside the hotspot in areas characterized by more extreme and less climatically stable climates."

The team discovered that, overall, extreme environments limit species diversity but increase opportunities for populations to evolve to become distinct species. Moderate climates, on the other hand, limit speciation but provide more opportunities for species diversity to accumulate.

"Our study will pave the way for future investigations of evolution in the world's diversity hotspots," Derryberry said. "This paper marks not only a change in our understanding of evolution in the tropics, but also in acknowledgement and valuation of the diversity of culture, expertise, and perspective in the field of ornithology."

The international collaboration behind the study included researchers from Colombia, Brazil, Uruguay, and Venezuela, as well as ornithologists from groups underrepresented in the sciences, including Latino and women researchers.

Credit: 
University of Tennessee at Knoxville