Parasite carried by grey squirrels negatively impacts red squirrel behavior

Research published in the Journal of Animal Ecology reveals a new mechanism by which grey squirrels harm native red squirrels in Europe: parasite-mediated competition.

An international team from universities in Italy and Belgium used a natural experiment of populations of native red squirrels (Sciurus vulgaris) co-inhabiting with alien grey squirrels (Sciurus carolinensis) to investigate the impact of a parasitic helminth (worm) transmitted by grey squirrels, Strongyloides robustus, on naive red squirrels' personality.

By comparing repeated measurements of red squirrel parasite infection and personality with those taken in sites where only the native species occurred, they demonstrated that infection by the alien parasite causes a significant reduction in red squirrels' activity and alters their relationship with native parasites.

Red squirrels normally carry only one species of gastro-intestinal helminth (Trypanoxyuris sciuri) that co-evolved with this arboreal mammal; therefore they might be sensitive to parasite spillover, the acquisition of new parasite species transmitted by another host, in this case the alien grey squirrel. Grey squirrels in Italy commonly harbour Strongyloides robustus, a helminth introduced from their native range, which they transmit to native red squirrels.

In their study, Dr. Francesca Santicchia and her co-authors found negative correlations between the activity of red squirrels and infection with the alien parasite S. robustus in the sites invaded by grey squirrels. Activity was also negatively correlated with infection by their native helminth (T. sciuri), but only when grey squirrels were present, not in the red-only sites. Moreover, individuals that acquired S. robustus during the study reduced their activity after infection, while this was not the case for animals that remained uninfected.

Their findings, which show that parasite-mediated competition is energetically costly and can also alter the "normal" relationships between native host and native parasite, are published today in the Journal of Animal Ecology. This new paper comes two years after the researchers' previous discovery that grey squirrels cause an increase in the concentration of stress hormones in co-occurring red squirrels, published in the same journal (DOI: 10.1111/1365-2656.12853).

"That our red squirrel is threatened with extinction due to the introduction of an 'alien' species, the North American grey squirrel, has become common knowledge", say Dr. Francesca Santicchia and Dr. Lucas Wauters of the Guido Tosi Research Group at the University of Insubria in Italy. "But that one of the mechanisms involved is the reduction of activity, a personality trait that tends to be related to foraging intensity or efficiency, caused by the spillover of a parasitic helminth from grey squirrels is a new finding."

"This spillover is very similar to what occurs with the Squirrel Poxvirus in the UK and Ireland", add Dr. Claudia Romeo and Dr. Nicola Ferrari from the University of Milano, "although in this case of spillover of an endoparasite, the effect is much more subtle and does not lead directly to the death of the animal"

"For red squirrels, the 'natural' situation is being the only diurnal tree-dwelling mammal in our forests and woodlands", explain Dr. Lucas Wauters and Prof. Adriano Martinoli, also with the University of Insubria. "But when an alien species, such as the grey squirrel, colonizes these habitats, it acts as a true environmental stressor and carrier of potentially dangerous parasites."

In this study, the researchers produced compelling evidence that spillover of the alien helminth to naive red squirrels not only causes a reduction in activity, a behaviour that requires high energy expenditure, but also alters the relationship between red squirrels and their native, common helminth, T. sciuri.

"This is a subtle form of parasite-mediated competition, which may exacerbate the effects of interspecific competition with grey squirrels for food, such as conifer seeds, hazelnuts, or chestnuts", underline Wauters and Romeo.

In fact, reduced activity could result in lower food intake, and together with chronically increased concentrations of glucocorticoids can produce a reduction in body growth or reproductive success, or even decrease survival, among the red squirrels that are forced to share their habitat with the invaders. The combination of these interacting ecological and physiological processes may lead to the extinction of a red squirrel population in a few years' time.

Credit: 
British Ecological Society

Climate change: Extreme coastal flooding events in the US expected to rise

Extreme flooding events in some US coastal areas could double every five years if sea levels continue to rise as expected, a study published in Scientific Reports suggests. Today's 'once-in-a-lifetime' extreme water levels -- which are currently reached once every 50 years -- may be exceeded daily along most of the US coastline before the end of the 21st century.

Mohsen Taherkhani, Sean Vitousek and colleagues at the U.S. Geological Survey, the University of Illinois at Chicago, and the University of Hawaii, investigated the frequency of extreme water levels measured by 202 tide gauges along the US coastline and combined the data with sea-level rise scenarios to model the rate at which flooding events may increase in the future.

For 73% of the tide gauges used in the study, the difference in water level between the 50-year extreme water level and the daily average highest tide was found to be less than one metre, and most sea-level rise projections exceed one metre by 2100. The authors' model predicted that before 2050, current extreme water levels would transition from 50-year, once-in-a-lifetime flooding events to annual events in 70% of US coastal regions. Before the end of 2100, once-in-a-lifetime extremes were predicted to be exceeded almost daily at 93% of the sites measured.

The data suggest that present-day extreme water levels will become commonplace within the next few decades. Low-latitude areas will be the most susceptible, with their rate of coastal flooding predicted to double every five years. At the most susceptible sites, along the Hawaiian and Caribbean coast, the rate at which extreme water levels occur may double with every centimetre of sea-level rise.
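
To make those numbers concrete, here is a minimal arithmetic sketch of the two projections above. It is illustrative only: the gauge gap and rise rate below are hypothetical stand-ins, not values from the study's 202 gauges.

```python
# Minimal sketch of the paper's two headline projections (illustrative only;
# the gap and rise-rate values below are hypothetical, not the study's data).

def years_until_daily_exceedance(gap_m, rise_m_per_yr):
    """Years until the daily highest tide reaches today's 50-year extreme
    water level, assuming a constant rate of sea-level rise."""
    return gap_m / rise_m_per_yr

def flood_rate_per_year(initial_rate, years, doubling_years=5.0):
    """Flooding-event rate after `years` if the rate doubles every
    `doubling_years`, as projected for the most susceptible sites."""
    return initial_rate * 2.0 ** (years / doubling_years)

# A gauge with a 0.8 m gap and ~1 m of rise over 80 years (0.0125 m/yr):
print(years_until_daily_exceedance(0.8, 0.0125))  # 64.0 -> daily before 2100

# A once-in-50-years event (rate 0.02/yr) doubling every five years:
print(flood_rate_per_year(0.02, 30))              # 1.28/yr -> annual by ~2050
```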

Associated coastal hazards, such as beach and cliff erosion, will likely accelerate in concert with the increased risk of flooding, suggest the authors.

Credit: 
Scientific Reports

Seeing 'under the hood' in batteries

image: The high-efficiency RIXS system at the Advanced Light Source's Beamline 8.0.1.

Image: 
Marilyn Sargent/Berkeley Lab

From next-gen smartphones to longer-range electric cars and an improved power grid, better batteries are driving tech innovation. And to push batteries beyond their present-day performance, researchers want to see "under the hood" to learn how the individual ingredients of battery materials behave beneath the surface.

This could ultimately lead to battery improvements such as increased capacity and voltage.

But many of the techniques scientists use can only scratch the surface of what's at work inside batteries, and a high-sensitivity X-ray technique at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) is attracting a growing group of scientists because it provides a deeper, more precise dive into battery chemistry.

"People are trying to push the operation of batteries beyond what they got before," said Wanli Yang, a staff scientist at Berkeley Lab's Advanced Light Source (ALS) who adapted an X-ray technique known as RIXS (resonant inelastic X-ray scattering), for use in ALS experiments focusing on batteries and other energy materials. The ALS produces beams of light ranging from the infrared to X-rays to support a variety of simultaneous experiments that are carried out by researchers from around the world who use the facility.

The technique that Yang adapted for battery research, known as high-efficiency mRIXS (mapping of RIXS), has attracted particular interest from researchers studying designs for electrodes, which are the battery components through which current passes into and out of the battery. Previously, RIXS was known primarily as a tool for exploring fundamental physics in materials, and Yang, working with theorists and others, has helped to apply the technique to new fields of research.

"Scientists were trying to see inside a battery material - not only at the surface, but also in the bulk - to learn about its oxygen atoms and metal states," Yang said. "Most conventional techniques lack either the depth of probe or the chemical sensitivity that could be offered by mRIXS."

mRIXS can be used to scan samples of battery electrodes to measure the chemical states of different elements at a specific point in the battery's charge or discharge cycle. It is effective at measuring popular battery materials, such as those known as "lower transition metal oxides" that can be lighter and more cost-effective than some alternatives.

It can tell researchers whether, and how fully, battery materials are gaining and losing electrons and ions - positively or negatively charged atoms - in a stable way, so they can learn how quickly and why a battery is degrading, for example.

During a battery's operation, oxygen atoms in a battery electrode can be reduced (gain electrons) or oxidized (lose electrons) in what is known as an "oxygen redox" reaction. Such a change in oxygen states has been found to hamper battery performance in studies of so-called lithium-rich electrodes, which potentially offer more lithium storage and thus higher capacity.

"Changes of the oxygen states could make the battery unsafe and also trigger other side reactions" if the process isn't reversible, Yang said. "The structure may also collapse."

But reversible oxygen redox taking place inside the electrode is a good thing. The mRIXS technique can detect whether the oxygen redox states are reversible, and can also detect metal states in the electrode.

This unique capability also makes mRIXS particularly useful for studies of high-voltage, high-capacity battery materials that have become a growing focus for battery R&D.

The technique works by slowly scanning with X-rays across a sample that chemically preserves a point in the battery charge or discharge cycle. A full map scan now takes about three hours per sample; before the high-efficiency RIXS system was introduced at the ALS, such a scan would have taken days.

"The uniqueness of the system here is not only on the data collection time, but its ability to look at unconventional chemical states that typically are not very stable under X-rays," he said. The improvement in detection efficiency is important in preserving the sample prior to the onset of any damage caused by the X-rays. This is also a technical challenge that can be addressed by future light sources with much improved X-ray brightness, such as the ALS Upgrade (ALS-U) project, and ALS scientists are now working to further improve the detection efficiency.

The technique has been integral to several battery studies published in recent months:

One study, published in February, focused on the oxygen redox states in a commercially viable lithium-battery material containing lithium, nickel, cobalt, manganese, and oxygen for an electrode known as a cathode.

Oxygen redox states in battery materials were also the focus of other studies out in February, including one focused on a sodium-battery material containing sodium, lithium, manganese, and oxygen.

More studies of lithium-rich oxide electrodes have utilized mRIXS to resolve their oxygen chemistry: A study in January focused on reducing the voltage-related battery decay; and another study in March demonstrated the fast charging and discharging operation of a material with reversible oxygen chemistry.

A study in November 2019 also utilized mRIXS to look at the states of sulfur, rather than oxygen, in lithium-rich sulfide battery materials.

Yang said the growing use of the technique by the battery R&D community is encouraging, and researchers at the ALS are working to build out more capacity for these experiments.

"The demand is increasing extremely fast and the ALS is in the process of developing new RIXS systems with even higher throughput due to this demonstrated capacity and increasing demand," Yang said.

"Having RIXS introduced into energy materials research is a new thing," Yang added. "If after 10 years we at the ALS are recognized as the people who pushed a fundamental physics technique for studying batteries and other energy materials, that's what we should be proud of. "This is like a new field, and the community was in dire need of such a tool."

Credit: 
DOE/Lawrence Berkeley National Laboratory

Healthy climate news: Fava beans could replace soy

Tofu, soy milk and veggie mince. More and more Danes are opting to supplement or completely replace their consumption of animal-based proteins with plant-based proteins. Climate considerations are part of their reasoning.

We often use soy-based protein when experimenting with vegetarian cooking. But new research from the University of Copenhagen's Department of Food Science demonstrates that fava beans hold great promise as a non-soy source of plant protein. Moreover, favas are a better alternative for the environment:

"Many consumers are crying out for alternatives to soy, a crop that places great strain on the environment. This prompted us to find a method of processing fava beans in such a way that allows us to produce a concentrated protein powder. One of the advantages of fava beans is that they can be grown here, locally in Denmark. This is excellent news for the climate," explains Iben Lykke Petersen, an assistant professor at the University of Copenhagen's Department of Food Science, and one of the researchers behind the new study published in the journal Foods.

Far more climate friendly

Fava beans are the more climate-friendly choice because they can be cultivated locally, unlike soybeans, which are primarily grown in the United States and South America -- and then exported to Denmark.

Moreover, numerous farms in Brazil and Paraguay have cleared large tracts of forest to create space for soybean fields. This has had severely negative consequences for wildlife, biodiversity and CO2 emissions.

"Another important factor is that, unlike fava beans, lots of soy is genetically modified to be able to tolerate Roundup, an herbicide. Within this context, many consumers are critical of soy's environmental consequences," explains Iben Lykke Petersen.

New method makes fava powder that bursts with protein

To find an alternative to environmentally taxing soybeans, the study's researchers tested various crops, looking for those with the greatest potential as a protein powder while also being able to be grown locally. Here, fava beans outperformed lentils, amaranth, buckwheat and quinoa.

Using a method known as 'wet fractionation', the researchers succeeded in concentrating fava bean protein and removing substances that would otherwise inhibit the digestion of the protein. This allows nutritious fava bean proteins to be more readily absorbed when consumed.

"Wet fractionation is accomplished by milling beans into a flour, and then adding water and blending the mixture into a soup. Thereafter, it becomes easier for us to sort out the less beneficial substances and produce an optimized product," explains Iben Lykke Petersen. She adds:

"Our results demonstrate that this method significantly increases protein content. Furthermore, through our tests, we can see that this protein is nearly as readily digested as when we break down protein from animal products, such meat and eggs."

Competitive color, taste and texture

The content and nutritional quality of a protein is one thing. Taste is something else! Here too, fava beans can compete with soy and other plant-based protein alternatives. Iben Lykke Petersen explains that when fava beans are processed correctly, their proteins retain their naturally bright colour, along with a neutral taste and good texture.

"Manufacturers prefer a product that is tasteless, has a neutral color and a firm texture. Fava beans check each these boxes, unlike peas, which often have a very bitter aftertaste," she concludes.

Fava beans are grown primarily in the Middle East, China and Ethiopia, but are already available in Danish supermarkets and health food stores.

Credit: 
University of Copenhagen

How tiny water droplets form can have a big impact on climate models

Understanding droplet formation in pure water in a controlled lab setting is challenging enough, but in the atmosphere, droplets form in the presence of many other substances.

Some of them, like nitrogen, oxygen and argon, do not interact much with water and are easy to account for. The complications arise from surface-active species, namely substances that prefer to stay on the surface of the droplet.

You've seen the surface tension of water in action if you've ever seen water bead up on a hard surface. The water molecules are attracted more to each other than to the molecules in the air, so they cling together as tightly as they can, pulling the drop into a dome.

One example of a surface-active species is ethanol, which is found in beer, wine, champagne and other alcoholic beverages. In a droplet of champagne, the ethanol molecules pile up at the surface and drastically lower its surface tension.

SINTEF researcher Ailo Aasen, who recently completed his PhD at the Norwegian University of Science and Technology (NTNU), partly focused on nucleation in the presence of impurities. The results, recently published in the prestigious journal Physical Review Letters, are relevant to diverse industrial processes but especially atmospheric science and climate models.

Before a water droplet can form in the atmosphere, enough random collisions between water molecules have to occur to form a seed, or "nucleus", for the droplet. The tiny, nanosized droplet of water is called a critical nucleus, and its formation is known as nucleation. These nanosized droplets typically form around dust particles, and surface-active impurities pile up at the droplet surface. After a large enough droplet has formed, it will grow spontaneously.

"A major goal of nucleation theory is to understand the properties of this critical "droplet seed". In a rain drop, the water molecules are of two types: those in the interior of the droplet, and those at the surface," Ailo says.

A droplet is approximately round (spherical), so that the water molecules on the surface have fewer neighbours than those inside the droplet. The smaller a droplet, the greater the portion of its molecules are in the surface layer.

The nucleus has to reach a critical size to continue to grow, because it has to overcome the surface tension that results from the fewer number of molecules on the outside of the drop. The smaller the surface tension, the easier it is for the drop to form. According to Ailo, this is where impurities can make a large difference: "Surface-active species reduce the surface tension between the droplet and the air. We see that a minute concentration of a surface-active impurity can dramatically increase the rate of drop formation. Since surface-active species like sulphuric acid and ammonia can be present in low concentrations during formation of rain drops, this is likely to be important input to weather forecasts and climate models."

Classical nucleation theory fails spectacularly when surface-active impurities are present. For example, if water droplets are formed in the presence of alcohols, predictions of the rate at which droplets form can be off by more than 20 orders of magnitude. In fact, the classical theory predicts that 10^20 (a 1 followed by 20 zeros) times fewer droplets form than researchers can actually measure in experiments. To put this number into context, the number of stars in the Milky Way is about 10^11 (a 1 followed by 11 zeros) - a billion times smaller.

In addition to being grossly inaccurate, the classical theory makes predictions that are physically impossible. In some cases, such as for water-ethanol, it predicts that there is a negative number of water molecules in the droplet, which of course is impossible.

The hypothesis behind Aasen's research was that these discrepancies stem from an assumption in the theory, which considers the nucleus to be spherical but to have the same surface tension as a completely flat surface.

Part of the problem here is that it is very difficult to estimate how surface tension behaves during nucleation, so the classical theory includes the assumption that the surface tension in a drop is the same as that of a completely flat surface, which simplifies calculations, Ailo explains.

The tiny nuclei formed in the atmosphere are only a few nanometres wide and are highly curved. Assuming that the nuclei have the same surface tension as a completely flat surface is a major reason why the classical theory doesn't always work.
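
For the quantitative intuition, the textbook classical-nucleation-theory expressions (a standard sketch, not the authors' refined model) show why this assumption is so costly. The free energy of a spherical nucleus of radius r, the critical radius, the barrier height, and the nucleation rate are:

```latex
\Delta G(r) = 4\pi r^{2}\gamma - \frac{4}{3}\pi r^{3}\rho_{l}\,\Delta\mu,
\qquad
r^{*} = \frac{2\gamma}{\rho_{l}\,\Delta\mu},
\qquad
\Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,(\rho_{l}\,\Delta\mu)^{2}},
\qquad
J = J_{0}\,e^{-\Delta G^{*}/k_{B}T}
```

Here gamma is the surface tension, rho_l the liquid number density and Delta-mu the chemical-potential driving force. Because the surface tension enters cubed inside an exponential, substituting the flat-surface value for that of a nanometre-scale, impurity-coated nucleus can shift the predicted rate J by many orders of magnitude -- exactly the scale of discrepancy described above.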

Ailo and his colleagues used a sophisticated model for the droplet surface, coupled with an accurate thermodynamic model for the liquid and the vapour, to improve the classical theory.

By properly including a more accurate representation of the surface tension into the theory that accounts for how curved the droplet is, they were able to reconcile the theoretical predictions of nucleation rates with those actually observed in experiments, reducing the discrepancy from more than 20 to less than 2 orders of magnitude. The weird, physically impossible predictions sometimes made by the classical nucleation theory also disappeared.

Aasen was supervised by Øivind Wilhelmsen at SINTEF and NTNU, whose 2016 work on vapor-liquid interfaces provided the basis for the new research. He believes the deeper understanding of droplet formation and a procedure for modelling it can bring benefits well beyond climate science.

The work was performed in collaboration with Prof. David Reguera from the University of Barcelona.

Credit: 
Norwegian University of Science and Technology

SMART and MIT develop nanosensors for real-time plant health monitoring

image: Nanosensors implanted within plant leaves can send signals that communicate the stress-induced signalling pathways of plants to a smartphone

Image: 
Felice C. Frankel

Sensors can intercept distress signals within plants to reveal how they respond to different types of stress

Plant responses can be sent directly to remote electronic devices such as cell phones, allowing remote, real-time tracking

Nanobionic approach has a range of applications including studying how to improve crop yield in urban farms

The technology can potentially be applied to all types of plants

Singapore, 16 April 2020 - Researchers from Massachusetts Institute of Technology (MIT), Singapore-MIT Alliance for Research and Technology (SMART), MIT's research enterprise in Singapore, and Temasek Life Sciences Laboratory (TLL) have developed a way to study and track the internal communication of living plants using carbon nanotube sensors that can be embedded in plant leaves.

The sensors can report on plants' signalling waves to reveal how they respond to stresses such as injury, infection, heat and light damage, providing valuable real-time insights for engineering plants to maximise crop yield.

The new nanobionic approach is explained in a paper titled "Real-time Detection of Wound-Induced H2O2 Signalling Waves in Plants with Optical Nanosensors" published in the prestigious online scientific journal Nature Plants. It uses sensors to intercept the hydrogen peroxide signals that plants use to communicate internally and displays the data on remote electronic devices such as cell phones, allowing agricultural scientists to remotely keep track of plant health in real time.

"Plants have a very sophisticated form of internal communication, which we can now observe for the first time. That means that in real time, we can see a living plant's response, communicating the specific type of stress that it's experiencing," says Michael Strano, co-lead Principal Investigator at Disruptive & Sustainable Technologies for Agricultural Precision (DiSTAP), an Interdisciplinary Research Group under SMART. Professor Strano, who is the senior author of the paper, is also a Carbon P. Dubbs Professor of Chemical Engineering at MIT.

The technology can provide much-needed data to inform a range of agricultural applications such as screening different species of plants for their ability to resist mechanical damage, light, heat, and other forms of stress, or study how different species respond to pathogens. It can also be used to study how plants respond to different growing conditions in urban farms.

"Plants that grow at high density are prone to shade avoidance, where they divert resources into growing taller, instead of putting energy into producing crops, lowering overall crop yield," says Professor Strano. "Our sensor allows us to intercept that stress signal and to understand exactly the conditions and the mechanism that are happening upstream and downstream in the plant that gives rise to the shade avoidance, thus leading to fuller crops."

Traditionally, molecular biology research has been limited to only specific plants that are amenable to genetic manipulation, but this new technology can potentially be applied to any plant. Professor Strano's team has already successfully used the approach in comparing eight different species including spinach, strawberry plants and arugula, and it could work for many more.

Funded by the National Research Foundation (NRF) Singapore, the Agency for Science, Technology and Research (A*STAR), and the U.S. Department of Energy Computational Science Graduate Fellowship Program, the study set out to embed sensors into plants that would report back on the plants' health status. The research team used a method called lipid exchange envelope penetration (LEEP), developed previously by Professor Strano's lab, to incorporate the sensors into plant leaves.

"I was training myself to get familiarized with the technique, and in the process of the training I accidentally inflicted a wound on the plant. Then I saw this evolution of the hydrogen peroxide signal," says the paper's lead author and MIT graduate student Tedrick Thomas Salim Lew.

The release of hydrogen peroxide triggers calcium release among adjacent plant cells, stimulating them to release more hydrogen peroxide and creating a wave of distress signals along the leaf. While the wave of hydrogen peroxide stimulates plants to produce secondary metabolites that can help repair damage, these metabolites are also often the source of the flavours we want in our edible plants. Manipulating this can help farmers enhance the taste of the plants we eat while optimising plant yield.

Credit: 
Singapore-MIT Alliance for Research and Technology (SMART)

New nanocarrier drug delivery technology crosses the blood-brain barrier

image: A Kumamoto University research group collected phages that penetrate human blood-brain barrier model cells and analyzed the amino acid sequence of the peptides on the permeating phages.

Image: 
Professor Sumio Ohtsuki

A Japanese research team has developed a cyclic peptide (a chain of amino acids bonded circularly) that enhances blood-brain barrier (BBB) penetration. By attaching the cyclic peptide to the surface of nanoparticles, research and development of new drug nanocarriers for drug delivery to the brain becomes possible.

Unlike the peripheral organs of the body, which are freely supplied by the blood, the brain is shielded by the BBB, which prevents various substances, including many drugs, from moving from the blood into the brain. Biopharmaceuticals and macromolecular drugs are attracting attention as new treatments for previously untreatable diseases and for improving outcomes. However, these high molecular weight drugs are unable to penetrate the BBB. Technologies that can deliver them to the brain would bring significant progress in the development of medications that act on the brain.

Aiming to develop technologies applicable to various drugs, a research team from Kumamoto University, Japan worked on developing a cyclic peptide able to penetrate the BBB. In their search for a peptide with the desired function, they turned to viruses called phages. From a phage library containing cyclic peptides with 10^9 different amino acid sequences, the researchers searched for phages able to penetrate human BBB model cells and analyzed their sequences. Since a phage (about 1,000 nanometers) is much larger than macromolecular drugs, the scientists expected that these cyclic peptides would also allow drugs to penetrate the BBB.

Of the two new cyclic peptides they discovered, one promoted phage penetration not only in human BBB model cells but also in monkey and rat BBB model cells. Furthermore, this phage could be found in the brain of a mouse 60 minutes after intravenous injection. In additional experiments, the researchers modified liposomes by adding the cyclic peptide to their surface, thereby creating 150-nanometer artificial nanoparticles. When these modified liposomes were injected intravenously into a mouse, they were also detected in the brain 60 minutes later, showing that the new cyclic peptide facilitates penetration of both phage and liposome nanoparticles through the BBB, allowing for delivery into the brain.

"Liposomes are nanocarrier that can encapsulate various substances. The liposome whose surface has been modified with this new cyclic peptide can be used as a nanocarrier to bypass the BBB. A way to deliver macromolecular drugs to the brain has been opened," said Professor Ohtsuki. "We expect this research to contribute significantly toward the development of drugs for central nervous system diseases, including Alzheimer's disease."

Credit: 
Kumamoto University

Earth Day alert to save our frogs

image: Australian conservation workers evaluate a freshwater habitat.

Image: 
Kate Mason

With climate action a theme of Earth Day 2020 (22 April 2020), a new research paper highlights the plight of some of the most at-risk amphibian species - and shortfalls in most conservation efforts.

Even more than birds and most mammals, amphibians (frogs and toads, salamanders, and worm-like caecilians) are on the front line of extinction in hotter, drier climate conditions.

"Amphibian populations are in decline globally, with water resource use dramatically changing surface water hydrology and distribution," says Flinders University freshwater ecologist Rupert Mathwin, lead author of the review study published in Conservation Biology.

"Intelligent manipulation and management of where and how water appears in the landscape will be vital to arrest the decline in amphibia."

However, many conservation measures are not enough to arrest the decline.

"Already about 41% of the species assessed (IUCN 2019) are threatened with extinction, so with continued climate change we have to be smarter about managing water to maintain critical habitats and save our threatened amphibians from extinction," says Corey Bradshaw, Flinders University Professor of Global Ecology.

"It will be critical to use prior knowledge and change the way we share our successes and failures to find ways to save amphibians."

The article found some key pointers for future land management:
* Extending the time that water is available in temporary pools is one of the most successful approaches. Excavating, lining and pumping water into breeding ponds helps populations.
* Amphibian populations are often limited by predators (mainly fish), so restoring natural drying patterns outside the main breeding times can reduce predation.
* Spraying water into the environment has been attempted but appears to have limited success (see examples in Conservation Bytes).
* Releasing water from dams along river channels (often termed environmental flow) can harm amphibians if high-energy water flows scour habitat features and displace larvae, and favour breeding of predators like fish.

Although perhaps counterintuitive, managers can restrict water in the landscape for amphibian conservation. This is successfully used to remove predators like fish and crayfish from breeding pools to improve breeding outcomes.

In Pennsylvania, sprinklers delivered treated wastewater through a forest reserve. Although this doubled the number of ponds available, it produced breeding habitats with poor water quality, fewer egg masses, and lower hatching success and larval survival.

Spraying was used to increase soil moisture and improve breeding conditions for a nest-breeding frog (Pseudophryne bibroni), resulting in increased calling, successful mating events and egg survival in a terrestrial nest breeder.

The researchers identified two other amphibian-conservation approaches that spray water into the environment but have not yet been reported:
* Spraying could increase foraging opportunities (by decreasing evaporation rates), and
* Spraying could create moist corridors between pools to link populations or aid recolonisation.

Much of Australia is drying as a result of climate change, water extraction, and landscape modification, with mass deaths of native fish hitting the headlines last summer. Environmental flows compete with agriculture and other human uses.

Amphibians breathe (in part) through their skin, so they must maintain moist skin surfaces. This sliminess means that most amphibians quickly dry out in dry conditions.

Additionally, most amphibian eggs and larvae are fully aquatic. One of the greatest risks to populations are pools that dry too quickly for larval development, which leads to complete reproductive failure.

"This need for freshwater all too often places them in direct competition with humans," says Professor Bradshaw.

Credit: 
Flinders University

Little scientists: Children prefer storybooks that explain why and how things happen

Children have an insatiable appetite to understand why things are the way they are, leading to their apt description as "little scientists". While researchers have been aware of children's interest in causal information, they didn't know whether it influenced children's preferences during real-world activities, such as reading.

A new study in Frontiers in Psychology finds that children prefer storybooks containing more causal information. The results could help parents and teachers to choose the most engaging books to increase children's interest in reading, which is important in improving early literacy and language skills.

Children have a burning urge to understand the mechanics of the world around them, and frequently bombard parents and teachers with questions about how and why things work the way they do (sometimes with embarrassing consequences). Researchers have been aware of children's appetite for causal information for some time. However, no one had previously linked this phenomenon to real-world activities such as reading or learning.

"There has been a lot of research on children's interest in causality, but these studies almost always take place in a research lab using highly contrived procedures and activities," explains Margaret Shavlik of Vanderbilt University, Tennessee.

"We wanted to explore how this early interest in causal information might affect everyday activities with young children - such as joint book reading."

Finding the factors that motivate children to read books is important. Encouraging young children to read more improves their early literacy and language skills and could get them off to a running start with their education. Reading books in the company of a parent or teacher is a great way for children to start reading, and simply choosing the types of book that children most prefer could be an effective way to keep them interested and motivated.

Shavlik and her colleagues hypothesized that children prefer books with more causal information. They set out to investigate whether this was true by conducting a study involving 48 children aged 3-4 years from Austin, Texas. In the study, an adult volunteer read two different but carefully matched storybooks to the children and then asked them about their preferences.

"We read children two books: one rich with causal information, in this case, about why animals behave and look the way they do, and another one that was minimally causal, instead just describing animals' features and behaviors," said Shavlik.

The children appeared to be equally interested and enthusiastic while reading either type of book. However, when asked which book they preferred, they tended to choose the book loaded with causal information, suggesting that the children were influenced by this key difference. "We believe this result may be due to children's natural desire to learn about how the world works," explains Shavlik.

So, how could this help parents and teachers in their quest to get children reading? "If children do indeed prefer storybooks with causal explanations, adults might seek out more causally rich books to read with children - which might in turn increase the child's motivation to read together, making it easier to foster early literacy," said Shavlik.

The study gives the first indicator that causality could be a key to engaging young minds during routine learning activities. Future studies could investigate if causally-rich content can enhance specific learning outcomes, including literacy, language skills and beyond. After all, learning should be about understanding the world around us, not just memorizing information.

Credit: 
Frontiers

New geochemical tool reveals origin of Earth's nitrogen

image: Volcanic gas emissions in Northern Iceland. The research team collected gas samples here that were analyzed as part of this study.

Image: 
(Photo by Peter Barry, © Woods Hole Oceanographic Institution)

Researchers at Woods Hole Oceanographic Institution (WHOI), the University of California Los Angeles (UCLA) and their colleagues used a new geochemical tool to shed light on the origin of nitrogen and other volatile elements on Earth, which may also prove useful as a way to monitor the activity of volcanoes. Their findings were published April 16, 2020, in the journal Nature.

Nitrogen is the most abundant gas in the atmosphere, and is the primary component of the air we breathe. Nitrogen is also found in rocks, including those tucked deep within the planet's interior. Until now, it was difficult to distinguish between nitrogen sources coming from air and those coming from inside the Earth's mantle when measuring gases from volcanoes.

"We found that air contamination was masking the pristine 'source signature' of many volcanic gas samples," says WHOI geochemist Peter Barry, a coauthor of the study.

Without that distinction, scientists weren't able to answer basic questions like: Is nitrogen left over from Earth's formation or was it delivered to the planet later on? How is nitrogen from the atmosphere related to nitrogen coming out of volcanoes?

Barry and lead author Jabrane Labidi of UCLA, now a researcher at Institut de Physique du Globe de Paris, worked in partnership with international geochemists to analyze volcanic gas samples from around the globe--including gases from Iceland and Yellowstone National Park--using a new method of analyzing "clumped" nitrogen isotopes. This method provided a unique way to identify molecules of nitrogen that come from air, which allowed the researchers to see the true gas compositions deep within Earth's mantle. This ultimately revealed evidence that nitrogen in the mantle has most likely been there since our planet initially formed.

"Once air contamination is accounted for, we gained new and valuable insights into the origin of nitrogen and the evolution of our planet," Barry says.

While this new method helps scientists understand the origins of volatile elements on Earth, it may also prove useful as a way of monitoring the activity of volcanoes. This is because the composition of gases billowing from volcanic centers changes prior to eruptions. The mix of mantle and air nitrogen could one day be used as a signal of impending eruptions.

Credit: 
Woods Hole Oceanographic Institution

Light from stretchable sheets of atoms for quantum technologies

image: An artist's impression showing the evolution of quantum light colour when the atomically thin material is stretched.

Image: 
Dr. Trong Toan Tran, one of the senior authors of the work.

The researchers say their results, using an atomically thin material, hexagonal boron nitride, constitute a significant step forward in understanding light-matter interactions of quantum systems in 2D materials, and the journey towards scalable on-chip devices for quantum technologies. The study is published in Advanced Materials.

The ability to finely tune the colors of quantum light has been proposed as a key step in developing quantum network architectures, where photons, the fundamental building blocks of light, are exploited to serve as quantum messengers communicating between distant sites.

The scientists harnessed the extreme stretchability of hexagonal boron nitride, also known as "white graphene", to such an extent that they were able to demonstrate a world record for the largest spectral, color-tuning range from an atomically thin quantum system.

Lead author and UTS PhD candidate Noah Mendelson said that the demonstrated improvement in spectral tuning, by almost an order of magnitude, would spark interest within both academic and industrial groups "working towards the development of quantum networks and related quantum technologies."

"This material was grown in the lab at UTS with some atomic-scale 'crystal-mistakes' that are ultra-bright and extremely stable quantum sources.

"By stretching the atomically-thin material to induce mechanical expansion of the quantum source, this, in turn resulted in the dramatic tuning range of the colors emitted by the quantum light source," he said.

"As the hexagonal boron nitride was stretched to only a few atomic layers thick the emitted light started to change colour from orange to red much like the LED lights on a Christmas tree, but in the quantum realm," says UTS PhD candidate Noah Mendelson.

"Seeing such color-tuning at the quantum level is not just an amazing feat from a fundamnetal point of view, but it also sheds light on many potential applications in the field of quantum science and quantum engineering," he adds.

Unlike other nanomaterials used as quantum light sources, such as diamond, silicon carbide or gallium nitride, hexagonal boron nitride isn't brittle and comes with the unique stretchable mechanical properties of a van der Waals crystal.

"We have always been amazed by the superior properties of hexagonal boron nitride, be they mechanical, electrical or optical. Such properties enable not only unique physics experiments, but could also open doors to a plethora of practical applications in the near future," says UTS Professor Igor Aharonovich, a senior author of the work and chief investigator of the ARC Center of Excellence for Transformative Meta-Optical Materials (TMOS).

The UTS team of experimental physicists, led by Dr Trong Toan Tran, felt they were on to something very intriguing from the very first observation of the exotic phenomenon.

"We quickly teamed up with one of the world's leading theoretical physicists in this field, ANU's Dr. Marcus Doherty to try to understand the underlying mechanisms responsible for the impressive color-tuning range. The joint effort between UTS and ANU led to the complete understanding of the phenomenon, fully supported by a robust theoretical model, " Dr Toan Tran said.

The team is now preparing their follow-up work: a proof-of-principle experiment entangling two originally different-colored photons from two stretched quantum sources in hexagonal boron nitride to form a quantum bit (qubit) -- the building block of a quantum network.

"We think that the success of our work has opened up new avenues for multiple fundamental physics experiments that could lay the foundation for the future quantum internet," concludes Dr Toan Tran.

Credit: 
University of Technology Sydney

Applying mathematics to accelerate predictions for capturing fusion energy

image: PPPL scientists have borrowed a technique from applied mathematics to rapidly predict the behavior of fusion plasma at a much-reduced computational cost.

Image: 
Photo and composite by Elle Starkman/PPPL Office of Communications.

A key issue for scientists seeking to bring the fusion that powers the sun and stars to Earth is forecasting the performance of the volatile plasma that fuels fusion reactions. Making such predictions calls for considerable costly time on the world's fastest supercomputers. Now researchers at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) have borrowed a technique from applied mathematics to accelerate the process.

The technique extrapolates the millisecond behavior of fusion plasmas into longer-term forecasts. By using it, "we were able to demonstrate that accurate predictions of quantities such as plasma temperature profiles and heat fluxes could be achieved at a much reduced computational cost," said Ben Sturdevant, an applied mathematician at PPPL and lead author of a Physics of Plasmas paper that reported the results.

Fusion combines light elements in the form of plasma -- the hot, charged state of matter composed of free electrons and atomic nuclei -- that generates massive amounts of energy. Scientists are working around the world to create and control fusion on Earth for a virtually inexhaustible supply of safe and clean power to generate electricity.

Speeding simulations

Sturdevant applied the mathematical technique to the high-performance XGCa plasma code developed by a team led by physicist C.S. Chang at PPPL. The application greatly sped up simulations of the evolving temperature profile of ions orbiting around magnetic field lines, modeled with gyrokinetics -- a widely used model that provides a detailed microscopic description of the behavior of plasma in strong magnetic fields. Also accelerated was the modeling of collisions between orbiting particles that cause heat to leak from the plasma and reduce its performance.

The application was the first successful use of the technique, called "equation-free projective integration," to model the evolution of the ion temperature as colliding particles escape from magnetic confinement. Equation-free modeling aims to extract long-term macroscopic information from short-term microscopic simulations. The key was improving a critical aspect of the technique called a "lifting operator," which maps the large-scale, or macroscopic, states of plasma behavior onto small-scale, or microscopic, ones.

The modification brought the detailed profile of the ion temperature into sharp relief. "Rather than directly simulating the evolution over a long time-scale, this method uses a number of millisecond simulations to make predictions over a longer time-scale," Sturdevant said. "The improved process reduced the computing time by a factor of four."
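
The core loop of projective integration is compact enough to sketch in a few lines. The toy below is a generic illustration only, with simple relaxation dynamics standing in for the gyrokinetic physics (it is not the XGCa implementation): run a short burst of the expensive microscopic simulation, estimate the macroscopic time derivative from the burst, then leap forward over a much longer step.

```python
import numpy as np

def micro_step(T, dt):
    """Stand-in for one expensive fine-scale step: hypothetical relaxation
    of a temperature profile toward equilibrium (T -> 1)."""
    return T + dt * 0.5 * (1.0 - T)

def projective_integration(T0, dt_micro=1e-3, n_micro=50,
                           dt_macro=0.05, n_macro=20):
    T = np.asarray(T0, dtype=float)
    for _ in range(n_macro):
        # Short microscopic burst (the costly part in a real plasma code).
        prev = T.copy()
        for _ in range(n_micro):
            prev, T = T, micro_step(T, dt_micro)
        # Estimate dT/dt from the last two microscopic states, then take
        # one long "projective" (extrapolation) step.
        dTdt = (T - prev) / dt_micro
        T = T + dt_macro * dTdt
        # In the full method, a "lifting operator" would now rebuild a
        # consistent microscopic state from this macroscopic profile.
    return T

print(projective_integration([0.2, 0.4, 0.6]))  # profile relaxing toward 1.0
```

The speedup comes from the ratio of the projective step to the burst length: most of the evolution is covered by cheap extrapolation rather than expensive microscopic simulation.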

The results, based on tokamak simulations, are general and could be adapted for other magnetic fusion devices including stellarators and even for other scientific applications. "This is an important step in being able to confidently predict performance in fusion energy devices from first-principles-based physics," Sturdevant said.

Expanding the technique

He next plans to consider the effect of expanding the technique to include the evolution of turbulence on the speed of the process. "Some of these initial results are promising and exciting," Sturdevant said. "We're very interested to see how it will work with the inclusion of turbulence."

Coauthors of the paper include Chang, PPPL physicist Robert Hager and physicist Scott Parker of the University of Colorado. Chang and Parker were advisors, Sturdevant said, while Hager provided help with the XGCa code and the computational analysis.

Support for this work comes from the Exascale Computing Project, a collaborative effort of the DOE Office of Science and the National Nuclear Security Administration, and Scientific Discovery through Advanced Computing (SciDAC). Computer simulations were performed at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility.

PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas -- ultra-hot, charged gases -- and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy's Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science.

Credit: 
DOE/Princeton Plasma Physics Laboratory

Unusually clear skies drove record loss of Greenland ice in 2019

image: Average pressure over Greenland in summer 2019, with arrows showing wind direction.

Image: 
Tedesco and Fettweis, 2019

Last year was one of the worst years on record for the Greenland ice sheet, which shrank by hundreds of billions of tons. According to a study published today in The Cryosphere, that mind-boggling ice loss wasn't caused by warm temperatures alone; the new study identifies exceptional atmospheric circulation patterns that contributed in a major way to the ice sheet's rapid loss of mass.

Because climate models that project the future melting of the Greenland ice sheet do not currently account for these atmospheric patterns, they may be underestimating future melting by about half, said lead author Marco Tedesco from Columbia University's Lamont-Doherty Earth Observatory.

The study used satellite data, ground measurements, and climate models to analyze changes in the ice sheet during the summer of 2019.

The researchers found that while 2019 saw the second-highest amount of runoff from melting ice (2012 was worse), it brought the biggest drops in surface mass balance since record-keeping began in 1948. Surface mass balance takes into account gains in the ice sheet's mass -- such as through snowfall -- as well as losses from surface meltwater runoff.

"You can see the mass balance in Greenland as your bank account," said Tedesco. "In some periods you spend more, and in some periods you earn more. If you spend too much you go negative. This is what happened to Greenland recently."

Specifically, in 2019, the ice sheet's surface mass balance dropped by about 320 billion tons below the average for 1981-2010 -- the biggest drop since record-keeping began in 1948. Between 1981 and 2010, the surface mass "bank account" gained about 375 billion tons of ice per year, on average. In 2019, that number was closer to 50 billion tons. And while a gain of 50 billion tons may still sound like good news for an ice sheet, Fettweis explained that it is not, because of another factor: the ice sheet is also shedding hundreds of billions of tons as icebergs break off into the ocean. Under stable conditions, the gains in surface mass balance would be high enough to compensate for the ice that's lost when icebergs calve off. Under the current conditions, the calving far outweighs the surface mass balance gains. Overall, the ice sheet lost an estimated 600 billion tons in 2019, representing a sea level rise of about 1.5 millimeters.
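
As a rough consistency check on that bookkeeping, the figures above line up with the widely used conversion of roughly 362 gigatons of ice per millimetre of global sea-level rise (a standard factor, not a number from this study):

```python
# Rough consistency check (the conversion factor of ~362 Gt of ice per mm
# of global sea level is a standard figure, not taken from the paper).
GT_PER_MM = 362.0

avg_gain_gt = 375.0    # average yearly surface gain, 1981-2010
drop_2019_gt = 320.0   # 2019 drop below that average
total_loss_gt = 600.0  # 2019 net loss once iceberg calving is included

print(avg_gain_gt - drop_2019_gt)  # ~55 Gt, near the ~50 Gt gain cited
print(total_loss_gt / GT_PER_MM)   # ~1.66 mm, near the ~1.5 mm cited
```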

Before now, 2012 was Greenland's worst year for surface mass balance, with a loss of 310 billion tons compared to the 1981-2010 baseline. Yet summer temperatures in Greenland were actually higher in 2012 than in 2019 -- so why did the surface lose so much mass last year?

Tedesco and co-author Xavier Fettweis, from the University of Liège, found that the record-setting ice loss was linked to high-pressure conditions (called anticyclonic conditions) that prevailed over Greenland for unusually long periods of time in 2019.

The high-pressure conditions inhibited the formation of clouds in the southern portion of Greenland. The resulting clear skies let in more sunlight to melt the surface of the ice sheet. And with fewer clouds, there were about 50 billion fewer tons of snowfall than usual to add to the mass of the ice sheet. The lack of snowfall also left dark, bare ice exposed in some places, and because ice doesn't reflect as much sunlight as fresh snow, it absorbed more heat and exacerbated melting and runoff.

Conditions were different, but no better, in the northern and western parts of Greenland, because as the high pressure system spun clockwise, it pulled up warm, moist air from the lower latitudes and channeled it into Greenland.

"Imagine this vortex rotating in the southern part of Greenland," Tedesco explained, "and that is literally sucking in like a vacuum cleaner the moisture and heat of New York City, for example, and dumping it in the Arctic -- in this case, along the west coast of Greenland. When that happened, because you have more moisture and more energy, it promoted the formation of clouds in the northern part."

But instead of bringing snowfall, these warm and moist clouds trapped the heat that would normally radiate off of the ice, creating a small-scale greenhouse effect. These clouds also emitted their own heat, exacerbating melting.

Through these combined effects, the atmospheric conditions of the summer of 2019 led to the highest annual mass loss from Greenland's surface since record-keeping began.

With the help of an artificial neural network, Tedesco and Fettweis found that 2019's large number of days with these high-pressure atmospheric conditions was unprecedented. The summer of 2012, one of Greenland's worst years, also saw anticyclonic conditions.

"These atmospheric conditions are becoming more and more frequent over the past few decades," said Tedesco. "It is very likely that this is due to the waviness to the jet stream, which we think is related to, among other things, the disappearance of snow cover in Siberia, the disappearance of sea ice, and the difference in the rate at which temperature is increasing in the Arctic versus the mid-latitudes." In other words, climate change may make the destructive high-pressure atmospheric conditions more common over Greenland.

Current global climate models are not able to capture these effects of a wavier jet stream. As a result, "simulations of future impacts are very likely underestimating the mass loss due to climate change," said Tedesco. "It's almost like missing half of the melting."

The Greenland ice sheet contains enough frozen water to raise sea levels by as much as 23 feet. Understanding the impacts of atmospheric circulation changes will be crucial for improving projections for how much of that water will flood the oceans in the future, said Tedesco.

Credit: 
Columbia Climate School

Researchers restore sight in mice by turning skin cells into light-sensing eye cells

image: Three months after transplantation, immunofluorescence studies confirmed the survival of the chemically induced photoreceptor-like cells (green). They also show integration of the cells into the layers of the mouse retina.

Image: 
Sai Chavala

Researchers have discovered a technique for directly reprogramming skin cells into light-sensing rod photoreceptors used for vision. The lab-made rods enabled blind mice to detect light after the cells were transplanted into the animals' eyes. The work, funded by the National Eye Institute (NEI), was published April 15 in Nature. The NEI is part of the National Institutes of Health.

Until now, researchers have replaced dying photoreceptors in animal models by creating stem cells from skin or blood cells, programming those stem cells to become photoreceptors, and then transplanting them into the back of the eye. In the new study, scientists show that it is possible to skip the stem-cell intermediary step and directly reprogram skin cells into photoreceptors for transplantation into the retina.

"This is the first study to show that direct, chemical reprogramming can produce retinal-like cells, which gives us a new and faster strategy for developing therapies for age-related macular degeneration and other retinal disorders caused by the loss of photoreceptors," said Anand Swaroop, Ph.D., senior investigator in the NEI Neurobiology, Neurodegeneration, and Repair Laboratory, which characterized the reprogrammed rod photoreceptor cells by gene expression analysis.

"Of immediate benefit will be the ability to quickly develop disease models so we can study mechanisms of disease. The new strategy will also help us design better cell replacement approaches," he said.

Scientists have studied induced pluripotent stem (iPS) cells with intense interest over the past decade. iPS cells are developed in a lab from adult cells -- rather than fetal tissue -- and can be used to make nearly any type of replacement cell or tissue. But iPS cell reprogramming protocols can take six months before cells or tissues are ready for transplantation. By contrast, the direct reprogramming described in the current study coaxed skin cells into functional photoreceptors ready for transplantation in only 10 days. The researchers demonstrated their technique in mouse eyes, using both mouse- and human-derived skin cells.

"Our technique goes directly from skin cell to photoreceptor without the need for stem cells in between," said the study's lead investigator, Sai Chavala, M.D., CEO and president of CIRC Therapeutics and the Center for Retina Innovation. Chavala is also director of retina services at KE Eye Centers of Texas and a professor of surgery at Texas Christian University and University of North Texas Health Science Center (UNTHSC) School of Medicine, Fort Worth.

Direct reprogramming involves bathing the skin cells in a cocktail of five small-molecule compounds that together chemically mediate the molecular pathways relevant to rod photoreceptor cell fate. The result is rod photoreceptors that mimic native rods in appearance and function.

The researchers performed gene expression profiling, which showed that the genes expressed by the new cells were similar to those expressed by real rod photoreceptors. At the same time, genes relevant to skin cell function had been downregulated.

The researchers transplanted the cells into mice with retinal degeneration and then tested their pupillary reflexes, a measure of photoreceptor function after transplantation. Under low-light conditions, constriction of the pupil is dependent on rod photoreceptor function. Within a month of transplantation, six of 14 (43%) treated animals showed robust pupil constriction under low light, compared to none of the untreated controls.

Moreover, treated mice with pupil constriction were significantly more likely to seek out and spend time in dark spaces compared with treated mice with no pupil response and untreated controls. Preference for dark spaces is a behavior that requires vision and reflects the mouse's natural tendency to seek out safe, dark locations as opposed to light ones.

"Even mice with severely advanced retinal degeneration, with little chance of having living photoreceptors remaining, responded to transplantation. Such findings suggest that the observed improvements were due to the lab-made photoreceptors rather than to an ancillary effect that supported the health of the host's existing photoreceptors," said the study's first author Biraj Mahato, Ph.D., research scientist, UNTHSC.

Three months after transplantation, immunofluorescence studies confirmed the survival of the lab-made photoreceptors, as well as their synaptic connections to neurons in the inner retina.

Further research is needed to optimize the protocol to increase the number of functional transplanted photoreceptors.

"Importantly, the researchers worked out how this direct reprogramming is mediated at the cellular level. These insights will help researchers apply the technique not only to the retina, but to many other cell types," Swaroop said.

"If efficiency of this direct conversion can be improved, this may significantly reduce the time it takes to develop a potential cell therapy product or disease model," said Kapil Bharti, Ph.D., senior investigator and head of the Ocular and Stem Cell Translational Research Section at NEI.

Credit: 
NIH/National Eye Institute

Nature publishes review article heralding multispecific drugs as the next wave of drug discovery

In the article "Multispecific drugs herald a new era of biopharmaceutical innovation" published today in Nature, Raymond Deshaies, Ph.D., senior vice president of Global Research at Amgen, discusses how the advent of multispecific drugs is leading the next revolution of drug discovery and development. In the review, Deshaies describes the major classes of multispecific drugs and how they work, provides a perspective on what the future holds, and discusses challenges that must be overcome to make the coming wave of transformative innovation a reality.

Instead of engaging targets on their own like conventional medicines, one class of multispecific drugs tethers the active agent at its intended site of action, whereas a second class mobilizes biological mechanisms to do the heavy lifting. These latter agents bring two things together, like a molecular matchmaker: one end of the drug binds the target to be altered (inhibited, activated or destroyed), while the other end binds the cellular effector that acts on the target. Matchmaker multispecific drugs have the potential to target the 85% of proteins currently thought of as "undruggable."

"In the future medicines could function very differently than conventional medicines do today. In biopharma pipelines across the industry, we're seeing more and more multispecific drugs that can form connections with two or more proteins," said Deshaies. "They include some highly sophisticated structures that function as molecular matchmakers. By inducing proximity between their targets and natural enzymes or even cells, multispecifics can harness the awesome power of biology to go well beyond what conventional drugs can accomplish. This isn't an incremental improvement in drug design, it's a sea change."

There are many versatile natural mechanisms that induced proximity medicines can use to fight disease when matched with the right target, and Amgen is at the forefront with its Bispecific T-cell engager (BiTE®) molecules and Induced Proximity Platform (IPP). Approximately two-thirds of Amgen's pipeline molecules through Phase 1 are multispecifics.

Credit: 
Amgen