
Open data on malaria genomes will help combat drug resistance

Genome variation data on more than 7,000 malaria parasites from 28 endemic countries is released today (24 February) in Wellcome Open Research. It has been produced by MalariaGEN, a data-sharing network of groups around the world who are working together to build high-quality data resources for malaria research and disease control.

This open data release represents the world's largest resource of genomic data on malaria parasite evolution and drug resistance. It provides benchmark data on parasite genome variation that is needed in the search for new drugs and vaccines, and in the development of surveillance tools for malaria control and elimination.

Malaria is a major global health problem causing an estimated 409,000 deaths in 2019, with 67 per cent of deaths occurring in children under five years of age*. This data resource focuses on Plasmodium falciparum, the species of malaria parasite that is responsible for the most common and deadliest form of the disease.

The Malaria Genomic Epidemiology Network (MalariaGEN) provides researchers and control programmes in malaria-endemic countries with access to DNA sequencing technologies and tools for genomic analysis. Founded in 2005, MalariaGEN now has partners in 39 countries, each leading their own studies into different aspects of malaria biology and epidemiology, with the common goal of finding ways to improve malaria control.

This latest publication represents the work of 49 partner studies at 73 locations in Africa, Asia, South America and Oceania, which together contributed 7,113 samples of P. falciparum for genome sequencing. At the Wellcome Sanger Institute, each sample was analysed for over 3 million genetic variants and the data were carefully curated before being returned to partners for use in their own research. This paper brings together the data from all the partner studies to provide an open data resource for the wider scientific community.

Dr Richard Pearson, co-author from the Wellcome Sanger Institute, said: "We have created a data resource that is 'analysis ready' for anyone to use, including those without specialist genetics training. Each annotated sample in the dataset includes key features that are relevant to malaria control, such as resistance to six major antimalarial drugs, and whether it carries particular structural changes that cause diagnostic malaria tests to fail. Just as the Human Genome Project was a resource for analyses of human genome sequence data, we hope this will be one of the main resources for malaria research."

One of MalariaGEN's core principles is to provide clear attribution and recognition of all the groups that have contributed to a data resource. In this dataset, each sample is listed against the partner study that it belongs to, with a description of the scientific aims of the study and the local investigators that led the work.

Professor Dominic Kwiatkowski, co-author from the Wellcome Sanger Institute and the Big Data Institute at the University of Oxford, said: "It has been a huge privilege to collaborate with our MalariaGEN partners around the world to build this data resource. We are proud to see these genomic data being used in publications by our colleagues in malaria-endemic countries and others in the malaria research community. We hope that the new features in this data release will make it accessible to an even wider audience, and our team is now hard at work to produce the next version."

Professor Abdoulaye Djimde, co-author from the University of Science, Techniques and Technologies of Bamako, Mali, said: "A quantitative assessment of how malaria parasites respond to public health interventions is key for a successful and sustainable elimination campaign. Over time, this openly available resource will facilitate research into the malaria parasite's evolutionary processes, which will ultimately inform effective and sustainable malaria control and elimination strategies that will be key in ending this devastating disease."

Credit: 
Wellcome Trust Sanger Institute

Recycle anaesthetics to reduce the carbon emissions of healthcare, study concludes

New research has highlighted the value of recycling general anaesthetic used in routine operations.

In the UK, healthcare accounts for more than five per cent of national greenhouse gas emissions, and for as much as 10 per cent in the US. Inhaled general anaesthetics are particularly potent greenhouse gases, and because very little is metabolised, almost all of the administered dose is breathed out and ends up in the atmosphere. The carbon footprints of the commonly used anaesthetic agents have been thought to vary considerably, from as little as 1.5 kg carbon dioxide equivalent for an hour's anaesthesia with sevoflurane to more than 60 kg for an hour with desflurane. However, research led by a team from the University of Exeter has found that these original estimates failed to consider the manufacture of the anaesthetics, calling into question the initial assumptions and the conclusions drawn from them.
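As a rough, back-of-the-envelope illustration of how strongly agent choice alone affects the clinical-use footprint, here is a minimal sketch using only the per-hour figures quoted above; it deliberately ignores manufacture, carrier gas and fresh-gas flow, which is precisely the gap the Exeter study addresses.

```python
# Illustrative comparison of clinical-use greenhouse impact for two inhaled agents,
# based on the per-hour CO2-equivalent figures quoted in the text. Manufacturing,
# carrier gas and flow rate are deliberately excluded.
CO2E_PER_HOUR_KG = {
    "sevoflurane": 1.5,   # "as little as 1.5" kg CO2e per anaesthetic-hour
    "desflurane": 60.0,   # "more than 60 kg" CO2e per anaesthetic-hour
}

def clinical_use_footprint_kg(agent: str, hours: float) -> float:
    """CO2e from use of the agent alone for a case of the given duration."""
    return CO2E_PER_HOUR_KG[agent] * hours

for agent in CO2E_PER_HOUR_KG:
    print(f"2-hour case with {agent}: ~{clinical_use_footprint_kg(agent, 2):.0f} kg CO2e")
```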

Led by the University of Exeter and funded by Innovate UK, the study, published in Resources, Conservation and Recycling, set out to model different anaesthetic scenarios, including the application of new vapour-capture recycling technology that allows waste anaesthetic to be captured, extracted, purified and remarketed.

The new research built on the most recent analysis of the carbon footprint of inhalational anaesthesia, by Jodi Shearman and colleagues in 2012, and analysed, in a carefully conducted life-cycle analysis, the synthesis of the commonly used anaesthetics sevoflurane, isoflurane and desflurane, the use of nitrous oxide, and the injectable anaesthetic propofol.

Modelling the gas combinations typically used for anaesthesia in the UK, the team found that the carbon footprint of sevoflurane can be as low as that of propofol when an oxygen-and-air mix is used at the lowest flow rate and anaesthetic recycling is in place. However, when the current manufacturing method is taken into account, the carbon footprint of sevoflurane anaesthesia is similar to that of desflurane anaesthesia.

The team also concluded that nitrous oxide adds disproportionately to the carbon footprint of anaesthesia, a finding that supports the current move by the NHS to reduce the use of this particular greenhouse gas. Furthermore, the carrier gas used to administer the anaesthetic is important: an air-and-oxygen mix reduces emissions compared with nitrous oxide.

The value of the research is twofold. First, it provides evidence that unless manufacturing processes are considered, traditional estimates of the carbon footprint of anaesthesia may be serious underestimates; second, it supports the use of waste anaesthetic capture technology to help reduce the carbon footprint of modern anaesthesia.

Lead author Dr Xiaocheng Hu, of the University of Exeter Medical School, said: "Our results are an important step in supporting healthcare providers to reduce their carbon footprint. To reduce the carbon footprint of inhalational anaesthetics, this study encourages the continued reduction in the use of nitrous oxide and recommends a wider adoption of anaesthetic recycling technology."

Credit: 
University of Exeter

Fabricating the future with a new environmentally friendly method of polymerization

image: Halogen-bonding organocatalysts (R-Hal-B) facilitated smooth living cationic polymerization of vinyl (R-Cl) monomers at room temperature, producing good yields of pure polymer and opening doors to low-cost, environmentally friendly vinyl polymerization reactions for industry.

Image: 
Photo courtesy: Dr. Koji Takagi of Nagoya Institute of Technology

Many materials in the modern world--from the plastics that dominate it to the electronic chips that drive it--are constructed of polymers. Given their ubiquity and the evolving requirements of our world, finding better and more efficient methods of making them is an ongoing research concern. In addition, current environmental issues necessitate methods and input materials that are environmentally friendly.

Recent research by scientists at Nagoya Institute of Technology, Japan, has been in this vein, adding a new twist to a polymerization technique that has been around, and successful, since the 1980s: living cationic polymerization, in which the growing polymer chain does not terminate until the monomer is consumed. The scientists have, for the first time, demonstrated metal-free organocatalysis for this reaction at room temperature for vinyl and styrene polymers, two of the most common polymers used in plastics. Their method is not only more efficient than current metal-based methods but also environmentally friendly. Their findings are published in the Royal Society of Chemistry's Polymer Chemistry.

In their study, they first tested the applicability of non-ionic and multidentate (or several electron-pair accepting) halogen-bonding organocatalysts, specifically two iodine-carrying polyfluoro-substituted oligoarenes, to the living cationic polymerization of isobutyl vinyl ether. Mentioning one of their reasons for this choice, Dr. Koji Takagi, lead scientist in the study, explains in an aside: "The non-ionic characteristic is advantageous because the catalyst is soluble in less polar solvents like toluene, which is more suitable for such polymerization of vinyl monomers."

They found that with the tridentate variant, the reaction smoothly progressed even at room temperature, producing good yield--though less than the theoretical limit--in a reasonable amount of time, without the catalyst decomposing or appearing as an impurity in the product. As Dr. Takagi explains, this could be a good advantage over existing metallic catalysts used in industry: "While metal-based catalysts have significantly contributed to the materials sciences over the past century, the contamination of remaining metallic impurities often brings about a decrease in the produced materials' lifetime and performance. We believe that the present finding will lead to the production of highly pure and reliable polymeric materials."

In saying this, he is, of course, referring to the other major finding in the study as well. The second part of their study involved evaluating the applicability of ionic iodoimidazolium catalysts with various counter anions (the negative ions accompanying the positively charged group) to the polymerization of p-methoxystyrene (pMOS) and unsubstituted styrene, the latter of which is more difficult to polymerize than the former.

pMOS polymerized easily at room temperature within two hours, with no decomposition of the catalyst, a bidentate 2-iodoimidazolium salt with a triflate counter anion. Unsubstituted styrene gave the maximum polymer yield in a reaction at -10°C for 24 hours with a catalyst containing a bulky, anion-stabilizing counter ion.

Speaking of the products yielded, Dr. Takagi says: "Although the obtained polymers are not intended for any specific purpose, our methodology is expected to be applied to the synthesis of conductive polymers and degradable polymers, which should not include metallic impurities if they're to be constructed for practical use."

Indeed, the findings are invaluable for moving forward with the more efficient production of polymeric materials for a variety of applications. However, the successful use of organocatalysts at room temperature also offers several other advantages. For one, organocatalysts lack sensitivity to moisture and oxygen, taking care of the sometimes serious problem that the relatively hygroscopic nature of ionic catalysts poses for such controlled polymerization reactions. Further, they are readily available and therefore low cost, and they are not toxic to the environment. And when reactions are conducted at room temperature, the energy requirements are low.

This study thus paves the way for future low-cost electronics made of environmentally friendly materials in sustainable ways.

Credit: 
Nagoya Institute of Technology

Domestic violence survivors at risk of elder abuse

Australian researchers have called for additional services for survivors of intimate partner violence, warning that those who have these experiences are more vulnerable to elder abuse.

Women who survive domestic violence continue to experience negative effects well into their older years but they are also more vulnerable to elder abuse, says Flinders University researcher Dr Monica Cations, lead author of the study published in the American Journal of Geriatric Psychiatry.

"This is the first time this relationship has been demonstrated and tells us that older survivors need close monitoring and prevention efforts to keep them safe from further abuse."

The study examined the psychological and physical impacts of, and the risk of elder abuse associated with, historical domestic (intimate partner) violence in older women, drawing on 12,259 women aged 70-75 enrolled in the Australian Longitudinal Study on Women's Health (ALSWH).

In all, 792 women, or 6.4% of the cohort, reported that they had survived domestic violence in their past, and they had significantly poorer psychological wellbeing throughout their older age than women who had never experienced intimate partner violence (IPV) - confirming the need for clinical monitoring and ongoing support for survivors as they age.

"Women who survive domestic violence can continue to be socially isolated and financially dependent on others, and these factors can make them easy targets for elder abuse," Dr Cations explains.

"Both domestic violence and aged care services need to be aware of the ongoing vulnerability of survivors. Elder abuse prevention efforts can be targeted to help keep domestic violence survivors safe," she says.

Credit: 
Flinders University

Optimality in self-organized molecular sorting

Torino, February 24, 2021 - The eukaryotic cell is the basic unit of animals and plants. Under the microscope, it looks highly structured and subdivided into many membrane-bound compartments. Each compartment has a specific function, and its membrane is populated by specific molecules. How does the cell preserve this amazing internal order and (in the absence of pathologies) avoid degrading into a shapeless jumble of molecules? Such degradation is countered by a continuous process of molecule sorting, by which similar molecules are collected and dispatched to the "right" destinations, similarly to what happens when a house is kept clean and tidy by daily chores. It remains mysterious, however, how a living cell achieves this task without a supervisor directing it.

In a recent Physical Review Letters paper, a collaboration of researchers from Politecnico di Torino, Università di Torino, Italian Institute for Genomic Medicine - IIGM, Istituto Nazionale di Fisica Nucleare - INFN, and Landau Institute for Theoretical Physics (Moscow), hypothesizes that this process of molecular sorting emerges from the combination of two spontaneous mechanisms. The first mechanism is the propensity of similar molecules to aggregate on membranes in the form of "patches", or "droplets", in the same way as water droplets form in a vapor cloud that is cooled down. The second mechanism is the tendency of such droplets to bend the membrane, leading to the formation and further detachment of small vesicles enriched in the molecular components of the original droplets. The various membrane compartments of the eukaryotic cell act thus similarly to the vessels and tubes of a natural distiller, or alembic, that continuously sorts and redirects molecular components toward the appropriate destinations.

In the published work, this process of molecular sorting is studied with mathematical tools and computer simulations, showing that the propensity to aggregation is the main control parameter of the process. For each group of molecules there exists an optimal value of this parameter (neither too large, nor too small), such that the sorting process takes place with the maximum possible speed. Actually, some propensity to molecular aggregation is needed to drive the process, but when the propensity to aggregation is too large, the molecules "freeze" in a large number of small "droplets" that grow very slowly, and the overall sorting process slows down. Experimental observations of this distillation process in cells isolated from the blood vessels of human umbilical cords confirm this theoretical picture, and suggest that evolution may have led the cells to work in the optimal parameter region, where the sorting process achieves maximum efficiency.
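The qualitative trade-off can be illustrated with a toy calculation (purely illustrative functional forms and constants, not the model analysed in the paper): suppose the time to sort a molecule is the sum of a waiting term that shrinks as the aggregation propensity grows, and a droplet-growth term that grows with it because many small droplets each grow slowly. The sum is minimised at an intermediate value.

```python
import numpy as np

# Toy sketch of the trade-off, NOT the authors' model:
#   time to join a droplet                      ~ a / g  (aggregation is rare when g is small)
#   time for a droplet to reach extraction size ~ b * g  (many small, slow-growing droplets when g is large)
# with g the propensity to aggregate and a, b arbitrary illustrative constants.
a, b = 1.0, 0.25
g = np.linspace(0.05, 10.0, 400)        # aggregation propensity (arbitrary units)
sorting_time = a / g + b * g

g_opt = g[np.argmin(sorting_time)]
print(f"fastest sorting near g ~ {g_opt:.2f} (analytic optimum sqrt(a/b) = {np.sqrt(a / b):.2f})")
```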

These findings are of particular interest, since the misregulation of molecular sorting is a hallmark of severe pathologies, such as cancer. The theoretical identification of the parameters that control the process is an important first step toward a better understanding of the origin of such disruptions and the development of therapies.

Credit: 
Politecnico di Torino

Climate impacts drive east-west divide in forest seed production

image: Mature western forests, such as this stand of mixed conifers in California's Sequoia National Park, may be less able than younger forests back East to reseed themselves and regenerate following large-scale diebacks linked to climate change, a new Duke University-led study finds.

Image: 
USGS

DURHAM, N.C. -- Younger, smaller trees that comprise much of North America's eastern forests have increased their seed production under climate change, but older, larger trees that dominate forests in much of the West have been less responsive, a new Duke University-led study finds.

Declines in these trees' seed production, or fecundity, could limit western forests' ability to regenerate following the large-scale diebacks linked to rising temperatures and intensifying droughts that are now occurring in many states and provinces.

This continental divide, reported for the first time in the new study, "could dramatically alter the composition and structure of 21st century North American forests," said James S. Clark, Nicholas Distinguished Professor of Environmental Science at Duke, who led the research.

Knowing the contrasting responses occur -- and understanding why they happen -- will help scientists more accurately predict future changes to North American forests and develop conservation and management strategies to mitigate the changes, he said.

Researchers from 48 institutions collaborated with Clark on the peer-reviewed study, which appears Feb. 23 in Nature Communications.

Fecundity is a measure of trees' capacity to regenerate after diebacks and other large-scale disturbances by dispersing seeds to habitats where their odds of future survival are more favorable. It's an essential factor for determining future forest responses to climate change, but like many ecological processes it's noisy, highly variable and incredibly hard to estimate.

Fecundity changes over time, based on changes in a tree's size, growth rate or access to light, water and other resources, and is driven by two indirect climate impacts -- the effects of growth that depend on climate, and the effects of climate that depend on tree size -- that currently aren't accounted for in the models used to predict future change.

"It was the only major demographic process driving forest response to climate change that we lacked field-based estimates on," Clark said.

To address this problem, he devised new statistical software that allowed him to synthesize decades of raw data on size, growth, canopy spread, and access to resources for nearly 100,000 individual trees at long-term research sites and experimental forests across North America. The unfiltered raw data revealed what previous meta-analyses based on averaged measurements had missed: At the continental scale, fecundity increases as a tree grows larger, up to a point. And then it begins to decline.
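As a minimal illustration of what detecting such a hump can look like on data, here is a short sketch on synthetic numbers invented for this example; it is not the MASTIF data and not the study's statistical software. It fits a quadratic in log diameter to simulated seed counts and reads off the size at which fecundity peaks.

```python
import numpy as np

# Synthetic, illustrative example of a hump-shaped size-fecundity relationship.
rng = np.random.default_rng(0)
diameter_cm = rng.uniform(5, 120, 2000)            # hypothetical tree diameters
x = np.log(diameter_cm)
true_log_fecundity = -1.0 + 3.0 * x - 0.4 * x**2   # rises with size, then declines
seeds = rng.poisson(np.exp(true_log_fecundity))    # noisy simulated seed counts

# Crude quadratic fit on log(seeds + 1); the peak sits at the parabola's vertex.
c2, c1, c0 = np.polyfit(x, np.log(seeds + 1.0), 2)
x_peak = -c1 / (2 * c2)
print(f"estimated fecundity peak near {np.exp(x_peak):.0f} cm diameter")
```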

"This explains the East-West divide. Most trees in the East are young, growing fast and entering a size class where fecundity increases, so any indirect impact from climate that spurs their growth also increases their seed production," Clark said. "We see the opposite happening with the older, larger trees in the West. There are small and large trees in both regions, of course, but the regions differ enough in their size structure to respond in different ways.

"Now that we understand, in aggregate, how this all works, the next step is to apply it to individual species or stands and incorporate it into the models we use to predict future forest changes," he said.

The data used in the study came from trees in the Mast Inference and Prediction (MASTIF) monitoring network, which includes more than 500 long-term field research sites nationwide, including plots that are also part of the National Ecological Observation Network (NEON).

Credit: 
Duke University

Basic cell health systems wear down in Huntington's disease, novel analysis shows

image: Geomic created plots of the data that mapped differences pertaining to 4,300 genes along dimensions such as mouse age, the extent of Huntington's-causing mutation, and cell type (certain neurons and astrocytes in a region of the brain called the striatum are especially vulnerable in Huntington's). The plots took the form of geometric shapes, like crumpled pieces of paper, whose deformations could be computationally compared to identify genes whose expression changed most consequentially amid the disease. The researchers could then look into how abnormal expression of those genes could affect cellular health and function.

Image: 
Sorbonne Université

Using an innovative computational approach to analyze vast brain cell gene expression datasets, researchers at MIT and Sorbonne Université have found that Huntington's disease may progress to advanced stages more because of a degradation of the cells' health maintenance systems than because of increased damage from the disease pathology itself.

The analysis yielded a trove of specific gene networks governing molecular pathways that disease researchers may now be able to target to better sustain brain cell health amid the devastating neurodegenerative disorder, said co-senior author Myriam Heiman, Associate Professor in MIT's Department of Brain and Cognitive Sciences and an investigator at The Picower Institute for Learning and Memory. Christian Neri of the Sorbonne's Centre National de la Recherche Scientifique is the co-senior and co-corresponding author of the study published in eLife.

"If we can maintain the expression of these compensatory mechanisms, it may be a more effective therapeutic strategy than just trying to affect one gene at a time," said Heiman, who is also a member of the Broad Institute of MIT and Harvard.

In the study, the team led by co-corresponding author Lucile Megret created a process called "Geomic" to integrate two large sets of data from Heiman's lab and one more from UCLA researcher William Yang. Each dataset highlighted different aspects of the disease, such as its effect on gene expression over time, how those effects varied by cell type, and the fate of those cells as gene expression varied.

Geomic created plots of the data that mapped differences pertaining to 4,300 genes along dimensions such as mouse age, the extent of Huntington's-causing mutation, and cell type (certain neurons and astrocytes in a region of the brain called the striatum are especially vulnerable in Huntington's). The plots took the form of geometric shapes, like crumpled pieces of paper, whose deformations could be computationally compared to identify genes whose expression changed most consequentially amid the disease. The researchers could then look into how abnormal expression of those genes could affect cellular health and function.

Big breakdowns

The Geomic analysis highlighted a clear pattern. Over time, the cells' responses to the disease pathology--linked to toxic expansions in a protein called Huntingtin--largely continued intact, but certain highly vulnerable cells lost their ability to sustain gene expression needed for some basic systems that sustain cell health and function. These systems initially leapt into action to compensate for the disease but eventually lost steam.

One of the biggest such breakdowns in an especially vulnerable cell type, Drd-1 expressing neurons, was in maintaining the health of energy-producing components called mitochondria. Last year, Heiman's lab published a study in Neuron showing that in some Huntington's-afflicted neurons, RNA leaks out of mitochondria, provoking a misguided immune response that leads to cell death. The new findings affirm a key role for mitochondrial integrity and implicate key genes such as Ndufb10, whose diminished expression may undermine the cell's network of genes supporting the system.

The Geomic approach also highlighted an especially dramatic decline, in the Drd-1 neurons and in astrocytes, in the expression of multiple genes in pathways that govern endosome regulation, an essential process for determining where proteins go within the cell and when they are degraded. Here, too, key genes like Rab8b and Rab7 emerged as culprits within broader gene networks.

The researchers went on to validate some of their top findings by confirming that key alterations of gene expression were also present in post-mortem samples of brain tissue from human Huntington's patients.

While mitochondrial integrity and endosome regulation are two particularly strong examples, Heiman said, the study lists many others. The Geomic source code and all the data and visualizations it yielded are publicly accessible on a website produced by the authors.

"We've created a database of future targets to probe," Heiman said.

Neri added: "This database sets a precise basis for studying how to properly re-instate brain cell compensation in Huntington's disease, and possibly in other neurodegenerative diseases that share common compensatory mechanisms with Huntington's disease."

Key among these could be regulators of genetic transcription in these affected pathways, Heiman said.

"One promising future direction is that among the genes that we implicate in these network effects, some of these are transcription factors," she said. "They may be key targets to bring back the compensatory responses that decline."

A new way to study disease

While the researchers first applied Geomic's method of "shape deformation analysis" to Huntington's disease, it will likely be of equal utility for studying any neurodegenerative disease like Alzheimer's or Parkinson's, or even other brain diseases, the authors said.

"This is a new approach to study systems level changes, rather than just focusing on a particular pathway or a particular gene," said Heiman. "I think this is a really nice proof of principle and hopefully we can apply this type of methodology to the study of other genomic data from other disease studies."

Credit: 
Picower Institute at MIT

Whale sharks show remarkable capacity to recover from injuries

image: Whale Shark with dorsal fin damage

Image: 
Marine Conservation Society Seychelles

A new study has for the first time explored the rate at which the world's largest fish, the endangered whale shark, can recover from its injuries. The findings reveal that lacerations and abrasions, increasingly caused by collisions with boats, can heal in a matter of weeks, and the researchers found evidence of partially removed dorsal fins re-growing.

This work, published in the journal Conservation Physiology, comes at a critical time for these large sharks, which can reach lengths of up to 18 metres. Other recent studies have shown that as their popularity within the wildlife tourism sector increases, so do interactions with humans and boat traffic. As a result, these sharks face an additional source of injury on top of natural threats, and some of these ocean giants exhibit scars caused by boat collisions. Until now, very little was known about the impact of such injuries and how the sharks recover from them.

"These baseline findings provide us with a preliminary understanding of wound healing in this species" says lead author Freya Womersley, a PhD student with University of Southampton based at the Marine Biological Association, UK. "We wanted to determine if there was a way of quantifying what many researchers were anecdotally witnessing in the field, and so we came up with a technique of monitoring and analysing injuries over time".

The unique spot markings of whale sharks allow researchers across the world to identify individuals and monitor regional populations, making use of websites such as WildBook where people can upload photos of their shark sightings. For this study, the research team examined photographs taken by citizen scientists, researchers and the whale shark tourism industry at two sites in the Indian Ocean where the sharks frequently gather, and used these markings to standardise between images. This method allowed the team to compare photographs taken without specialist equipment over time and increased the amount of data available for assessing and monitoring how individual wounds changed.
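A hypothetical sketch of that standardisation idea: because a shark's spot pattern is fixed, the pixel distance between two reference spots gives each photo its own scale, so a wound measured in two very different photographs can be compared on a common footing. The spot coordinates, reference distance (which could equally be a relative unit) and wound areas below are invented for illustration and are not the study's actual procedure.

```python
import math

def cm_per_pixel(spot_a, spot_b, reference_distance_cm):
    """Scale for one photo, from two reference spots a known (or relative) distance apart."""
    return reference_distance_cm / math.dist(spot_a, spot_b)

# Photo 1 (wide shot) and photo 2 (close-up) of the same shark, taken weeks apart
scale_1 = cm_per_pixel((120, 340), (220, 360), reference_distance_cm=30.0)
scale_2 = cm_per_pixel((400, 900), (690, 955), reference_distance_cm=30.0)

wound_cm2_photo1 = 5200 * scale_1**2    # wound area measured in pixels in photo 1
wound_cm2_photo2 = 28000 * scale_2**2   # same wound re-measured in photo 2
print(f"wound area: {wound_cm2_photo1:.0f} cm2 -> {wound_cm2_photo2:.0f} cm2")
```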

"By using our new method, we were able to determine that these sharks can heal from very serious injuries in timeframes of weeks and months" says Freya. "This means that we now have a better understanding of injury and healing dynamics, which can be very important for conservation management."

The study also highlighted whale sharks' capability to re-grow a partially amputated first dorsal fin, which, to the authors' knowledge, is the first time a shark has ever been scientifically reported exhibiting this phenomenon. Of further interest, their unique spot markings were also observed forming over previously injured spots, which suggests that these beautiful markings are an important feature for this species and persist even after being damaged.

These healing capabilities suggest that whale sharks may be resilient to impacts caused by humans, but the authors of this work note that there may be many other less recognisable impacts of injuries to these animals, such as reduced fitness, foraging capacity and altered behaviours; so injuries need to be prevented where possible. They also found variation within healing rates, with lacerations, typical of propeller injuries, taking longer to heal than other kinds of wounds, highlighting the need for further research to determine the influence of environmental and more nuanced individual factors on injury healing.

Careful management of whale shark aggregation sites, which occur seasonally at a number of coastal regions around the world, is essential to ensure the sharks are protected while spending time in areas of high human activity. If sharks are encountered with injuries in these locations, research such as this can help local teams estimate how old the injury is and make assessments about where and how it might have been inflicted based on knowledge of whale shark movements and tendency to return to the same locations.

Recent research published in Nature found that global populations of pelagic sharks have declined by 71% over the last 50 years, and highlighted the need to enforce stricter protections for this important group of ocean inhabitants.

Freya concludes, "Whale sharks have been experiencing population declines globally from a variety of threats as a result of human activity. Therefore, it is imperative that we minimise human impacts on whale sharks and protect the species where it is most vulnerable, especially where human-shark interactions are high.

"There is still a long way to go in understanding healing in whale sharks, and in shark species in general, but our team hope that baseline studies such as this one can provide crucial evidence for management decision makers that can be used to safeguard the future of whale sharks."

Credit: 
University of Southampton

Terahertz imaging of graphene paves the way to industrialisation

image: Graphene Flagship researchers have developed a new measurement standard for the analysis of graphene and layered materials that could accelerate production and optimise device fabrication.

Image: 
Graphene Flagship

X-ray scans revolutionised medical treatments by allowing us to see inside humans without surgery. Similarly, terahertz spectroscopy penetrates graphene films allowing scientists to make detailed maps of their electrical quality, without damaging or contaminating the material. The Graphene Flagship brought together researchers from academia and industry to develop and mature this analytical technique, and now a novel measurement tool for graphene characterisation is ready.

The effort was possible thanks to the collaborative environment enabled by the Graphene Flagship European consortium, with participation by scientists from Graphene Flagship partners DTU, Denmark, IIT, Italy, Aalto University, Finland, AIXTRON, UK, imec, Belgium, Graphenea, Spain, Warsaw University, Poland, and Thales R&T, France, as well as collaborators in China, Korea and the US.

Graphene is often 'sandwiched' between many different layers and materials to be used in electronic and photonic devices. This complicates the process of quality assessment. Terahertz spectroscopy makes things easier. It images the encapsulated materials and reveals the quality of the graphene underneath, exposing imperfections at critical points in the fabrication process. It is a fast, non-destructive technology that probes the electrical properties of graphene and layered materials, with no need for direct contact.

The development of characterisation techniques like terahertz spectroscopy is fundamental to accelerating large-scale production, as they guarantee that graphene-enabled devices are made consistently and predictably, without flaws. Quality control precedes trust. Thanks to other developments pioneered by the Graphene Flagship, such as roll-to-roll production of graphene and layered materials, fabrication technology is ready to take the next step. Terahertz spectroscopy allows us to ramp up graphene production without losing sight of the quality.

"This is the technique we needed to match the high-throughput production levels enabled by the Graphene Flagship," explains Peter Bøggild from Graphene Flagship partner DTU. "We are confident that terahertz spectroscopy in graphene manufacturing will become as routine as X-ray scans in hospitals," he adds. "In fact, thanks to terahertz spectroscopy you can easily map even meter-scale graphene samples without touching them, which is not possible with some other state-of-the-art techniques." Furthermore, the Graphene Flagship is currently studying how to apply terahertz spectroscopy directly into roll-to-roll graphene production lines, and speed up the imaging.

Collaboration was key to this achievement. Graphene Flagship researchers in academic institutions worked closely with leading graphene manufacturers such as Graphene Flagship partners AIXTRON, Graphenea and imec. "This is the best way to ensure that our solution is relevant to our end-users, companies that make graphene and layered materials on industrial scales," says Bøggild. "Our publication is a comprehensive case study that highlights the versatility and reliability of terahertz spectroscopy for quality control and should guide our colleagues in applying the technique to many industrially relevant substrates such as silicon, sapphire, silicon carbide and polymers," he adds.

Setting standards is an important step for the development of any new material, to ensure it is safe, genuine and will offer a performance that is both reliable and consistent. That is why the Graphene Flagship has a dedicated work-group focused on the standardisation of graphene, measurement and analytical techniques and manufacturing processes. The newly developed method for terahertz spectroscopy is on track to become a standard technical specification, thanks to the work of the Graphene Flagship Standardisation Committee. "This will undoubtedly accelerate the uptake of this new technology, as it will outline how analysis and comparison of graphene samples can be done in a reproducible way," explains Peter Jepsen from Graphene Flagship Partner DTU, who co-authors the study. "Terahertz spectroscopy is yet another step to increase the trust in graphene-enabled products," he concludes.

Amaia Zurutuza, co-author of the paper and Scientific Director at Graphene Flagship partner Graphenea, says: "At Graphenea, we are convinced that terahertz imaging can enable the development of quality control techniques capable of matching manufacturing throughput requirements and providing relevant graphene quality information, which is essential in our path towards the successful industrialisation of graphene."

Thurid Gspann, the Chair of the Graphene Flagship Standardisation Committee, says: "This terahertz [spectroscopy] technique is expected to be widely adopted by industry. It does not require any particular sample preparation and is a mapping technique that allows one to analyse large areas in a time efficient way."

Marco Romagnoli, Graphene Flagship Division Leader for Electronics and Photonics Integration, adds: "The terahertz spectroscopy tool for wafer-scale application is a state-of-the-art, high TRL system to characterise multilayer stacks on wafers that contain CVD graphene. It works in a short time and with good accuracy, and provides the main parameters of interest, such as carrier mobility, conductivity, scattering time and carrier density. This high-value technical achievement is also an example of the advantage of being part of a large collaborative project like the Graphene Flagship."

Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship and Chair of its Management Panel, adds: "Yet again, Graphene Flagship researchers are pioneering a new characterisation technique to facilitate the development of graphene technology. This helps us progress steadily on our innovation and technology roadmap and will benefit the industrial uptake of graphene in a wide range of applications."

Credit: 
Graphene Flagship

Seasonal variation in daylight influences brain function

image: Brain opioid receptors measured with positron emission tomography (A), and regions where opioid receptor density varied seasonally.

Image: 
University of Turku

Seasons have an impact on our emotions and social life. Negative emotions are more subdued in the summer, whereas seasonal affective disorder rates peak during the darker winter months. Opioids regulate both mood and sociability in the brain.

In the study conducted at the Turku PET Centre, Finland, researchers compared how the length of daylight hours affected the opioid receptors in humans and rats.

"In the study, we observed that the number of opioid receptors was dependent on the time of the year the brain was imaged. The changes were most prominent in the brain regions that control emotions and sociability. The changes in the opioid receptors caused by the variation in the amount of daylight could be an important factor in seasonal affective disorder," says Postdoctoral Researcher Lihua Sun from the Turku PET Centre and the University of Turku.

Animal studies confirm the significance of daylight

The researchers wanted to ensure that the changes in brain function were caused by the amount of daylight and not some other factor. To achieve this, they measured the opioid receptors in rats when the animals were kept in standard conditions where only the length of daylight hours was changed. The results were similar to those observed in humans.

"On the basis of the results, the duration of daylight is a particularly critical factor in the seasonal variation of opioid receptors. These results help us to understand the brain mechanisms behind seasonal affective disorder," says Professor Lauri Nummenmaa from the Turku PET Centre.

The study was conducted with positron emission tomography (PET), and altogether 204 volunteers participated as subjects. A small dose of a radioactive tracer that binds to the brain's opioid receptors was injected into the subjects' blood circulation, and the decay of the tracer was measured with a PET scanner. The study is based on the AIVO database hosted by Turku University Hospital and the Turku PET Centre, which contains different in vivo molecular brain scans for extensive analyses. Furthermore, the number of opioid receptors was studied with PET imaging of rats. The animal studies were conducted at the Central Animal Laboratory, University of Turku, with the kind support of Professor Anne Roivainen and Dr Emrah Yatkin.

Credit: 
University of Turku

'Problem of missing ice' finally solved by movement of the Earth's crust

image: Greenland glaciers 2018

Image: 
NIOZ, Kim Sauter

During ice ages, the global mean sea level falls because large amounts of sea water are stored in the form of huge continental glaciers. Until now, mathematical models of the last ice age could not reconcile the height of the sea level with the thickness of the glacier masses: the so-called Missing Ice Problem. With new calculations that take into account crustal, gravitational and rotational perturbation of the solid Earth, an international team of climate researchers has succeeded in resolving the discrepancy, among them Dr. Paolo Stocchi from the Royal Netherlands Institute for Sea Research (NIOZ). The study, now published in the journal Nature Communications, could significantly advance research into the climate of the past and help to make better sea-level predictions for the future.

Paolo Stocchi: "Our new reconstruction revolutionizes what we thought about the global continental ice mass during the Last Ice Age. The total mass of the Last Ice Age glaciers was 20% smaller and accumulated faster than previously thought."

Growing and melting glaciers

With the alternation of ice ages and warm ages, the glaciers on Greenland, North America and Europe grow and shrink over the course of tens of thousands of years. The more water is stored in the form of ice, the less water there is in the oceans - and the lower the sea level. Climate researchers want to find out how much the glaciers could melt in the course of man-made climate change in the next centuries and how much the sea level will rise as a result. To do this, they look into the past. If one succeeds in understanding the growth and melting of the glaciers during the last ice and warm periods, then conclusions can be drawn for the future.

The "problem of the missing ice"

But this look into the past is difficult because the thickness of the glaciers and the height of the sea level can no longer be measured directly in retrospect. Climate researchers therefore have to laboriously collect clues that can be used to reconstruct the past. However, depending on which clues are collected, the results differ and seem to contradict one another. Previous models and calculations led to the so-called "missing ice" riddle. Geological evidence from ocean areas suggests that sea level might have been 120-140 m lower than today during the last Ice Age 20,000 years ago, although the uncertainty of these data is quite large. To account for such low sea levels, as much as twice the current mass of the Greenland ice sheet would have to have been frozen worldwide. However, according to climate models, the glacier masses could not possibly have been that large at the time, and there is no geological evidence at higher latitudes for such a large mass of ice. How, then, to explain that the water was neither in the sea nor stored in the freezer on land?

80,000 years of ice sheets and sea level changes accurately reconstructed

This problem has now been solved with a new method by an international team of scientists led by Dr. Evan Gowan (Alfred-Wegener-Institut, Helmholtz-Zentrum für Polar- und Meeresforschung, in Bremerhaven), among them the geophysicist Dr. Paolo Stocchi from the Royal Netherlands Institute for Sea Research. "We have found a way to accurately reconstruct the last 80,000 years of ice sheets and sea level changes," says Dr. Stocchi, who contributed to the creation of the novel global ice sheet model by including crustal, gravitational and rotational perturbation of the solid Earth. The new model explains past local sea levels that are lower than today's by incorporating the relative motion of the sea surface and the Earth's crust. In this way, past local sea levels that are much lower than today's can be modelled without requiring an unrealistically large global ice mass: the solid Earth motions do the trick.

Understanding the behavior of glaciers by looking at the Earth's mantle

With the new method, the scientists have now reconciled sea level and glacier mass: according to their calculations, the sea level must have been around 116 meters lower than today at that time, and there is no discrepancy in terms of glacier mass. Unlike previous global models, the new approach takes a closer look at the geological conditions near and underneath the formerly glaciated areas, rather than in the far-field ocean areas: How steep were the mountain slopes? Where did glaciers reach the sea? Did friction interfere with ice flow velocity? And how much? The new model includes all these local factors. It also accounts for ice- and water-load-induced crustal deformations. The latter are important because they alter the topography of the land, thus affecting the ice flow and eventually the volume of glaciers. "Crustal deformations are regulated by solid Earth physical parameters such as viscosity," says Paolo Stocchi. The Earth's mantle, in fact, behaves like a highly viscous fluid on geological time scales and deforms under the weight of a fluctuating ice mass. "By assuming different viscosities of the Earth's mantle, we model different evolutions of the land topography, which then result in different scenarios for the ice masses." These can now be brought into harmony with the marine geological evidence from the ocean areas, without the need for extra mass.

The established isotope model needs to be revised

The technical article by Evan Gowan and his team takes a critical look at the method for estimating glacier masses that has been the standard in science for many years: the method of measuring oxygen isotopes. Isotopes are atoms of the same element that differ in the number of their neutrons and therefore have different weights. For example, there is the lighter 16O isotope and the heavier 18O isotope of oxygen. The theory says that the light 16O evaporates from the sea and the heavy 18O remains in the water. Accordingly, during ice ages, when large mainland glaciers form and the amount of water in the sea decreases, the 18O concentration in the oceans must increase. But as it turns out, this established method results in discrepancies when it comes to reconciling sea level and glacier mass for the time 20,000 years ago and before.
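For reference, the isotope method rests on the standard delta notation of isotope geochemistry (a textbook definition, not specific to this study): the 18O/16O ratio of a seawater sample is expressed relative to a standard, in parts per thousand, so that growing land ice leaves the ocean enriched in 18O and pushes the value up.

```latex
\delta^{18}\mathrm{O} \;=\; \left( \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1 \right) \times 1000
```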

"The isotope model has been used widely for years to determine the volume of ice in glaciers up to many millions of years before our time. Our work now raises doubts about the reliability of this method," says Paolo Stocchi. His goal now is to use the new model to quantify the current rates of crustal deformation in the North Sea and Wadden Sea, thus revealing the actual contribution of current climate change to the regional relative sea-level changes.

Credit: 
Royal Netherlands Institute for Sea Research

Fat cells may influence how the body reacts to heart failure, study shows

image: Pediatric cardiology researcher Jason Dyck and his team found that limiting the release of fat into the body from fat cells during heart failure led to better outcomes in mice--and could have the potential to eventually do the same in human patients.

Image: 
University of Alberta

University of Alberta researchers have found that limiting the amount of fat the body releases into the bloodstream from fat cells during heart failure could help improve outcomes for patients.

In a recent study published in the American Journal of Physiology, Jason Dyck, professor of pediatrics in the Faculty of Medicine & Dentistry and director of the U of A's Cardiovascular Research Centre, found that mice with heart failure that were treated with a drug blocking the release of fat into the bloodstream from fat cells saw less inflammation in the heart and throughout the body, and had better outcomes than a control group.

"Many people believe that, by definition, heart failure is only a condition of the heart. But it's much broader and multiple organs are affected by it," said Dyck, who holds the Canada Research Chair in Molecular Medicine and is a member of the Alberta Diabetes Institute and the Women and Children's Health Research Institute. "What we've shown in mice is that if you can target fat cells with a drug and limit their ability to release stored fat during heart failure, you can protect the heart and improve cardiac function.

"I think it really opens the door for other avenues of investigation and therapies for treating heart failure," Dyck noted.

During times of stress, such as heart failure, the body releases stress hormones, such as epinephrine and norepinephrine, to help the heart compensate. But because the heart can't function any better--and is in fact damaged further by being forced to pump faster--the body releases more stress hormones and the process cascades, with heart function continuing to decline. This is why a common treatment for heart failure is beta-blocker drugs, which are designed to block the effects of stress hormones on the heart.

The release of stress hormones also triggers the release of fat from its storage deposits in fat cells into the bloodstream to provide extra energy to the body, a process called lipolysis. Dyck's team found that during heart failure, the fat cells in mice were also becoming inflamed throughout the body, mobilizing and releasing fat faster than normal and causing inflammation in the heart and rest of the body. This inflammation put additional stress on the heart, adding to the cascade effect, increasing damage and reducing heart function.

"Our research began by looking at how the function of one organ can affect other organs, so I thought it was very fascinating to find that a fat cell can influence cardiac function in heart failure," Dyck said. "Fortunately, we had a drug that could inhibit fat mobilization from fat cells in mice, which actually protected the hearts from damage caused by inflammation."

Dyck points out that although his results are promising, more work is needed to better understand the exact mechanisms at play in the process and develop a drug that could work in humans.

"This work is a proof-of-concept showing that abnormal fat-cell function contributes to worsening heart failure, and now we're working on understanding the mechanisms of how the drug works to limit lipolysis better," he said. "Once we get that, that's the launchpad for making sure it's safe and efficacious, then advancing it to our chemists, and then maybe some early trials in humans."

Dyck said the findings--and a better understanding of how organ functions affect other organs--could be used to develop new approaches to several other diseases.

"We know that people have high rates of lipolysis when they have heart failure, so I presume this approach would benefit all types of heart failure," he said. "But if you consider that inflammation is associated with a wide variety of different diseases, like cancer, diabetes or other forms of heart disease, then this approach could have a much wider benefit."

Credit: 
University of Alberta Faculty of Medicine & Dentistry

Alaska thunderstorms may triple with climate change

Warming temperatures will potentially alter the climate in Alaska so profoundly later this century that the number of thunderstorms will triple, increasing the risks of widespread flash flooding, landslides, and lightning-induced wildfires, new research finds.

In a pair of new papers, a research team led by scientists at the Paris Sciences and Letters University and the National Center for Atmospheric Research (NCAR) shows that the sea ice around Alaska could largely give way to open water in the warmer months, creating an ample source of moisture for the atmosphere. This moisture, combined with warmer temperatures that can hold more water vapor, would turbocharge summertime storms over Alaska by the end of the century under a high greenhouse gas emissions scenario.

"Alaska can expect three times as many storms, and those storms will be more intense," said NCAR scientist Andreas Prein, a co-author of the new papers. "It will be a very different regime of rainfall."

The thunderstorms would extend throughout Alaska, even in far northern regions where such storms are virtually unheard of. In more southern regions of the state that currently experience occasional thunderstorms, the storms would become far more frequent and peak rainfall rates would increase by more than a third.

The scientists used a suite of advanced computer models and a specialized algorithm to simulate future weather conditions and to track the sources of moisture in the atmosphere. They noted that the impacts in Alaska could be significantly reduced if society curbed emissions.

The findings have far-reaching implications for the 49th state. Flooding is already the most expensive type of natural disaster in central Alaska, and wildfires ignited by lightning strikes are a major hazard.

"We suspect that the increasing number of thunderstorms might have significant impacts, such as amplifying spring floods or causing more wildfire ignitions," said Basile Poujol, a scientist with the Paris Sciences and Letters University and lead author of both studies. "Further studies are necessary to determine whether these impacts are likely to occur and, if so, their potential effects on ecosystems and society."

The studies, published in Climate Dynamics, were funded by the National Science Foundation, which is NCAR's sponsor, and by the European Research Council.

A major climate shift

Alaska is expected to warm by 6-9 degrees Celsius (about 11-16 degrees Fahrenheit) by the end of the century if society pumps out high amounts of greenhouse gases. The vast state is already experiencing damaging impacts from warmer temperatures, including longer wildfire seasons, record heat waves, and landslides and sinkholes caused by melting permafrost.

If thunderstorms become more common in Alaska, it would represent a major shift in the state's climate.

Organized convective storms, including powerful systems of thunderstorms, are a frequent occurrence in the tropics and midlatitudes, where the atmosphere is moist and solar heating creates instability and rapidly rising parcels of air. In contrast, the colder Arctic provides an inhospitable environment for high-impact thunderstorms.

For the first paper, which focused on how Alaskan thunderstorms may change later this century, the authors compared computer simulations of Alaska's current-day climate with the conditions expected at the end of the century. They fed data from global climate models into the higher-resolution NCAR-based Weather Research and Forecasting (WRF) model, which enabled them to generate detailed simulations of Alaska's weather and climate. They then applied a specialized storm-tracking algorithm, focusing on large thunderstorm clusters in the simulations that extended for dozens to hundreds of miles and unleashed more than an inch of rain per hour - the type of event that could lead to far-reaching flash flooding and landslides.
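To illustrate the kind of threshold-and-label step such a tracking algorithm can start from, here is a minimal sketch: flag grid cells whose hourly rain rate exceeds roughly an inch (about 25 mm) per hour, group contiguous cells into clusters, and keep only the large ones. The grid, threshold and minimum cluster size are illustrative assumptions, not the study's actual algorithm.

```python
import numpy as np
from scipy import ndimage

RAIN_THRESHOLD_MM_H = 25.0   # ~1 inch per hour
MIN_CELLS = 20               # minimum cluster size to keep (illustrative)

def find_storm_clusters(rain_rate):
    """Boolean masks of contiguous heavy-rain clusters above the size cutoff."""
    heavy = rain_rate >= RAIN_THRESHOLD_MM_H
    labels, n = ndimage.label(heavy)     # 4-connected components on the 2D grid
    return [labels == i for i in range(1, n + 1)
            if np.count_nonzero(labels == i) >= MIN_CELLS]

# Synthetic hourly rain-rate field (mm/h): light background rain plus one intense cell
y, x = np.mgrid[0:200, 0:200]
field = 2.0 + 60.0 * np.exp(-((x - 80)**2 + (y - 120)**2) / (2 * 15.0**2))

clusters = find_storm_clusters(field)
print(f"{len(clusters)} heavy-rain cluster(s); largest spans "
      f"{max(np.count_nonzero(c) for c in clusters)} grid cells")
```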

To confirm that the models were realistic, the authors compared the simulations of recent atmospheric conditions with observations of actual conditions from radar, satellite, lightning sensors, and other sources.

The results showed that thunderstorm frequency south of the Yukon River increased from about once a year to every month during the warm season. Hourly rainfall rates increased noticeably, ranging up to 37% higher in the cores of storms. In addition, thunderstorms began appearing in regions that had not previously experienced them, such as the North Slope and West Coast.

The second paper focused on the reasons for the increase in thunderstorms. After using WRF and other models to develop a detailed representation of the atmosphere over Alaska, including temperature, water vapor, and seasonal sea ice cover, the research team applied a specialized model to trace air parcels back to their sources.

"Our goal was to determine the sources of moisture and associated changes that would fuel such a significant increase in thunderstorms over Alaska," said NCAR scientist Maria Molina, a co-author of the second study.

The results showed that moist air masses from ice-free regions of the Gulf of Alaska, Bering Sea, and Arctic Ocean will provide abundant fuel for storms. The warmer atmosphere will experience increasingly powerful thunderstorms that are more likely to organize and form large-scale clusters, increasing the potential for heavy rain and lightning.

Prein said the effects of increased storms in Alaska could be particularly severe because the landscape will be reshaped by melting permafrost and the northerly migration of boreal forests.

"The potential for flash flooding and landslides is definitely increasing, and the Arctic is becoming way more flammable," he said. "It's hard to grasp what the ecological changes will be in the future."

These modeling results from the two studies are in agreement with observed increases in thunderstorm activity in Arctic regions. The authors urged more research into other high-latitude regions to understand if they will experience similar changes.

"There's a lot of value in doing targeted regional climate model simulations that can capture smaller-scale events like thunderstorms and open the door for us to begin to understand more of the complex ways that climate change will impact many aspects of life all over the globe," said NCAR scientist Andrew Newman, a co-author of the first paper. "These two studies show the potential for the Arctic to experience previously unseen weather events in addition to traditionally highlighted changes such as sea ice loss."

Credit: 
National Center for Atmospheric Research/University Corporation for Atmospheric Research

Transforming urban systems: Toward sustainability

image: Aerial image of Patapsco River in downtown Baltimore, to illustrate an urban system.

Image: 
Will Parson/Chesapeake Bay Program

Urban areas are on the rise and changing rapidly in form and function, with spillover effects on virtually all areas of the Earth. The UN estimates that by 2050, 68% of the world's population will reside in urban areas. In the inaugural issue of npj Urban Sustainability, a new Nature Partner Journal out today, a team of leading urban ecologists outlines a practical checklist to guide interventions, strategies, and research that better position urban systems to meet urgent sustainability goals.

Co-author Steward Pickett of Cary Institute of Ecosystem Studies explains, "Urban areas shape demographics, socio-economic processes, urban form, technologies, and the environment - both near and far. As the world becomes more urbanized, what we do in cities will be key to achieving global sustainability goals. There is great potential, but achieving it will require integrating knowledge, methods, and expertise from different disciplines to advance global urban science that catalyzes discovery and innovation."

Pickett collaborated with Timon McPhearson, a Research Fellow at Cary Institute and Professor at The New School in New York City, and lead author Weiqi Zhou of the Research Center for Eco-Environmental Sciences of the Chinese Academy of Sciences in Beijing, on the paper, which is the first to bring together five leading frameworks of urban ecology. Their synthesis refines our capacity to understand urban systems, ranging from cities to urban regions, by facilitating the interdisciplinary science needed to achieve sustainability and improve human and environmental wellbeing.

The paper's authors are international leaders in advancing the key frameworks of urban ecology explored in the paper, among them: the human ecosystem, disturbance and extreme events in urban systems, resilience in cities and urban communities, dynamic heterogeneity, and the new 'continuum of urbanity' that describes interactions and flows in urban-rural-wild regions. Although these and other frameworks have been instrumental in guiding the development of urban ecology, their synthesis addresses a need for greater conceptual comprehensiveness and unification.

Pickett explains, "Urban ecological science is a young discipline. Because of its youth, many different methods have been proposed to unify the discipline so that it can progress more rapidly and be more accessible to urban designers, policy makers, architects, and engineers. But these various conceptual tools, theories, and approaches are seemingly quite disparate, and rarely has the overlap among them been evaluated to promote a more complete, and therefore more useful, synthesis."

Overarching global urban conditions provide the context for the frameworks: complexity, diffuseness, connectivity, and diversity. Frameworks were explored using a 'metacity' concept that conceives of urban areas, at any scale, as consisting of patches differentiated by interacting biophysical, social, and technological components. Four case studies were detailed: extreme urban heat, the role of vacant land in urban areas, green stormwater infrastructure, and new urban development in China. For the latter two, the authors provide practical examples of how the frameworks can serve as a checklist for assessing sustainability planning.

McPhearson comments, "By offering a strong interdisciplinary lens on urban systems, the integrated frameworks can counter the risk that problems prioritized by special interests may be over-simplified, that opportunistically identified themes may be pursued to the detriment of strategic choices, or that a strictly technological response to an immediate crisis may substitute for more inclusive and systemic rosters of choices."

The paper builds on more than 25 years of Cary Institute leadership in human-natural systems research. Pickett, a pioneer of American urban ecology, notes that "The long-term Baltimore Ecosystem Study has uniquely prepared this group to make the synthetic contribution represented by this paper. Cary Institute's emphasis on collaboration and synthesis, and the intellectual freedom it affords to pursue radical new directions in ecology, including deeply interdisciplinary ones, are important catalysts for the work underpinning this paper."

Credit: 
Cary Institute of Ecosystem Studies

Dingo effects on ecosystem visible from space

image: The 5,600-kilometre-long Dingo Fence is one of the longest structures in the world.

Image: 
Mike Letnic

The environmental impacts of removing dingoes from the landscape are visible from space, a new UNSW Sydney study shows.

The study, recently published in Landscape Ecology, pairs 32 years' worth of satellite imagery with site-based field research on both sides of the Dingo Fence in the Strzelecki Desert.

The researchers found that vegetation inside the fence - that is, areas without dingoes - had poorer long-term growth than vegetation in areas with dingoes.
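
Conceptually, this comparison reduces to averaging vegetation cover on each side of the fence for every image in a multi-decade stack. A minimal sketch, assuming a pre-computed fractional-cover array and a boolean fence-side mask (both invented here for illustration, not the study's data), might look like this:

```python
# Minimal sketch (illustrative only): compare long-term mean vegetation
# cover on the two sides of the Dingo Fence from an annual image stack.
import numpy as np

years = np.arange(1988, 2020)                       # 32 years of imagery
ny, nx = 500, 500                                   # assumed grid size
cover = np.random.rand(len(years), ny, nx)          # fractional cover, 0-1
inside_fence = np.zeros((ny, nx), dtype=bool)       # True = dingo-free side
inside_fence[:, nx // 2:] = True                    # toy split along the fence

# Mean cover per year on each side of the fence.
cover_inside = cover[:, inside_fence].mean(axis=1)
cover_outside = cover[:, ~inside_fence].mean(axis=1)

for year, ci, co in zip(years, cover_inside, cover_outside):
    print(f"{year}: inside (no dingoes) {ci:.2f}  outside (dingoes) {co:.2f}")
```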

"Dingoes indirectly affect vegetation by controlling numbers of kangaroos and small mammals," says Professor Mike Letnic, senior author of the study and researcher at UNSW's Centre for Ecosystem Science.

"When dingoes are removed, kangaroo numbers increase, which can lead to overgrazing. This has follow-on effects to the entire ecosystem."

The Dingo Fence, which spans parts of Queensland, NSW and South Australia, was erected in the 1880s to keep dingoes away from livestock. At 5,600 kilometres long, it's one of the longest structures in the world.

Up until now, most dingo research has been site-based or conducted using drone imagery. But the NASA and United States Geological Survey Landsat program - which has been taking continuous images of the area since 1988 - has made landscape-wide analysis possible.

"The differences in grazing pressure on each side of the fence were so pronounced they could be seen from space," says Prof. Letnic.

The satellite images were processed and analysed by Dr Adrian Fisher, a remote sensing specialist at UNSW Science and lead author of the study. He says the vegetation's response to rainfall is one of the key differences between areas with, and without, dingoes.

"Vegetation only grows after rainfall, which is sporadic in the desert," says Dr Fisher.

"While rainfall caused vegetation to grow on both sides of the fence, we found that vegetation in areas without dingoes didn't grow as much - or cover as much land - as areas outside the fence."

A domino effect

Apex predators play an important role in maintaining the biodiversity of an ecosystem.

Removing them from an area can trigger a domino effect for the rest of the ecosystem - a process called trophic cascade.

For example, an increase in kangaroo populations can lead to overgrazing, which in turn reduces vegetation and damages the quality of the soil. Less vegetation can hinder the survival of smaller animals, like the critically endangered Plains Wanderer.

Changes to vegetation triggered by the removal of dingoes have also been shown to reshape the desert landscape by altering wind flow and sand movement.

"The removal of apex predators can have far-reaching effects on ecosystems that manifest across very large areas," says Prof. Letnic. "These effects have often gone unnoticed because large predators were removed from many places a long time ago.

"The Australian dingo fence - which is a sharp divide between dingo and non-dingo areas - is a rare opportunity to observe the indirect role of an apex predator."

A harsh, dry landscape

Satellite imagery analysis has traditionally measured only photosynthesizing vegetation - that is, plants, trees and grass that are visibly green.

But the researchers used a model to factor in non-green vegetation, like shrubs, dry grasses, twigs, branches and leaf litter.

"Non-photosynthesizing vegetation has a different reflectance spectrum to photosynthesizing vegetation," says Dr Fisher.

"By using the satellite image and a calibrated scientific model, we were able to estimate the non-green vegetation cover - which is especially important when studying a desert landscape." The model was developed by the Joint Remote Sensing Research Program, a collaborative group that includes UNSW.

While there are other contributing factors to the difference in vegetation - for example, differing rainfall patterns and land use - the satellite imagery and site analysis showed dingoes played a central role.

"There were clear differences in landscape on either side of the dingo fence," says Dr Fisher. "Dingoes may not be the whole explanation, but they are a key part of it."

Harnessing satellite intelligence

Satellite image technology is a powerful tool for assessing not only the large-scale role of dingoes, but all kinds of environmental change.

In 2019, researchers from UNSW Engineering used powerful satellite radar imaging technology to map severe floods in near real-time - intelligence that could help emergency services make tactical decisions during extreme weather events.

Dr Fisher hopes to next use Landsat imagery - which is freely available to download - to study how different amounts of vegetation can influence bushfire frequency.

"Our study is an example of how satellite technology can be used in big picture environmental research," says Dr Fisher.

"With over three decades' worth of data, this technology has opened up so many research possibilities."

Credit: 
University of New South Wales