Tech

Brain cell network supplies neurons with energy

image: When an astrocyte in the thalamus is filled with a dye, it diffuses into neighboring cells of the network (red). These include many oligodendrocytes (green), as shown by the overlay (B3, yellow).

Image: 
© Group Steinhäuser; from: Cerebral Cortex, January 2018;28: 213-222; doi: 10.1093/cercor/bhw368

The human brain has about as many neurons as glial cells. These are divided into four major groups: the microglia, the astrocytes, the NG2 glial cells, and the oligodendrocytes. Oligodendrocytes function primarily as a type of cellular insulating tape: They form long tendrils, which consist largely of fat-like substances and do not conduct electricity. These wrap around the axons, which are the extensions through which the nerve cells send their electrical impulses. This prevents short circuits and accelerates signal forwarding.

Astrocytes, on the other hand, supply the nerve cells with energy: Through their appendages they come into contact with blood vessels and absorb glucose from these. They then transport it to the interfaces between two neurons, the synapses. Before that, they partially convert the sugar into other energy-rich molecules. "We have now been able to show that oligodendrocytes play an important role in the distribution of these compounds," explains Prof. Christian Steinhäuser from the Institute of Cellular Neurosciences at the University of Bonn (Germany). "This is apparently especially true in a particular brain region, the thalamus."

Huge supply network

The thalamus is also called the "gateway to consciousness". The sensory signals it receives include those from the ears, eyes, and skin. It then forwards them to the respective responsible centers of the cerebral cortex. Only then do we become aware of this information, for instance the sound of an instrument.

It has long been known that astrocytes can form close connections: They build intercellular networks through tunnel-like coupling. Molecules can migrate from one cell to another through these "gap junctions". A few years ago, Steinhäuser and his colleagues were able to show that there are also oligodendrocytes in these networks in the thalamus, about as many as astrocytes. The cells form a huge network in this way, which neuroscientists also call a "panglial network" ("pan" comes from Greek and means "comprehensive"). In other regions, however, the networks consist predominantly of coupled astrocytes. "We wanted to know why this is different here," explains Dr. Camille Philippot of Steinhäuser's research group, who conducted much of the work. "Our results demonstrate that the high-energy compounds travel through this network from the blood vessels to the synapses," Philippot emphasizes. "And oligodendrocytes seem to be indispensable in this process."

The researchers demonstrated this, for instance, in mice whose oligodendrocytes cannot participate in the network because they lack the appropriate tunnels. In these mice, energy molecules no longer reached the synapses in sufficient quantities. The same was true if the astrocytes lacked the appropriate connecting links. "The thalamus apparently requires both cell types for transport," Steinhäuser concludes.

Starved neurons cannot communicate

The researchers were also able to show the consequences of such a disrupted energy supply for neuronal information processing. The synapses are where two neurons meet - a sender cell and a receiver cell. When a pulse from the sender cell arrives at the synapse, it releases messenger molecules into the synaptic cleft. These neurotransmitters dock onto the recipient cell and trigger electrical signals there, the postsynaptic potentials. When these signals are generated, potassium and sodium ions pass through the membrane of the recipient cell - sodium ions inward, potassium ions outward. These, like the neurotransmitters, must then be pumped back again. "And for that, the neurons need energy," explains Steinhäuser, who is also a member of the Transdisciplinary Research Area "Life and Health" at the University of Bonn. "When energy is lacking, pumping activity ceases." In the experiments, "starved" neurons were therefore no longer able to generate postsynaptic activity after just a few minutes.

Credit: 
University of Bonn

Light-controlled Higgs modes found in superconductors; potential sensor, computing uses

image: This illustration shows light at trillions of pulses per second (red flash) accessing and controlling Higgs modes (gold balls) in an iron-based superconductor. Even at different energy bands, the Higgs modes interact with each other (white smoke).

Image: 
Illustration courtesy of Jigang Wang/Iowa State University

AMES, Iowa - Even if you weren't a physics major, you've probably heard something about the Higgs boson.

There was the title of a 1993 book by Nobel laureate Leon Lederman that dubbed the Higgs "The God Particle." There was the search for the Higgs particle that launched after 2009's first collisions inside the Large Hadron Collider in Europe. There was the 2013 announcement that Peter Higgs and Francois Englert won the Nobel Prize in Physics for independently theorizing in 1964 that a fundamental particle - the Higgs - is the source of mass in subatomic particles, making the universe as we know it possible.

(Plus, there are the Iowa State University physicists on the author list of a 2012 research paper describing how the ATLAS Experiment at the collider observed a new particle later confirmed to be the Higgs.)

And now Jigang Wang, a professor of physics and astronomy at Iowa State and a senior scientist at the U.S. Department of Energy's Ames Laboratory, and a team of researchers have discovered a form of the famous particle within a superconductor, a material capable of conducting electricity without resistance, generally at very cold temperatures.

Wang and his collaborators - including Chang-Beom Eom, the Raymond R. Holton Chair for Engineering and Theodore H. Geballe Professor at the University of Wisconsin-Madison; Ilias Perakis, professor and chair of physics at the University of Alabama at Birmingham; and Eric Hellstrom, professor and interim chair of mechanical engineering at Florida State University - report the details in a paper recently published online by the journal Nature Communications.

They write that in lab experiments they've found a short-lived "Higgs mode" within iron-based, high-temperature (but still very cold), multi-energy band, unconventional superconductors.

A quantum discovery

This Higgs mode is a state of matter found at the quantum scale of atoms, their electronic states and energetic excitations. The mode can be accessed and controlled by laser light flashing on the superconductor at terahertz frequencies of trillions of pulses per second. The Higgs modes can be created within different energy bands and still interact with each other.

Wang said this Higgs mode within a superconductor could potentially be used to develop new quantum sensors.

"It's just like the Large Hadron Collider can use the Higgs particle to detect dark energy or antimatter to help us understand the origin of the universe," Wang said. "And our Higgs mode sensors on the table-top have the potential help us discover the hidden secrets of quantum states of matter."

That understanding, Wang said, could advance a new "quantum revolution" for high-speed computing and information technologies.

"It's one way this exotic, strange, quantum world can be applied to real life," Wang said.

Light control of superconductors

The project takes a three-pronged approach to accessing and understanding the special properties, such as this Higgs mode, hidden within superconductors:

Wang's research group uses a tool called quantum terahertz spectroscopy to visualize and steer pairs of electrons moving through a superconductor. The tool uses laser flashes as a control knob to accelerate supercurrents and access new and potentially useful quantum states of matter.

Eom's group developed the synthesis technique that produces crystalline thin films of the iron-based superconductor with high enough quality to reveal the Higgs mode. Hellstrom's group developed deposition sources for the iron-based superconducting thin film development.

Perakis' group led the development of quantum models and theories to explain the results of the experiments and to simulate the salient features that come from the Higgs mode.

The work has been supported by a grant to Wang from the National Science Foundation and grants to Eom and Perakis from the U.S. Department of Energy.

"Interdisciplinary science is the key here," Perakis said. "We have quantum physics, materials science and engineering, condensed matter physics, lasers and photonics with inspirations from fundamental, high-energy and particle physics."

There are good, practical reasons for researchers in all those fields to work together on the project. In this case, students from the four research groups worked together with their advisors to accomplish this discovery.

"Scientists and engineers," Wang wrote in a research summary, "have recently come to realize that certain materials, such as superconductors, have properties that can be exploited for applications in quantum information and energy science, e.g., processing, recording, storage and communication."

Credit: 
Iowa State University

Fastener with microscopic mushroom design holds promise

image: A fastener with microscopic mushroom shapes could be as strong as Velcro but with less noise and less damage to other fabrics, researchers say.

Image: 
Preeti Sharma

WASHINGTON, January 19, 2021 -- A Velcro-like fastener with a microscopic design that looks like tiny mushrooms could mean advances for everyday consumers and scientific fields like robotics.

In Biointerphases, published by AIP Publishing, researchers from Wageningen University in the Netherlands show how the design can use softer materials and still be strong enough to work.

Probabilistic fasteners work because they are designed with a tiny pattern on one surface that interlocks with features on the other surface. Currently available fasteners, like Velcro and 3M, are called hook and loop fasteners. That design requires harder, stiffer material, which is what causes the loud ripping sound when they are peeled off and why they can damage delicate surfaces, such as fabrics, when attached to them.

The team believes a 3D mushroom design can be made with softer, more flexible materials. The half-spherical mushroom shapes provide sufficient interlocking force on the fabric and hold strong.

For the study, the authors used 3D printing combined with molding to create soft surfaces patterned with the tiny mushrooms. That material was then safely attached to three different fabrics and removed without causing damage to them.

"We wanted to prove that, if you go toward these less stiff features, they can be used to attach and detach to soft and delicate surfaces, like fabrics, without damage. It can be used in many applications such as for diapers or silent fasteners for military use," author Preeti Sharma said. "There is still a lot of research to be done, but the mushroom-shaped design worked quite well for soft mechanical fasteners."

The design could lead to advances in the field of soft robotics. Soft robotics aims to build robots with designs that mimic living creatures like octopuses, caterpillars, and worms.

In that kind of robotics, interfaces play a significant role. With advances that make the current mushroom design stronger but keep its softness, it could be used to help robots walk on walls and ceilings like a gecko -- an animal that can do that because of an attachment-detachment process that's similar to how probabilistic fasteners work.

The design also could be used on grippers for robots used in farming and other agricultural jobs, Sharma said.

Sharma said more research into the design is needed before it is ready to be used in a commercially available product. Minor changes to the mushroom shape, possibly lengthening or shortening it to make it more effective, could lead to an even better product, she said.

Credit: 
American Institute of Physics

Land deals meant to improve food security may have hurt

Large-scale land acquisitions by foreign investors, intended to improve global food security, had little to no benefit, increasing crop production in some areas while simultaneously threatening local food security in others, according to researchers who studied their effects.

The study, published in the Proceedings of the National Academy of Sciences and led by the University of Notre Dame, combined satellite imagery with agricultural surveys and household dietary datasets covering 160 large-scale land acquisitions across four continents between 2005 and 2015. It is the first comprehensive global analysis of its kind on the impact of such land acquisitions.

"These land deals have been happening for the last two decades on a massive scale," said Marc Muller, assistant professor in the Department of Civil and Environmental Engineering and Earth Sciences at Notre Dame and lead author of the study. "Our goal was to use empirical data to sort out whether or not large-scale land acquisitions have improved food security by using empirical data. But what we found was that there was either no impact or a negative impact. There was no positive impact."

Following a global food crisis during the early 2000s, foreign investors purchased more than 220 million acres of land in middle-income and developing countries, according to the study's estimates, to increase crop production and contribute to the global food supply.

"In many countries throughout the world land is being commodified, so it is becoming easier to buy and sell land. Those, and rising food prices, were drivers for these companies," Muller said.

There are two competing arguments when it comes to land acquisitions. Proponents view the multinational companies that purchased the land as better positioned to improve production and increase crop yields. But those who oppose argue that the acquisitions encroach on natural resources, lead to displacement of local farm workers and can have a negative impact on local residents -- including giving rise to livelihood losses, social instability and/or violence in those regions.

While scientists have analyzed these types of acquisitions using modeling studies, and others have looked at specific situations as a result of the land deals through case studies, Muller said this is the first global analysis of this scale.

Muller and his team analyzed land deals across Latin America, eastern Europe, Africa and Asia. By combining satellite imagery, researchers could see whether crop lands expanded and/or intensified. "We also used data from agriculture surveys to identify what types of crops had been planted in and around those lands prior to the acquisition compared to after, to account for potential transitions from local crops to export-bound crops, and crops that can also be used for biofuel," such as palm oil and sugar cane, Muller said.
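
As a rough illustration of this kind of before-and-after comparison, and not the study's actual pipeline or data, the short sketch below contrasts the crop mix around a hypothetical acquisition site before and after a deal; the crops, shares, and column names are all invented.

```python
# Hypothetical before/after comparison of crop mix around one acquisition site.
# Illustrative only; the study's real analysis combines satellite and survey data.
import pandas as pd

surveys = pd.DataFrame({
    "period": ["before", "before", "after", "after"],
    "crop":   ["cassava", "maize", "oil palm", "sugar cane"],
    "share":  [0.55, 0.45, 0.70, 0.30],   # fraction of surveyed cropland
})

mix = surveys.pivot_table(index="crop", columns="period", values="share", fill_value=0)
print(mix)   # shows local staples replaced by export-bound / flex crops after the deal
```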

According to the study, trends differed depending on the region -- and in some cases the acquisitions had a negative effect on household diets.

In Latin America and eastern Europe -- where countries are considered middle-income -- investors purchased land in intensified agricultural areas, where crops were already export-bound and local residents already consumed food from global markets. "So, in that sense, these land deals didn't really change much," said Muller. "They didn't increase crop production, and they didn't do more damage to local food security than what was already taking place. In Africa and Asia, things looked very different."

The research showed that those land acquisitions increased cropland, cultivating previously uncultivated land, and showed a clear transition from local staple crops such as tapioca to export-bound crops such as wheat and flex crops for potential use as biofuel.

"These crops are interesting for investors because if the price of food is low and the price of energy is high, you can then use the crops for energy," said Muller. "But these types of crops are not nutrient dense, so it's not great in terms of food security. As a matter of fact, the data from the household surveys we studied showed a consistent decrease in diet diversity after the deals took place."

The study is the first in a series the research team will produce based on their analysis. Forthcoming studies will look at the impact of large-scale land acquisitions in relation to water, energy and environment.

Credit: 
University of Notre Dame

Astronomers dissect the anatomy of planetary nebulae using Hubble Space Telescope images

image: On the left is an image of the Jewel Bug Nebula (NGC 7027) captured by the Hubble Space Telescope in 2019 and released in 2020. Further analysis by researchers produced the RGB image on the right, which shows extinction due to dust, as inferred from the relative strength of two hydrogen emission lines, as red; emission from sulfur, relative to hydrogen, as green; and emission from iron as blue.

Image: 
STScI, Alyssa Pagan

Images of two iconic planetary nebulae taken by the Hubble Space Telescope are revealing new information about how they develop their dramatic features. Researchers from Rochester Institute of Technology and Green Bank Observatory presented new findings about the Butterfly Nebula (NGC 6302) and the Jewel Bug Nebula (NGC 7027) at the 237th meeting of the American Astronomical Society on Friday, Jan. 15.

Hubble's Wide Field Camera 3 observed the nebulae in 2019 and early 2020 using its full, panchromatic capabilities, and the astronomers involved in the project have been using emission line images from near-ultraviolet to near-infrared light to learn more about their properties. The studies were first-of-their-kind panchromatic imaging surveys designed to understand the formation process and test models of binary-star-driven planetary nebula shaping.

"We're dissecting them," said Joel Kastner, a professor in RIT's Chester F. Carlson Center for Imaging Science and School of Physics and Astronomy. "We're able to see the effect of the dying central star in how it's shedding and shredding its ejected material. We're able to see that material that the central star has tossed away is being dominated by ionized gas, where it's dominated by cooler dust, and even how the hot gas is being ionized, whether by the star's UV or by collisions caused by its present, fast winds."

Kastner said analysis of the new HST images of the Butterfly Nebula is confirming that the nebula was ejected only about 2,000 years ago, an eyeblink by the standards of astronomy, and that the S-shaped iron emission that helps give it its "wings" of gas may be even younger. Surprisingly, the team found that the star astronomers previously believed to be the nebula's central star is in fact not associated with the nebula and lies much closer to Earth. Kastner said he hopes that future studies with the James Webb Space Telescope could help locate the actual central star.

The team's ongoing analysis of the Jewel Bug Nebula is built on a 25-year baseline of measurements dating back to early Hubble imaging. Paula Moraga Baez, an astrophysical sciences and technology Ph.D. student from DeKalb, Ill., called the nebula "remarkable for its unusual juxtaposition of circularly symmetric, axisymmetric, and point-symmetric (bipolar) structures." Moraga noted, "The nebula also retains large masses of molecular gas and dust despite harboring a hot central star and displaying high excitation states."

Jesse Bublitz '20 Ph.D. (astrophysical sciences and technology), now a postdoctoral researcher at Green Bank Observatory, has continued analysis of NGC 7027 with radio images from the Northern Extended Millimeter Array (NOEMA) Telescope, where he identified molecular tracers of ultraviolet and X-ray light that continue to shape the nebula. The combined observations from telescopes at other wavelengths, like Hubble, and bright molecules CO+ and HCO+ from NOEMA indicate how different regions of NGC 7027 are affected by the irradiation of its central star.

"We're very excited about these findings," said Bublitz. "We had hoped to find structure that clearly showed CO+ and HCO+ spatially coincident or entirely in distinctive regions, which we did. This is the first map of NGC 7027, or any planetary nebula, in the molecule CO+, and only the second CO+ map of any astronomical source."

Credit: 
Rochester Institute of Technology

How to train a robot (using AI and supercomputers)

image: Examples of 3D point clouds synthesized by the progressive conditional generative adversarial network (PCGAN) for an assortment of object classes. PCGAN generates both geometry and color for point clouds, without supervision, through a coarse to fine training process. [Credit: William Beksi, Mohammad Samiul Arshad, UT Arlington]

Image: 
[William Beksi, UT Arlington]

Before he joined the University of Texas at Arlington as an Assistant Professor in the Department of Computer Science and Engineering and founded the Robotic Vision Laboratory there, William Beksi interned at iRobot, the world's largest producer of consumer robots (mainly through its Roomba robotic vacuum).

To navigate built environments, robots must be able to sense and make decisions about how to interact with their locale. Researchers at the company were interested in using machine and deep learning to train their robots to learn about objects, but doing so requires a large dataset of images. While there are millions of photos and videos of rooms, none were shot from the vantage point of a robotic vacuum. Efforts to train using images with human-centric perspectives failed.

Beksi's research focuses on robotics, computer vision, and cyber-physical systems. "In particular, I'm interested in developing algorithms that enable machines to learn from their interactions with the physical world and autonomously acquire skills necessary to execute high-level tasks," he said.

Years later, now with a research group including six PhD computer science students, Beksi recalled the Roomba training problem and began exploring solutions. A manual approach, used by some, involves using an expensive 360-degree camera to capture environments (including rented Airbnb houses) and custom software to stitch the images back into a whole. But Beksi believed the manual capture method would be too slow to succeed.

Instead, he looked to a form of deep learning known as generative adversarial networks, or GANs, where two neural networks contest with each other in a game until the 'generator' of new data can fool a 'discriminator.' Once trained, such a network would enable the creation of an infinite number of possible rooms or outdoor environments, with different kinds of chairs or tables or vehicles with slightly different forms, but still -- to a person and a robot -- identifiable objects with recognizable dimensions and characteristics.
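
For readers who want to see the adversarial setup in code, here is a minimal, generic sketch of a generator/discriminator training loop in PyTorch. It is not the architecture used in Beksi's lab; the toy data, layer sizes, and hyperparameters are placeholder assumptions chosen only to illustrate the game between the two networks.

```python
# Minimal GAN training loop (illustrative sketch only, not the PCGAN architecture).
# The "real" data is a toy 2D Gaussian; layer sizes and hyperparameters are arbitrary.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0          # samples from the "real" distribution
    fake = generator(torch.randn(64, latent_dim))   # generator's attempt to imitate them

    # Discriminator step: learn to tell real from fake.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```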

"You can perturb these objects, move them into new positions, use different lights, color and texture, and then render them into a training image that could be used in dataset," he explained. "This approach would potentially provide limitless data to train a robot on."

"Manually designing these objects would take a huge amount of resources and hours of human labor while, if trained properly, the generative networks can make them in seconds," said Mohammad Samiul Arshad, a graduate student in Beksi's group involved in the research.

GENERATING OBJECTS FOR SYNTHETIC SCENES

After some initial attempts, Beksi realized his dream of creating photorealistic full scenes was presently out of reach. "We took a step back and looked at current research to determine how to start at a smaller scale - generating simple objects in environments."

Beksi and Arshad presented PCGAN, the first conditional generative adversarial network to generate dense colored point clouds in an unsupervised mode, at the International Conference on 3D Vision (3DV) in Nov. 2020. Their paper, "A Progressive Conditional Generative Adversarial Network for Generating Dense and Colored 3D Point Clouds," shows their network is capable of learning from a training set (derived from ShapeNetCore, a CAD model database) and mimicking a 3D data distribution to produce colored point clouds with fine details at multiple resolutions.

"There was some work that could generate synthetic objects from these CAD model datasets," he said. "But no one could yet handle color."

In order to test their method on a diversity of shapes, Beksi's team chose chairs, tables, sofas, airplanes, and motorcycles for their experiment. The tool allows the researchers to access the near-infinite number of possible versions of the set of objects the deep learning system generates.

"Our model first learns the basic structure of an object at low resolutions and gradually builds up towards high-level details," he explained. "The relationship between the object parts and their colors -- for examples, the legs of the chair/table are the same color while seat/top are contrasting -- is also learned by the network. We're starting small, working with objects, and building to a hierarchy to do full synthetic scene generation that would be extremely useful for robotics."

They generated 5,000 random samples for each class and performed an evaluation using a number of different methods. They evaluated both point cloud geometry and color using a variety of common metrics in the field. Their results showed that PCGAN is capable of synthesizing high-quality point clouds for a disparate array of object classes.
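
The release does not list the specific metrics used; one geometry metric commonly applied to generated point clouds is the Chamfer distance, sketched below as a plausible example of that kind of evaluation rather than the paper's exact suite.

```python
# Symmetric Chamfer distance between two point clouds, a common geometry metric
# for generated 3D shapes. Illustrative only; the paper's metric suite may differ.
import numpy as np

def chamfer_distance(a, b):
    """a: (N, 3) and b: (M, 3) arrays of xyz coordinates."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.random.rand(500, 3)
b = a + 0.01 * np.random.randn(500, 3)   # slightly perturbed copy
print(chamfer_distance(a, b))            # small value for similar clouds
```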

SIM2REAL

Another issue that Beksi is working on is known colloquially as 'sim2real.' "You have real training data, and synthetic training data, and there can be subtle differences in how an AI system or robot learns from them," he said. "'Sim2real' looks at how to quantify those differences and make simulations more realistic by capturing the physics of that scene - friction, collisions, gravity -- and by using ray or photon tracing."

The next step for Beksi's team is to deploy the software on a robot, and see how it works in relationship to the sim-to-real domain gap.

The training of the PCGAN model was made possible by TACC's Maverick 2 deep learning resource, which Beksi and his students were able to access through the University of Texas Cyberinfrastructure Research (UTRC) program, which provides computing resources to researchers at any of the UT System's 14 institutions.

"If you want to increase resolution to include more points and more detail, that increase comes with an increase in computational cost," he noted. "We don't have those hardware resources in my lab, so it was essential to make use of TACC to do that."

In addition to computation needs, Beksi required extensive storage for the research. "These datasets are huge, especially the 3D point clouds," he said. "We generate hundreds of megabytes of data per second; each point cloud is around 1 million points. You need an enormous amount of storage for that."

While Beksi says the field is still a long way from having really good, robust robots that can operate autonomously for long periods of time, such robots would benefit multiple domains, including health care, manufacturing, and agriculture.

"The publication is just one small step toward the ultimate goal of generating synthetic scenes of indoor environments for advancing robotic perception capabilities," he said.

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

Appearance, social norms keep students off Zoom cameras

ITHACA, N.Y. - When the semester shifted online amid the COVID-19 pandemic last spring, Cornell University instructor Mark Sarvary and his teaching staff decided to encourage - but not require - students to switch on their cameras.

It didn't turn out as they'd hoped.

"Most of our students had their cameras off," said Sarvary, director of the Investigative Biology Teaching Laboratories in the College of Agriculture and Life Sciences (CALS).

"Students enjoy seeing each other when they work in groups. And instructors like seeing students, because it's a way to assess whether or not they understand the material," Sarvary said. "When we switched to online learning, that component got lost. We wanted to investigate the reasons for that."

Sarvary and co-instructor Frank Castelli, a CALS Active Learning Initiative education postdoctoral researcher, surveyed the 312 students in the class at the end of the semester to figure out why they weren't using their cameras - and to try to come up with ways to turn that trend around.

They found that while some students had concerns about the lack of privacy or their home environment, 41% of the 276 respondents cited their appearance, and more than half of those who selected "other" as their reason for keeping their camera off explained that it was the norm. This suggested that explicitly encouraging camera use could boost participation without adverse effects, the researchers said.

"We felt it would create an undue burden and add stress in an already stressful time to require the cameras to be on, and we found this could disproportionately affect certain groups of students, such as underrepresented minorities," said Castelli, first author of "Why Students Do Not Turn on Their Video Cameras During Online Classes and an Equitable and Inclusive Plan to Encourage Them to Do So," which published Jan. 10 in Ecology and Evolution.

In the survey, Castelli and Sarvary found that among underrepresented minorities, 38% said they were concerned about other people being seen behind them, and 26% were concerned about their physical location being visible; while among non-underrepresented minorities, 24% were worried about people behind them and 13% about their physical locations.

"It's a more inclusive and equitable strategy to not require the cameras but to instead encourage them, such as through active learning exercises," Castelli said. "This has to be done carefully so it doesn't create an environment where you're making those without cameras on feel excluded. But at the same time, if you don't explicitly ask for the cameras and explain why, that can lead to a social norm where the camera is always off. And it becomes a spiral of everyone keeping it off, even though many students want it on."

Establishing camera use as the norm, explaining the reasons that cameras improve the class and employing active learning techniques and icebreakers, such as beginning each class with a show-and-tell, are techniques that could boost participation, the authors suggested in the study.

"Active learning plays an important role in online learning environments," Sarvary said. "Students may feel more comfortable turning on their cameras in breakout rooms. Polling software or Zoom chats are alternatives that can help the instructor assess student learning, even without seeing nodding or smiling or confused expressions."

The authors also suggested instructors address potential distractions, give breaks to help maintain attention, and poll their students to learn about other potential barriers to camera use or participation.

Though they have not yet formally studied the effect, the instructors in the 24 sections of the laboratory class all observed improved camera participation when they used some of these strategies last fall.

"We wanted to develop an engaging and inclusive virtual learning environment, using the best pedagogical methods," Sarvary said. "That's why we wanted to know why the students are not turning their cameras on, rather than just assuming or, as some instructors do, requiring them to turn their cameras on. We wanted to take an education research approach and figure out the best practices."

Credit: 
Cornell University

Protected areas vulnerable to growing emphasis on food security

image: The image of a female Asian elephant in a tea plantation on the fringes of Kaziranga National Park in India, bordering the Eastern Himalaya biodiversity hotspot, exemplifies potential impacts to endangered species conservation in cropland-impacted parks. The elephant in the image is mock charging at rescuers extracting her calf from a trench at the edge of the field.

Image: 
Image courtesy of Sashanka Barbaruah-Wildlife Trust of India

Protected areas are critical to mitigating extinction of species; however, they may also be in conflict with efforts to feed the growing human population. A new study shows that 6% of all global terrestrial protected areas are already made up of cropland, a heavily modified habitat that is often not suitable for supporting wildlife. Worse, 22% of this cropland occurs in areas supposedly enjoying the strictest levels of protection, the keystone of global biodiversity protection efforts.

This finding was published in the Proceedings of the National Academy of Sciences by researchers at the University of Maryland's National Socio-Environmental Synthesis Center (SESYNC) and the National Institute for Mathematical and Biological Synthesis (NIMBioS) at the University of Tennessee. In order to comprehensively examine global cropland impacts in protected areas for the first time, the authors synthesized a number of remotely sensed cropland estimates and diverse socio-environmental datasets.

The persistence of many native species -- particularly habitat specialists (species that depend on a narrow set of natural systems), rare species, and threatened species -- is incompatible with conversion of habitat to cropland, thus compromising the primary conservation goal of these protected areas. Guided by the needs of conservation end users, the researchers developed an approach that provides an important benchmark and reproducible methods for rapid monitoring of cropland in protected areas.

"Combining multiple remote sensing approaches with ongoing inventory and survey work will
allow us to better understand the impacts of conversion on different taxa," says lead author
Varsha Vijay, a conservation scientist who was a postdoctoral fellow at SESYNC while working
on the study. "Cropland in biodiversity hotspots warrant particularly careful monitoring. In many
of these regions, expanding cropland to meet increasing food demand exposes species to both
habitat loss and increased human-wildlife conflict," she adds.

Countries with higher population density, lower income inequality, and higher agricultural suitability tend to have more cropland in their protected areas. Even though cropland in protected areas is most dominant in mid-northern latitudes, the tradeoffs between biodiversity and food security may be most acute in the tropics and subtropics. This increased tradeoff is due to higher levels of species richness coinciding with a high proportion of cropland-impacted protected areas.

"The findings of this study emphasize the need to move beyond area-based conservation
targets and develop quantitative measures to improve conservation outcomes in protected
areas, especially in areas of high food insecurity and biodiversity" says Lucas Joppa, chief
environmental officer of Microsoft, who has published numerous papers on the topic of
protected area effectiveness but who was not an author on the study.
2021 is a historic "Year of Impact," when many countries and international agencies are
developing new decadal targets for biodiversity conservation and protected areas. As countries
aim to meet these goals and the 2030 Sustainable Development Goals, there is an increasing
need to understand synergies and tradeoffs between these goals in order to ensure a more
sustainable future. Studies such as these offer insights for protected area planning and
management, particularly as future protected areas expand into an agriculturally dominated
matrix. Though the study reveals many challenges for the future, it also reveals potential
scenarios for restoration in mid-northern latitudes and for cooperation between conservation
and food programs in regions with both high levels of food insecurity and biodiversity.

"Despite clear connections between food production and biodiversity, conservation and
development planning are still often treated as independent processes," says study co-author
Paul Armsworth from the University of Tennessee. "Rapid advances in data availability provide
exciting opportunities for bringing the two processes together," adds Vijay.

Credit: 
University of Maryland

New approach emerges to better classify, treat brain tumors

image: Dr. Jin-Xiong She and MD/PhD student Paul Tran.

Image: 
Kim Ratliff, Augusta University Photographer

AUGUSTA, Ga. (Jan. 19, 2021) - A look at RNA tells us what our genes are telling our cells to do, and scientists say looking directly at the RNA of brain tumor cells appears to provide objective, efficient evidence to better classify a tumor and the most effective treatments.

Gliomas, the most common brain tumor type in adults, have a wide range of possible outcomes and three subtypes, from the generally more treatable astrocytomas and oligodendrogliomas to the typically more lethal glioblastomas.

Medical College of Georgia scientists report in the journal Scientific Reports that their method, which produces what is termed a transcriptomic profile of the tumor, is particularly adept at recognizing some of the most serious of these tumors, says Paul M.H. Tran, MD/PhD student.

Gliomas are currently classified through histology, primarily the shape, or morphology, pathologists see when they look at the cancerous cells under a microscope, as well as identification of known cancer-causing gene mutations present.

"We are adding a third method," says Dr. Jin-Xiong She, director of the MCG Center for Biotechnology and Genomic Medicine, Georgia Research Alliance Eminent Scholar in Genomic Medicine and the study's corresponding author. Tran, who is doing his PhD work in She's lab, is first author.

While most patients have both of the current classification methods performed, the two sometimes yield inconsistent findings, such as traditional pathology identifying a glioblastoma when the mutation study does not, and vice versa; even two pathologists looking at the same brain tumor cells under a microscope can disagree, the scientists say.

To look more directly at what a cancer cell is up to, they opted to examine relatively unexplored gene expression, more specifically the RNA one step downstream, which indicates where the cell is headed. Gene expression is read out as RNA: DNA makes RNA, which makes proteins, which determine cell function. One way cancer thrives is by altering gene expression, turning some genes up and others way down or off.

They suspected the new approach would provide additional insight about the tumor, continue to assess the efficacy of existing classification methods and likely identify new treatment targets.

"RNA would be a snapshot of what is high and what is low currently in those glial cells as they are taken out of the body," Tran says. "They are actually looking at how many copies of RNA relevant genes are making. Normally that gene expression determines everything from your hair color to how much you weigh," She says. "The transcriptomic profile counts the number of copies of each gene you have in the cell."

The glial cells, whose job is to support neurons, have a tightly regulated gene expression that enables them to do just that. With cancer, one of the first things that happens is that the number of RNA copies the cells make of each gene changes, and important cell functions change with it. "You change gene expression to become something different," She says.

Transcriptomic profiling starts like the other methods with a tumor sample from the surgeon, but then it goes through an automated process to extract RNA, which is put into an instrument that can read expression levels for the different genes. The massive amounts of data generated are then fed into a machine learning algorithm Tran developed, which computes the most likely glioma subtype and an associated prognosis.
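
The release does not detail the algorithm itself. As a loose illustration of the general idea of grouping patients purely by expression patterns, the sketch below clusters a synthetic gene-expression matrix into three putative subtypes with scikit-learn; the data, dimensions, and clustering method are stand-ins, not the MCG team's model.

```python
# Toy illustration of unsupervised subtyping from a gene-expression matrix.
# Real transcriptomic classifiers are far more involved; the data here is synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 300 patients x 5000 genes, with three artificial expression programs mixed in
expression = rng.normal(size=(300, 5000))
for k, rows in enumerate(np.array_split(np.arange(300), 3)):
    expression[rows, k * 50:(k + 1) * 50] += 3.0   # subtype-specific genes turned up

X = StandardScaler().fit_transform(expression)               # normalize each gene
X = PCA(n_components=20, random_state=0).fit_transform(X)    # reduce noise and dimension
subtype = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(subtype))   # number of patients assigned to each putative subtype
```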

They started with The Cancer Genome Atlas (TCGA) program and the Repository of Molecular Brain Neoplasia Data (REMBRANDT), two datasets that had already done the work of looking at RNA and also provided related clinical information, including outcomes on more than 1,400 patients with gliomas. Tran, She and their colleagues used their algorithm to discover patterns of gene expression and used those patterns to classify all glioma patients without any other input. They then compared the three major glioma subtypes that emerged with standard classification methods.

Their transcriptomic classification had about 90% agreement with the traditional approach looking at cells under a microscope and about 93% agreement with looking at genetic mutations, She says. They found about a 16% discrepancy between the two standard measures.

"All three methods don't agree on about 10-15% of patients," She says, but notes the most accurate analysis among the three should be theirs because their method is better than the others at predicting survival.

And the discrepancies they found between classification methods could be significant for some patients despite close percentages.

"We found our method may have some advantages because we found some patients actually had a worse prognosis that could be identified by our method, but not by the other approaches," Tran says.

As an example, patients with a mutation in a gene called IDH, or isocitrate dehydrogenase, most typically have an astrocytoma or oligodendroglioma, which are generally more responsive to treatment and have better survival rates than glioblastomas. However they also found that even some lower-grade gliomas with this IDH mutation can progress to what's called a secondary glioblastoma, something which may not be found by the other two methods. The IDH mutation is rare in primary glioblastomas, Tran notes.

Using the standard techniques, which look at a snapshot in time, these astrocytomas that progress to more lethal glioblastomas were classified as a less serious tumor in 27 patients. "That progression phenomenon is known but our technique is better at identifying those cases," Tran says.

Further analysis also found that about 20% of the worse-prognosis patients had mutations in the promoter region of the TERT gene. The TERT gene is best known for making telomerase, an enzyme that enables our chromosomes to stay a healthy length, a length known to decrease with age. TERT function is known to be hijacked by cancer to enable the endless cell proliferation that is a cancer hallmark. This mutation is not usually present in a glioma that starts out as a more aggressive glioblastoma, implicating mutation of the TERT promoter as important in glioma progression, they say.

"The implication would be that if we have inhibitors or something else that target the TERT gene, then you may be able to prevent some of those cases from having a worse prognosis," Tran says.

These findings also point to the strengths of the different classification methods, in this case suggesting that classification by mutation may not pick up these most aggressive brain tumors, while their new transcriptomic method, as well as the older approach of looking at the cancer cells under a microscope, is better at making this important distinction.

"It is known that a certain proportion of your lower-grade gliomas can progress to become a glioblastoma and those are some of the ones that can sometimes be misidentified by the original techniques," Tran says. "Using our gene expression method, we found them even though some of them have the IDH mutation."

All these variations have groups like the World Health Organization asking for better ways to identify poor-prognosis IDH patients, they write. Other variations include some glioblastomas with the normal IDH gene that carry one of the worst prognoses for gliomas, while a subgroup of glioblastomas that act more like astrocytes tend to carry a better prognosis.

Now that the MCG team has a better indication of which patients will have a worse prognosis, next steps include finding out why and, perhaps, what can be done about it.

In addition to accuracy of prognosis, a second way to assess a tumor classification method is whether it points you toward better treatment options, She says, which they are now moving toward. He notes that most drugs and many of our actions, like exercise and what we eat, alter RNA expression.

"Right now, if anyone gives us RNA expression data from patients anywhere in the world, we can quickly tell them which glioma subtype it most likely is," Tran says. The fact that equipment that can examine RNA expression is becoming more widely available, should make transcriptomic profiling more widely available, they say.

Gliomas are tumors of glial cells -- which include astrocytes, oligodendrocytes and microglial cells -- brain cells which outnumber neurons and whose normal job is to surround and support neurons.

Identification of IDH gene mutations in the cells has already made standard glioma classification more systematic, the scientists say. The mutation can be identified by either staining the biopsy slide or by sequencing for it.

Much progress also has been made in using machine learning to automate cancer diagnosis and subtyping and make them more objective, they write, including for glioblastomas. Glioblastomas have been characterized using transcriptome-based analysis, but not all gliomas, as in the current study.

Like most genes, the IDH gene normally has many jobs in the body, including processing glucose and other metabolites for a variety of cell types. But when mutated, it can become destructive to cells, producing factors like reactive oxygen species, which damage DNA and contribute to cancer and other diseases. These mutations can arise with age and/or environmental exposures. IDH inhibitors are in clinical trials for a variety of cancers, including gliomas.

Increasing insight also is emerging into the significant DNA methylation that occurs in cancer, which alters gene expression, resulting in changes like silencing tumor suppressor genes and producing additional cancer-causing genetic mutations.

Credit: 
Medical College of Georgia at Augusta University

The brain region responsible for self-bias in memory

image: Regions showing enhanced activation during the maintenance of self-associated stimuli (left), including both classic self-referential processing regions (VMPFC) and regions in the working memory network.

Image: 
Yin et al., JNeurosci 2021

A brain region involved in processing information about ourselves biases our ability to remember, according to new research published in JNeurosci.

People are good at noticing information about themselves, like when your eye jumps to your name in a long list or you manage to hear someone address you in a noisy crowd. This self-bias extends to working memory, the ability to actively think about and manipulate bits of information: people are also better at remembering things about themselves.

To pinpoint the source of this bias, Yin et al. measured participants' brain activity in an fMRI scanner while they tried to remember the location of different colored dots representing themselves, a friend, or a stranger. The participants' fastest response time came when recalling the dot representing themselves, even though it was an arbitrary connection. When people held the self-representing dot in working memory, they had greater activity in the ventromedial prefrontal cortex (VMPFC) -- an area involved in processing self-relevant information. Greater synchrony between the VMPFC and working memory regions corresponded to faster response times. When the researchers interfered with VMPFC activity with transcranial direct current stimulation, the self-bias disappeared, indicating activity in the region drives the bias.

Credit: 
Society for Neuroscience

Green med diet cuts non-alcoholic fatty liver disease by half - Ben-Gurion U. study

image: MRI photos illustrate the green MED diet effect on hepatic fat loss.

Image: 
Gut 2021

BEER-SHEVA, Israel...January 18, 2021 - A green Mediterranean (MED) diet reduces intrahepatic fat more than other healthy diets and cuts non-alcoholic fatty liver disease (NAFLD) in half, according to a long-term clinical intervention trial led by Ben-Gurion University of the Negev researchers and a team of international colleagues.

The findings were published in Gut, a leading international journal focused on gastroenterology and hepatology.

"Our research team and other groups over the past 20 years have proven through rigorous randomized long-term trials that the Mediterranean diet is the healthiest," says lead researcher Prof. Iris Shai, an epidemiologist in the BGU School of Public Health who is also an adjunct professor at the Harvard T.H. Chan School of Public Health. "Now, we have refined that diet and discovered elements that can make dramatic changes to hepatic fat and other key health factors." Other Harvard investigators are Profs. Meir Stampfer and Frank Hu, chair of the Department of Nutrition at the Chan School.

NAFLD affects 25% to 30% of people in the United States and Europe. While some fat is normal in the liver, excessive fat (5% or higher) leads to insulin resistance, type 2 diabetes, cardiovascular risk, as well as decreased gut microbiome diversity and microbial imbalance. Since no drug is currently available to treat fatty liver, the only intervention is weight loss and curtailing of alcohol consumption.

This MRI-nutritional clinical trial, called DIRECT-PLUS and conducted by an international research team led by Prof. Shai, is the first to develop and test a new green Mediterranean diet. This modified MED diet is rich in vegetables, includes a daily intake of walnuts (28 grams), and contains less processed and red meat. It is enriched with green components high in polyphenols, including three to four cups of green tea per day and 100 grams (frozen cubes) per day of a Mankai green shake. Mankai, an aquatic green plant also known as duckweed, is high in bioavailable protein, iron, B12, vitamins, minerals, and polyphenols.

"Addressing this common liver disease by targeted lifestyle intervention might promote a more effective nutritional strategy," says Dr. Anat Yaskolka-Meir, first author and member of the BGU School of Public Health. "This clinical trial demonstrates an effective nutritional tool for NAFLD beyond weight loss."

The 18-month DIRECT-PLUS trial began in 2017 at the Nuclear Research Center Negev in Dimona, Israel, when 294 workers in their fifties with abdominal obesity were randomly divided into three groups: a healthy dietary regimen, a Mediterranean diet, and a green Mediterranean diet. In addition to the diet, all participants were given a physical exercise regimen with a free gym membership. The participants underwent MRI scans to quantify the exact proportion of excess intrahepatic fat before and after the trial.

The results showed that every diet led to liver fat reduction. However, the green MED diet resulted in the greatest reduction of hepatic fat (-39%), as compared to the traditional Mediterranean diet (-20%) and the healthy dietary guidelines (-12%). The results were significant after adjusting for weight loss.

Overall, the green MED diet produced dramatic reductions in fatty liver. NAFLD prevalence dropped from 62% at baseline to 31.5% in the green Mediterranean group, compared with 47.9% in the Mediterranean group and 54.8% in the healthy dietary regimen group.

Specifically, greater Mankai and walnut intake and less red/processed meat intake were significantly associated with the extent of IHF loss, after controlling for other variables. Both MED groups had significantly higher total plasma polyphenol levels. More specific polyphenols, found in walnuts and Mankai, were detected in the green MED group. The researchers hypothesize the effect of polyphenols and the reduction in red meat play a role in liver fat reduction.

Credit: 
American Associates, Ben-Gurion University of the Negev

Smart vaccine scheme quick to curb rabies threat in African cities

image: A child brings his puppy for vaccination in Malawi

Image: 
Mission Rabies

More people could be protected from life-threatening rabies thanks to an agile approach to dog vaccination using smart phone technology to spot areas of low vaccination coverage in real time.

Vets used a smart phone app to help them halve the time it takes to complete dog vaccination programmes in the Malawian city of Blantyre.

The custom-made app lets them quickly spot areas with low inoculation rates in real time, allowing them to jab more dogs more quickly, and with fewer staff.

Rabies is a potentially fatal disease passed on to humans primarily through dog bites. It is responsible for some 60,000 deaths worldwide each year, 40 per cent of which are children. It places a huge financial burden on some of the world's poorest countries.

Researchers predict that more than one million people globally will die from rabies between 2020 and 2035 if dog vaccination rates - coupled with treatment immediately following a bite - do not increase.

Current mass vaccination programmes include door-to-door visits, which ensure high uptake, but are costly and time consuming. Drop-in centres are more efficient, but do not treat as many dogs, and can be difficult for owners to reach.

Research showed that distance from drop-in centres was the biggest reason why owners did not get their dogs vaccinated.

To overcome this, vets led by the University of Edinburgh and the charity Mission Rabies applied their data-driven approach using the app developed with the World Veterinary Service. The app allows the team to record data on vaccinations and access GPS locations.

The team increased the numbers of drop-in centres within around 800 metres of owners' homes from 44 to 77 - a distance that their research indicated most owners were willing to walk.

In areas with low uptake, they used 'roaming': vaccine stations quickly set up to serve localised areas with low vaccine coverage, such as at the end of a street. The vets also engaged with local communities and media to raise awareness of the scheme.
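
As a purely hypothetical sketch of the kind of real-time check such an app enables, the snippet below groups vaccination records by area and flags neighbourhoods whose estimated coverage falls below the campaign's 70 per cent target, so a roaming station could be dispatched there; the area names, counts, and field names are invented.

```python
# Hypothetical sketch of flagging low-coverage areas from field records.
# `records` stands in for data collected by the vaccination app; numbers are invented.
records = [
    {"area": "Area A", "vaccinated": 310, "estimated_dogs": 520},
    {"area": "Area B", "vaccinated": 480, "estimated_dogs": 600},
    {"area": "Area C", "vaccinated": 150, "estimated_dogs": 400},
]

TARGET = 0.70  # coverage target cited for the campaign

for r in records:
    coverage = r["vaccinated"] / r["estimated_dogs"]
    if coverage < TARGET:
        print(f"{r['area']}: {coverage:.0%} coverage -> deploy roaming station")
```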

They targeted 70 per cent of the city's dog population - around 35,000 animals - and vaccinated them in 11 days, compared with 20 days using the previous approach. The scheme needed 904 staff-days, as opposed to 1719.

Researchers say the findings have the potential to not only benefit urban dog populations, but also farmers whose livestock is at risk of infection from dogs.

Lead researcher Dr Stella Mazeri, veterinary epidemiologist at the University of Edinburgh's Royal (Dick) School of Veterinary Studies, said: "Delivering vaccinations to at-risk populations in a highly efficient manner is a major societal challenge. Attempts to eliminate rabies remain underfunded despite knowing that dog vaccination is a highly effective way to reduce the disease burden in both humans and dogs. We are pleased to see that the real time interrogation of data has allowed us to improve the efficiency of vaccination clinics."

Luke Gamble, CEO of Mission Rabies, said, "This research provides an important piece of the puzzle in powering forward strategies to eliminate canine-transmitted rabies. Practically, this paper steers how we better implement campaigns to efficiently vaccinate hundreds of thousands of dogs against rabies in challenging environments, and this, in turn, prevents the deaths of thousands of children around the world each year. The amazing support of the University of Edinburgh in this field of research is genuinely saving lives."

Credit: 
University of Edinburgh

Latch, load and release: Elastic motion makes click beetles click, study finds

image: Illinois researchers Aimy Wissa, Marianne Alleyne and Ophelia Bolmin studied the motion of a click beetle's jump and present the first analytical framework to uncover the physics behind ultrafast motion by small animals.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- Click beetles can propel themselves more than 20 body lengths into the air, and they do so without using their legs. While the jump's motion has been studied in depth, the physical mechanisms that enable the beetles' signature clicking maneuver have not. A new study examines the forces behind this super-fast energy release and provides guidelines for studying extreme motion, energy storage and energy release in other small animals like trap-jaw ants and mantis shrimps.

The multidisciplinary study, led by University of Illinois Urbana-Champaign mechanical science and engineering professors Aimy Wissa and Alison Dunn, entomology professor Marianne Alleyne and mechanical science and engineering graduate student and lead author Ophelia Bolmin, is published in the Proceedings of the National Academy of Sciences.

Many insects use various mechanisms to overcome the limitations of their muscles. However, unlike other insects, click beetles use a unique hingelike tool in their thorax, just behind the head, to jump.

See a video describing this research [LINK: https://www.youtube.com/watch?v=1lmsWcvW7fM&feature=youtu.be]

To determine how the hinge works, the team used high-speed X-rays to observe and quantify how a click beetle's body parts move before, during and after the ultrafast energy release.

"The hinge mechanism has a peg on one side that stays latched onto a lip on the other side of the hinge," Alleyne said. "When the latch is released, there is an audible clicking sound and a quick unbending motion that causes the beetle's jump."

Seeing this ultrafast motion using a visible-light camera helps the researchers understand what occurs outside the beetle. Still, it doesn't reveal how internal anatomy controls the flow of energy between the muscle, other soft structures and the rigid exoskeleton.

Using the X-ray video recordings and an analytical tool called system identification, the team identified and modeled the forces and phases of the clicking motion.

The researchers observed large, yet relatively slow deformations in the soft tissue part of the beetles' hinge in the lead-up to the fast unbending movement.

"When the peg in the hinge slips over the lip, the deformation in the soft tissue is released extremely quickly, and the peg oscillates back and forth in the cavity below the lip before coming to a stop," Wissa said. "The fast deformation release and repeated, yet decreasing, oscillations showcase two basic engineering principles called elastic recoil and damping."

The acceleration of this motion is more than 300 times that of Earth's gravitational acceleration. That is a lot of energy coming from such a small organism, the researchers said.

"Surprisingly, the beetle can repeat this clicking maneuver without sustaining any significant physical damage," Dunn said. "That pushed us to focus on figuring out what the beetles use for energy storage, release and dissipation."

"We discovered that the insect uses a phenomenon called snap-buckling - a basic principle of mechanical engineering - to release elastic energy extremely quickly," Bolmin said. "It is the same principle that you find in jumping popper toys."

"We were surprised to find that the beetles use these basic engineering principles. If an engineer wanted to build a device that jumps like a click beetle, they would likely design it the same way nature did," Wissa said. "This work turned out to be a great example of how engineering can learn from nature and how nature demonstrates physics and engineering principles."

"These results are fascinating from an engineering perspective, and for biologists, this work gives us a new perspective on how and why click beetles evolved this way," Alleyne said. "This kind of insight may have never come to light, if not for this interdisciplinary collaboration between engineering and biology. It opens a new door for both fields."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

A new archaeology for the Anthropocene era

image: Archaeological studies of low-density, agrarian-based cities such as ancient Angkor Wat in Cambodia are increasingly being used to inform the development of more sustainable urban centres in the future.

Image: 
Alison Crowther

Indiana Jones and Lara Croft have a lot to answer for. Public perceptions of archaeology are often thoroughly outdated, and these characterisations do little to help.

Yet archaeology as practiced today bears virtually no resemblance to the tomb raiding portrayed in movies and video games. Indeed, it bears little resemblance to even more scholarly depictions of the discipline in the entertainment sphere.

A paper published today in Nature Ecology and Evolution aims to give pause to an audience that has been largely prepared to take such out-of-touch depictions at face value. It reveals an archaeology practiced by scientists in white lab coats, using multi-million-euro instrumentation and state-of-the-art computers.

It also reveals an archaeology poised to contribute in major ways to addressing such thoroughly modern challenges as biodiversity conservation, food security and climate change.

"Archaeology today is a dramatically different discipline to what it was a century ago," observes Nicole Boivin, lead author of the study and Director of the Institute's Department of Archaeology. "While the tomb raiding we see portrayed in movies is over the top, the archaeology of the past was probably closer to this than to present-day archaeology. Much archaeology today is in contrast highly scientific in orientation, and aimed at addressing modern-day issues."

Examining the research contributions of the field over the past few decades, the authors reach a clear conclusion - archaeology today has a great deal to contribute to addressing the challenges of the modern era.

"Humans in the present era have become one of the great forces shaping nature," emphasizes Alison Crowther, coauthor and researcher at both the University of Queensland and the MPI Science of Human History. "When we say we have entered a new, human-dominated geological era, the Anthropocene, we acknowledge that role."

How can archaeology, a discipline focused on the past, hope to address the challenges we face in the Anthropocene?

"It is clear that the past offers a vast repertoire of cultural knowledge that we cannot ignore," highlights Professor Boivin.

The two researchers show the many ways that data about the past can serve the future. By analysing what worked and didn't work in the past - effectively offering long-term experiments in human society - archaeologists gain insight into the factors that support sustainability and resilience, and the factors that work against them. They also highlight ancient solutions to modern problems.

"We show how researchers have improved the modern world by drawing upon information about the ways people in the past enriched soils, prevented destructive fires, created greener cities and transported water without fossil fuels," notes Dr. Crowther.

People also continue to use, and adapt, ancient technologies and infrastructure, including terrace and irrigation systems that are in some cases centuries or even millennia old.

But the researchers are keen to highlight the continued importance of technological and social solutions to climate change and the other challenges of the Anthropocene.

"It's not about glorifying the past, or vilifying progress," emphasizes Professor Boivin. "Instead, it's about bringing together the best of the past, present and future to steer a responsible and constructive course for humanity."

Credit: 
Max Planck Institute of Geoanthropology

UCI researchers: Climate change will alter the position of the Earth's tropical rain belt

Irvine, Calif. -- Future climate change will cause a regionally uneven shifting of the tropical rain belt - a narrow band of heavy precipitation near the equator - according to researchers at the University of California, Irvine and other institutions. This development may threaten food security for billions of people.

In a study published today in Nature Climate Change, the interdisciplinary team of environmental engineers, Earth system scientists and data science experts stressed that not all parts of the tropics will be affected equally. For instance, the rain belt will move north in parts of the Eastern Hemisphere but will move south in areas in the Western Hemisphere.

According to the study, a northward shift of the tropical rain belt over eastern Africa and the Indian Ocean will result in future increases of drought stress in southeastern Africa and Madagascar, in addition to intensified flooding in southern India. A southward creeping of the rain belt over the eastern Pacific Ocean and Atlantic Ocean will cause greater drought stress in Central America.

"Our work shows that climate change will cause the position of Earth's tropical rain belt to move in opposite directions in two longitudinal sectors that cover almost two thirds of the globe, a process that will have cascading effects on water availability and food production around the world," said lead author Antonios Mamalakis, who recently received a Ph.D. in civil & environmental engineering in the Henry Samueli School of Engineering at UCI and is currently a postdoctoral fellow in the Department of Atmospheric Science at Colorado State University.

The team made the assessment by examining computer simulations from 27 state-of-the-art climate models and measuring the tropical rain belt's response to a future scenario in which greenhouse gas emissions continue to rise through the end of the current century.

Mamalakis said the sweeping shift detected in his work was disguised in previous modelling studies that provided a global average of the influence of climate change on the tropical rain belt. Only by isolating the response in the Eastern and Western Hemisphere zones was his team able to highlight the drastic alterations to come over future decades.
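The release does not state exactly how the rain belt's position was quantified; one common diagnostic is the precipitation-weighted mean latitude, averaged separately over longitudinal sectors. The Python sketch below illustrates that idea on a synthetic precipitation field - the grid, the field itself and the sector boundaries are all assumptions for illustration, not the study's data or method.

    import numpy as np

    def rainbelt_latitude(precip, lats):
        """Precipitation-weighted mean latitude for every longitude column.
        precip: 2-D array (lat, lon); lats: 1-D array of latitudes in degrees."""
        total = precip.sum(axis=0)
        return (precip * lats[:, None]).sum(axis=0) / np.where(total > 0, total, np.nan)

    # Synthetic 1-degree grid with a rain belt that meanders about the equator.
    lats = np.arange(-30.0, 31.0, 1.0)
    lons = np.arange(0.0, 360.0, 1.0)
    precip = np.exp(-((lats[:, None] - 5.0 * np.sin(np.radians(lons))) ** 2) / 50.0)

    belt_lat = rainbelt_latitude(precip, lats)

    # Average the belt position over two longitudinal sectors, loosely mimicking
    # the Eastern / Western Hemisphere split that revealed the opposing shifts.
    eastern = belt_lat[(lons >= 0) & (lons < 180)].mean()
    western = belt_lat[lons >= 180].mean()
    print(f"Eastern sector: {eastern:+.2f} deg, Western sector: {western:+.2f} deg")

Applied to climate-model output rather than synthetic data, a sector-by-sector diagnostic of this kind is what allows a northward shift in one hemisphere and a southward shift in the other to show up instead of cancelling in a global average.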

Co-author James Randerson, UCI's Ralph J. & Carol M. Cicerone Chair in Earth System Science, explained that climate change causes the atmosphere to heat up by different amounts over Asia and the North Atlantic Ocean.

"In Asia, projected reductions in aerosol emissions, glacier melting in the Himalayas and loss of snow cover in northern areas brought on by climate change will cause the atmosphere to heat up faster than in other regions," he said. "We know that the rain belt shifts toward this heating, and that its northward movement in the Eastern Hemisphere is consistent with these expected impacts of climate change."

He added that the weakening of the Gulf Stream current and deep-water formation in the North Atlantic is likely to have the opposite effect, causing a southward shift in the tropical rain belt across the Western Hemisphere.

"The complexity of the Earth system is daunting, with dependencies and feedback loops across many processes and scales," said corresponding author Efi Foufoula-Georgiou, UCI Distinguished Professor of Civil & Environmental Engineering and the Henry Samueli Endowed Chair in Engineering. "This study combines the engineering approach of system's thinking with data analytics and climate science to reveal subtle and previously unrecognized manifestations of global warming on regional precipitation dynamics and extremes."

Foufoula-Georgiou said that a next step is to translate those changes into impacts on the ground, in terms of flooding, droughts, infrastructure and ecosystem change, to guide adaptation, policy and management.

Credit: 
University of California - Irvine