Tech

Mapping citrus microbiomes: The first step to finding plant-microbiome treasures

image: On the left, PhD candidate Jin Xu in a citrus orchard; on the right, Yunzeng Zhang, who recently became a professor at Yangzhou University in China. In the middle is the first figure of the paper, depicting the different compartments of the citrus microbiome.

Image: 
Tess Deyett

Due to their complexity and microscopic scale, plant-microbe interactions can be quite elusive. Each researcher focuses on a piece of the interaction, and it is hard to find all the pieces, let alone assemble them into a comprehensive map to find the hidden treasures within the plant microbiome. This is the purpose of a review: to take all the pieces from all the different sources and put them together into something comprehensive that can guide researchers to hidden clues and new associations that unlock the secrets of a system. Like any good treasure map, there are still gaps in the knowledge, and the searcher must be clever enough to fill in those gaps to find the "X". Without a map, there is only aimless wandering; with a map, there is hope of finding the hidden treasures of the plant microbiome.

Yunzeng Zhang, Nian Wang, and colleagues recently put together a map, "The Citrus Microbiome: From Structure and Function to Microbiome Engineering and Beyond," which they published in the Phytobiomes Journal. Their map outlines the structure and potential functions of the plant microbiome and how this knowledge can guide us to new engineering feats and a greater understanding of the hidden treasures of the plant microbiome. Once revealed, insights into the microbiome may help researchers grow more resilient citrus plants that are both more suitable for changing climates and more capable of surviving pathogen pressures.

Citrus is a globally important perennial fruit crop that has many economic and emotional ties to society. It is cultivated in more than 140 countries, exposing it to a variety of environmental and disease pressures. Our relationships with citrus are diverse, but not as diverse as the relationship between the citrus and its microbiome. As we shift to an ever-destabilizing environment, it becomes more imperative to understand these interactions to ensure citrus will be present in our lives in the years to come.

Nian Wang believes, "harnessing citrus-microbiome interactions to address biotic and abiotic stressors offers an opportunity to increase sustainable citrus production." To create this treasure map of citrus-microbe interactions, Wang and colleagues established The International Citrus Microbiome Consortium in 2015. Their goal was to form international collaborations to address this global problem. Before understanding the ins and outs of this ecosystem, the team had to know which individual microbes build the citrus microbiome community and how the microbial community may function throughout the different niches of the plant.

After identifying microbes and some of the overall functions the microbiome is performing collectively, the question turns to how they are doing it. Once researchers understand the who, what, and how of the microbiome, they can then start to engineer new microbes or design synthetic communities to perform a specific function that may address current challenges such as crop production shortfalls, disease spread, and other citrus health ailments. In their review, Zhang, Wang, and colleagues specifically address the first question by combining multiple metagenomic studies into one cohesive paper and confidently stating which microbes make up the citrus microbiome in the rhizosphere (around the root), rhizoplane (on the root), endorhiza (inside the root), and phyllosphere (leaf surface).

This review highlights the exhaustive research that has been done on the citrus microbiome thus far and cleverly assembles this knowledge into one comprehensive figure. The citrus rhizosphere was enriched with microbes belonging to the phyla Proteobacteria and Bacteroidetes, which is common in other plant rhizospheres. Bradyrhizobium and Burkholderia were the most dominant groups in the transition from rhizosphere to rhizoplane. The endorhiza microbiome biomass is approximately a fifth of the rhizosphere biomass and is dominated by Proteobacteria, Firmicutes, and Actinobacteria, as is the phyllosphere.

But the authors note that there is still so much to learn and discover. At this point, researchers only know which microbes are present in some of the plant's tissues, meaning there is still a significant amount of research to be done to unlock the hidden treasures within the citrus microbiome. With this knowledge, it's time to start incorporating multiomic technologies, such as transcriptomics (the study of RNA) and metabolomics (the study of metabolites), in addition to the DNA-sequencing techniques that have previously been used. It is an exciting time for plant and agricultural scientists, who are on the cusp of integrating novel technologies to understand a world we can't even see: the world of the microbiome.

"I think artificial intelligence will be critical for us to mine a humongous amount of data. Regarding how to utilize the microbiome, I think synthetic community or consortia of microbes, as well as CRISPR-mediated genome editing, will provide the most promising path for the application," said Wang. An integrated problem of this size, involving so many moving parts, does not only need an international consortium of top researchers but also a highly interdisciplinary team that includes not only plant scientists, microbiologists, and plant pathologists but also experts in the fields of bioinformatics, horticulture, and computer science. Only banded together can this crew of scientists put together the missing pieces of the map and unlock the treasures within the citrus microbiome before it's too late.

Credit: 
American Phytopathological Society

As global climate shifts, forests' futures may be caught in the wind

image: Unlike daily weather, global prevailing wind patterns are believed to be relatively stable over millennial timescales. A new study finds that these wind currents have helped shape genetic diversity in the world's forests and could impact how well different tree populations are able to adapt to a changing climate. In this image, black arrows and white paths represent global prevailing wind directions, and colored dots represent the locations of the tree populations whose genetic data were analyzed in the study.

Image: 
Matthew Kling, UC Berkeley

Berkeley -- Forests' ability to survive and adapt to the disruptions wrought by climate change may depend, in part, on the eddies and swirls of global wind currents, suggests a new study by researchers at the University of California, Berkeley.

Unlike animals, the trees that make up our planet's forests can't uproot and find new terrain if conditions get tough. Instead, many trees produce seeds and pollen that are designed to be carried away by the wind, an adaptation that helps them colonize new territories and maximize how far they can spread their genes.

The new study compared global wind patterns with previously published genetic data on nearly 100 tree and shrub species collected from forests around the world, finding significant correlations between wind speed and direction and genetic diversity throughout our planet's forests. The findings are the first to show that wind may not only influence the spread of an individual tree's or species' genes but may also help shape genetic diversity and direct the flow of gene variants across entire forests and landscapes.

Understanding how genetic variants move throughout a species range will become increasingly important as climate change alters the conditions of local habitats, the researchers say.

"How trees move and how plants move, in general, is a big area of uncertainty in plant ecology because it's hard to study plant movements directly -- they happen as a result small, rare movements of seeds and pollen," said study lead author Matthew Kling, a postdoctoral researcher in integrative biology at UC Berkeley. "However, to predict how species distributions, and plant ecology, in general, will respond to climate change, we need to understand how these species are going to be able to move long distances to track the movement of natural resources and climate conditions over time."

While animals, birds and insects can also disperse pollen and seeds, wind's strong directionality makes it particularly important for understanding how different tree species will respond to climate change, said study senior author David Ackerly, a professor and dean of UC Berkeley's Rausser College of Natural Resources.

"As the world warms, many plants and animals will need to move to places with suitable habitat in the future to survive," Ackerly said. "Wind dispersal has a particularly interesting connection to climate change because wind can either push the genes or organisms in the right direction, toward more suitable habitat, or in the opposite direction. It may be the only terrestrial dispersal vector that can be aligned with or against the direction of climate change."

Any way the wind blows

Despite the fickle nature of daily weather conditions, large-scale global wind patterns are largely determined by Earth's shape, rotation and the locations of the continents, and are believed to be relatively stable over millennial timescales. These wind patterns are also unlikely to be dramatically altered by climate change, Kling said.

To examine whether these global prevailing winds have shaped the genetic diversity of modern-day forests, Kling compared current planetary wind models -- compiled from 30 years of global wind data -- with genetic data from 72 publications covering 97 tree and shrub species and 1,940 plant populations worldwide.

Kling's analysis revealed three key ways that global wind patterns are shaping forests' genetic diversity. First, tree populations that are connected by stronger wind currents tend to be more genetically similar than tree populations that are not as connected. Second, tree populations that are more downwind, or farther in the direction that the wind blows, tend to have more genetic diversity in general. Finally, genetic variants are more likely to disperse in the direction of the wind.
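At its core, this kind of analysis is a statistical comparison of pairwise matrices across many populations. As a rough, hypothetical illustration of such a test (not the authors' actual pipeline, and using random stand-in data), the sketch below runs a Mantel-style permutation test correlating a wind-connectivity matrix with a genetic-similarity matrix:

```python
# Mantel-style permutation test on made-up pairwise matrices.
# 'wind' and 'genetic' are random placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 20  # number of tree populations

wind = rng.random((n, n))
wind = (wind + wind.T) / 2              # hypothetical wind connectivity
genetic = 0.5 * wind + 0.5 * rng.random((n, n))
genetic = (genetic + genetic.T) / 2     # hypothetical genetic similarity

iu = np.triu_indices(n, k=1)            # unique population pairs

observed = np.corrcoef(wind[iu], genetic[iu])[0, 1]

# Shuffle population labels to build a null distribution
perms = 9999
null = np.empty(perms)
for i in range(perms):
    p = rng.permutation(n)
    null[i] = np.corrcoef(wind[p][:, p][iu], genetic[iu])[0, 1]

p_value = (np.sum(null >= observed) + 1) / (perms + 1)
print(f"r = {observed:.3f}, one-sided p = {p_value:.4f}")
```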

Though these patterns can only be statistically validated by looking at many populations of trees throughout the world, they can sometimes be evident when examining the genetic diversity of a single tree species across its habitat range, Kling said.

For example, the island scrub oak, or Quercus pacifica, is native to the Channel Islands in Southern California, where prevailing winds tend to blow to the southeast. Kling's analysis showed that scrub oak populations on islands that are connected by higher wind speeds are more genetically similar to each other. Genetic variants also appear to have dispersed more frequently to the islands in the southward and eastward directions than the reverse, leading to greater genetic diversity to the south and east.

Kling hopes that recognizing these patterns will help conservationists and ecologists better understand how well tree and plant species in different regions of the globe will adapt to a warming world.

"Populations in different portions of a species range have evolved over time to be well-adapted to the climate in that specific part of the range, and as climate changes, they can become out of sync with those conditions," Kling said. "Understanding how quickly genetic variants from elsewhere in the species range can get where they are needed is important for understanding how quickly the species will respond to climate change, and how vulnerable, versus resilient, a given population might be."

Credit: 
University of California - Berkeley

'Unmaking' a move: Correcting motion blur in single-photon images

image: Single-photon imaging is the future of imaging technology, thanks to its high temporal resolution and excellent image quality. Researchers at Tokyo University of Science developed a novel deblurring method that is effective even for images with multiple moving objects that cause motion blur.

Image: 
Free-Photos from Pixabay

Imaging technology has come a long way since the beginning of photography in the mid-19th century. Now, many state-of-the-art cameras for demanding applications rely on mechanisms that are considerably different from those in consumer-oriented devices. One of these cameras employs what is known as "single-photon imaging," which can produce vastly superior results in dark conditions and fast dynamic scenes. But how does single-photon imaging differ from conventional imaging?

When taking a picture with a regular CMOS camera, like the ones on smartphones, the camera sensor is open to a large influx of photons during a predefined exposure time. Each pixel in the sensor grid outputs an analog value that depends on the number of photons that hit that pixel during exposure. However, this type of imaging has few ways to deal with moving objects; the movement of the object has to be much slower than the exposure time to avoid blurring. In contrast, single-photon cameras capture a rapid burst of consecutive frames with very short individual exposure times. These frames are binary--a grid of 1s and 0s that respectively indicate whether one photon arrived at each pixel or not during exposure. To reconstruct an actual picture from these binary frames (or bit planes), many of them have to be processed into a single non-binary image. This can be achieved by assigning different levels of brightness to all the pixels in the grid, depending on how many of the bit planes had a "1" for each pixel.
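As a toy illustration of that reconstruction step (the dimensions, photon rates and normalization below are made up, not the camera's actual pipeline), summing simulated bit planes per pixel yields a grayscale image:

```python
# Reconstruct a grayscale image from simulated binary bit planes:
# brightness is proportional to how often each pixel detected a photon.
import numpy as np

rng = np.random.default_rng(1)
height, width, n_frames = 64, 64, 1000

# Toy scene: photon arrival probability rises from left to right
scene = np.linspace(0.05, 0.6, width) * np.ones((height, 1))

# Each bit plane holds a 1 where a photon hit the pixel during exposure
bit_planes = rng.random((n_frames, height, width)) < scene

counts = bit_planes.sum(axis=0)                     # per-pixel photon tally
image = (255 * counts / n_frames).astype(np.uint8)  # normalize to 8-bit gray
print(image.min(), image.max())  # darker on the left, brighter on the right
```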

Besides its higher speed, the completely digital nature of single-photon imaging allows for designing clever image reconstruction algorithms that can make up for technical limitations or difficult scenarios. At Tokyo University of Science, Japan, Professor Takayuki Hamamoto has been leading a research team focused on taking the capabilities of single-photon imaging further. In the latest study by Prof. Hamamoto and his team, which was published in IEEE Access, they developed a highly effective algorithm to fix the blurring caused by motion in the imaged objects, as well as common blurring of the entire image such as that caused by the shaking of the camera.

Their approach addresses many limitations of existing deblurring techniques for single-photon imaging, which produce low-quality pictures when multiple objects in the scene are moving at different speeds and dynamically overlapping each other. Instead of adjusting the entire image according to the estimated motion of a single object or on the basis of spatial regions where the object is considered to be moving, the proposed method employs a more versatile strategy.

First, a motion estimation algorithm tracks the movement of individual pixels through statistical evaluations on how bit values change over time (over different bit planes). In this way, as demonstrated experimentally by the researchers, the motion of individual objects can be accurately estimated. "Our tests show that the proposed motion estimation technique produced results with errors of less than one pixel, even in dark conditions with few incident photons," remarks Prof. Hamamoto.

The team then developed a deblurring algorithm that uses the results of the motion estimation step. This second algorithm groups pixels with a similar motion together, thereby identifying in each bit plane separate objects moving at different speeds. This allows for deblurring each region of the image independently according to the motions of objects that pass through it. Using simulations, the researchers showed that their strategy produced very crisp and high-quality images, even in low-light dynamic scenes crowded with objects coursing at disparate velocities.
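The general idea behind such motion-compensated aggregation can be sketched compactly. This is a simplified stand-in, assuming a single region with a constant estimated velocity, not the authors' full grouping and deblurring algorithm:

```python
# Undo a region's estimated motion before summing its bit planes,
# so a moving object stacks onto the same pixels instead of smearing.
import numpy as np

def motion_compensated_sum(bit_planes, velocity):
    """Sum binary frames after undoing constant per-frame motion.

    bit_planes: array (T, H, W) of 0/1 frames covering one moving region
    velocity:   (dy, dx) estimated motion in pixels per frame
    """
    T, H, W = bit_planes.shape
    out = np.zeros((H, W))
    for t, plane in enumerate(bit_planes):
        dy = int(round(-velocity[0] * t))
        dx = int(round(-velocity[1] * t))
        # np.roll wraps at the borders; a real pipeline would crop or pad
        out += np.roll(plane, shift=(dy, dx), axis=(0, 1))
    return out
```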

Overall, the results of this study aptly showcase how greatly single-photon imaging can be improved if one gets down to developing effective image processing techniques. "Methods for obtaining crisp images in photon-limited situations would be useful in several fields, including medicine, security, and science. Our approach will hopefully lead to new technology for high-quality imaging in dark environments, like outer space, and super-slow recording that will far exceed the capabilities of today's fastest cameras," says Prof. Hamamoto. He also states that even consumer-level cameras might eventually benefit from progress in single-photon imaging.

We are certainly getting closer to a new era in digital photography, and studies like this one are crucial for paving the way toward that future!

Credit: 
Tokyo University of Science

Monash study may help boost peptide design

Peptides " short strings of amino acids" play a vital role in health and industry with a huge range of medical uses including in antibiotics, anti-inflammatory and anti-cancer drugs. They are also used in the cosmetics industry and for enhancing athletic performance. Altering the structure of natural peptides to produce improved compounds is therefore of great interest to scientists and industry. But how the machineries that produce these peptides work still isn't clearly understood.

Associate Professor Max Cryle from Monash University's Biomedicine Discovery Institute (BDI) has revealed a key aspect of peptide machinery in a paper published in Nature Communications today, providing a path to the "Holy Grail" of re-engineering peptides.

The findings will advance his lab's work into re-engineering glycopeptide antibiotics to counter the pressing global threat posed by antimicrobial resistance, and more broadly to improving the properties of peptides generally.

"Peptide synthesis machineries are often largely modular assembly lines, with each module comprised of different component parts. Changing what you make in these assembly lines, that is, peptides with new bioactivities, is a "holy grail" in re-design," Associate Professor Cryle said. "One of the things we tried to understand in this study was where the selectivity of these machineries come from - they're very selective for making one specific peptide and understanding where this specificity comes from is a bit of a mystery," he said.

"We were able to structurally characterise a part of such a machinery that generates the links within the peptides at a stage that hasn't been previously determined. What we showed is that these domains responsible for the linking of amino acids into peptides don't play a general role in selecting the amino acids during this process."

"This is good news from a re-engineering point of view because it means we don't need to concern ourselves with changing multiple pieces of the machinery to make single amino acid changes, we just need to focus on changing the building block that goes in and that's quite promising."

Associate Professor Cryle led a multidisciplinary team of scientists who enlisted a variety of techniques to model the peptide structures including using the Australian Synchrotron for X-ray crystallography along with chemical and biochemical techniques. He collaborated with groups in Canberra, Brisbane and Germany who helped with computational modelling and bioinformatics.

"Our ability to understand the enzymes that make natural peptides is key to our ability to produce improved ones to target issues like antimicrobial resistance," he said. "Now we can actually start to think about ways to change the machinery's acceptance of different building blocks and in this way we can make new peptides with improved antibacterial properties," he said.

In the future, a collaboration with Dr Evi Stegmann's group at the University of Tübingen in Germany will help translate the findings from a theoretical lab solution to eventually developing a commercial-scale production of new and improved antibiotics, he said.

Credit: 
Monash University

Integrating medical imaging and cancer biology with deep neural networks

image: Neural network framework.

Image: 
Smedley, Aberle, and Hsu, doi 10.1117/1.JMI.8.3.031906.

Despite our remarkable advances in medicine and healthcare, the cure for cancer continues to elude us. On the bright side, we have made considerable progress in detecting several cancers in earlier stages, allowing doctors to provide treatments that increase long-term survival. The credit for this is due to "integrated diagnosis," an approach to patient care that combines molecular information and medical imaging data to diagnose the cancer type and, eventually, predict treatment outcomes.

There are, however, several intricacies involved. The correlation of molecular patterns, such as gene expression and mutation, with image features (e.g., how a tumor appears in a CT scan), is commonly referred to as "radiogenomics." This field is limited by its frequent use of high-dimensional data, wherein the number of features exceeds that of observations. Radiogenomics is also plagued by several simplifying model assumptions and a lack of validation datasets. While machine learning techniques such as deep neural networks can alleviate this situation by providing accurate predictions of image features from gene expression patterns, there arises a new problem: we do not know what the model has learned.

"The ability to interrogate the model is critical to understanding and validating the learned radiogenomic associations," explains William Hsu, associate professor of radiological sciences at the University of California, Los Angeles, and director of the Integrated Diagnostics Shared Resource. Hsu's lab works on problems related to data integration, machine learning, and imaging informatics. In an earlier study, Hsu and his colleagues used a method of interpreting a neural network called "gene masking" to interrogate trained neural networks to understand learned associations between genes and imaging phenotypes. They demonstrated that the radiogenomic associations discovered by their model were consistent with prior knowledge. However, they only used a single dataset for brain tumor in their previous study, which means the generalizability of their approach remained to be determined.

Against this backdrop, Hsu and his colleagues, Nova Smedley, former graduate student and lead author, and Denise Aberle, a thoracic radiologist, have carried out a study investigating whether deep neural networks can represent associations between gene expression, histology (microscopic features of biological tissues), and CT-derived image features. They found that the network could not only reproduce previously reported associations but also identify new ones. The results of this study are published in the Journal of Medical Imaging.

The researchers used a dataset of 262 patients to train their neural networks to predict 101 image features from a massive collection of 21,766 gene expressions. They then tested the networks' predictive ability on an independent dataset of 89 patients, while pitting it against that of other models within the training dataset. Finally, they applied gene masking to determine the learned associations between subsets of genes and the type of lung cancer.

They found that the overall performance of neural networks at representing these datasets was better than the other models and generalizable to datasets from another population. Additionally, the results of gene masking suggested that the prediction of each imaging feature was related to a unique gene expression profile governed by biological processes.

The researchers are encouraged by their findings. "While radiogenomic associations have previously been shown to accurately risk stratify patients, we are excited by the prospect that our model can better identify and understand the significance of these associations. We hope this approach increases the radiologist's confidence in assessing the type of lung cancer seen on a CT scan. This information would be highly beneficial in informing individualized treatment planning," observes Hsu.

Credit: 
SPIE--International Society for Optics and Photonics

Timing is everything in new implant tech

image: A new version of wireless implants developed at Rice University allows for multiple stimulators, as seen here, to be programmed and magnetically powered from a single transmitter outside the body. The implants could be used to treat spinal cord injuries or as pacemakers.

Image: 
Secure and Intelligent Micro-Systems Lab/Rice University

HOUSTON - (May 10, 2021) - Implants that require a steady source of power but don't need wires are an idea whose time has come.

Now, for therapies that require multiple, coordinated stimulation implants, their timing has come as well.

Rice University engineers who developed implants for electrical stimulation in patients with spinal cord injuries have advanced their technique to power and program multisite biostimulators from a single transmitter.

A peer-reviewed paper about the advance by electrical and computer engineer Kaiyuan Yang and his colleagues at Rice's Brown School of Engineering won the best paper award at the IEEE's Custom Integrated Circuits Conference, held virtually in the last week of April.

The Rice lab's experiments showed that an alternating magnetic field, generated and controlled by a battery-powered transmitter outside the body, perhaps on a belt or harness, can deliver power and programming to two or more implants from at least 60 millimeters (2.3 inches) away.

The implants can be programmed with delays measured in microseconds. That could enable them to coordinate the triggering of multiple wireless pacemakers in separate chambers of a patient's heart, Yang said.

"We show it's possible to program the implants to stimulate in a coordinated pattern," he said. "We synchronize every device, like a symphony. That gives us a lot of degrees of freedom for stimulation treatments, whether it's for cardiac pacing or for a spinal cord."

The lab tested its tiny implants, each about the size and weight of a vitamin, on tissue samples, live Hydra vulgaris and in rodents. The experiments proved that, over at least a short distance, the devices were able to stimulate two separate hydra to contract and activate a fluorescent tag in response to electrical signals, and to trigger a response at controlled amplitudes along a rodent's sciatic nerve.

"There's a study on spinal cord regeneration that shows multisite stimulation in a certain pattern will help in the recovery of the neuro system," Yang said. "There is clinical research going on, but they're all using benchtop equipment. There are no implantable tools that can do this."

The lab's devices, called MagNI (for magnetoelectric neural implants) were introduced early last year as possible spinal cord stimulators that didn't require wires to power and program them. That means wire leads don't have to poke through the skin of the patient, a situation that would risk infection. The other alternative, as used in many battery-powered implants, is to replace them via surgery every few years.

Credit: 
Rice University

Informed tourists make whale watching safer for whales

image: A boat blocks two whales, which will have to change their course in order to avoid a collision. By studying both whale behavior and tourism practices, researchers from the Smithsonian and Arizona State University hope to provide scientific information that policy makers and tourist companies can use to make whale-watching safer for whales.

Image: 
Héctor Guzmán

According to the International Whaling Commission, whale-watching tourism generates more than $2.5 billion a year. After the COVID-19 pandemic, this relatively safe outdoor activity is expected to rebound. Two new studies funded by a collaborative initiative between the Smithsonian Tropical Research Institute (STRI) in Panama and Arizona State University (ASU) show how science can contribute to whale watching practices that ensure the conservation and safety of whales and dolphins.

"The Smithsonian's role is to provide scientific advice to policy makers as they pioneer management strategies to promote whale conservation," said STRI marine biologist, Hector Guzmán, whose previous work led the International Maritime Organization to establish shipping corridors in the Pacific to prevent container ships from colliding with whales along their migration routes. "Now we have methods to measure how whale behavior changes as a result of whale-watching practices. These two papers were published in a special volume of Frontiers in Marine Science dedicated to studies of whale-watching practices worldwide."

Whale watching is on the rise around the world and is part of sustainable tourism-development projects in countries such as Cambodia, Laos, Nicaragua and Panama. But critics say that jobs and increased income for tour operators and coastal residents cannot be justified if whales are harmed.

Whale-watching regulations in Panama, first established with Guzmán's help in 2005 and modified in 2017 and 2020, prohibit activities that cause whales to change their behavior. The aim of the first study was to discover whether the presence of tourism boats caused the whales to change their behavior during the breeding season.

Researchers monitored humpback whales (Megaptera novaeangliae) during their August-September breeding season within Panama's Las Perlas Archipelago protected area. From a high vantage point on Contadora Island and from whale-watching vessels, they recorded, on 47 occasions, the number of tourist boats and whales present, as well as whale activity, including changes in direction, breaching, slapping the water, dives and spy-hops (raising the head above the water surface).

They discovered that whale-watching vessels frequently disregarded legal guidelines designed to protect the whales: deliberately chasing whales, getting too close to adult whales and calves, and forcing whales to change their behavior. Other notable observations included:

Tourist boats chased groups that included calves more often than groups of adults.

Groups that included a calf changed direction more often than did other group types.

Whales changed direction more often when more than two or three tourist boats were present.

Roughly 1,000 whale watchers visit the Las Perlas islands each year, and that number is growing. In the second study, researchers interviewed tourists waiting to return to the mainland at the Contadora airport to better understand the whale watching experience. They interviewed every third person waiting in line.

Ninety-nine percent of the tourists who saw whales reported seeing at least one behavior while whale watching, and 68% reported that their experience met or exceeded their expectations. Thirty percent said that they did not observe a whale. Half reported that they had observed either their boat or other nearby boats chasing whales at high speed, or that they had gotten closer to the whales than the distance permitted by law.

Breeding whales are threatened by marine pollution, ship strikes, climate change, noise and disturbances while they are resting, socializing and feeding. In the future, researchers hope to measure the amount of cortisol (a stress hormone) in whale fecal samples to find out if the animals are under stress, use better technology (e.g., theodolites--instruments that measure angles) to measure the distance between boats and whales, use drones with cameras to document interactions and continue to survey tourists to better understand whale watching and inform management strategies to keep these magnificent animals safe.

"I wanted to do a study with practical outcomes for conservation, not just another paper that sits on a shelf," said Katie Surrey, doctoral candidate at ASU and co-author of both papers. "In Las Perlas, where whales come to breed, we observed harassing behavior, like ten tourist boats surrounding a single mother and calf. But we also talked to tourists and operators who learned a significant amount about whales and champion better whale-watching practices and conservation efforts as a result. For my dissertation I plan to find out more about what motivates both the tourists and the operators, so that we can suggest ways to both improve their experience and safeguard the whales."

Credit: 
Smithsonian Tropical Research Institute

Geoscientists find that shallow wastewater injection drives deep earthquakes in Texas

image: Manoochehr Shirzaei, an associate professor of geosciences at Virginia Tech. Photo by Steven Mackay.

Image: 
Virginia Tech

In a newly published paper, Virginia Tech geoscientists have found that shallow wastewater injection -- not deep wastewater injection -- can drive widespread deep earthquake activity in unconventional oil and gas production fields.

Brine is a toxic wastewater byproduct of oil and gas production. Well drillers dispose of large quantities of brine by injecting it into subsurface formations, where its injection can cause earthquakes, according to Guang Zhai, a postdoctoral research scientist in the Department of Geosciences, part of the Virginia Tech College of Science, and a visiting assistant researcher at the University of California, Berkeley.

The findings appear in the May 10 edition of the journal Proceedings of the National Academy of Sciences. Joining Zhai on the paper are Manoochehr Shirzaei, an associate professor of geosciences at Virginia Tech, and Michael Manga, a professor and chair of Berkeley's Department of Earth and Planetary Science. In the U.S. Department of Energy-funded study, the team focused on the Delaware Basin in western Texas, one of the most productive unconventional hydrocarbon fields in the United States.

Since 2010, the basin has experienced a significant increase in shallow wastewater injection and widespread deep seismicity, including the recent 5.0 magnitude event near Mentone, Texas. Most of the earthquakes were relatively small, but some have been large and widely felt.

"It is quite interesting that injection above the thick, overall low-permeability shale reservoir can induce an earthquake within the deep basement, despite a minimal hydraulic connection," Zhai said. "What we have found is that the so-called poroelastic stresses can activate basement faults, which is originated from the fluid injection causing rock deformation."

Poroelasticity is the resulting interaction between fluid flow and solid deformations within a porous formation, here sandstone. "This finding is significant because it puts poroelastic stresses in the spotlight as the main driver for basinwide earthquakes in the Basin," said Shirzaei, who is also an affiliated faculty member of the Virginia Tech Global Change Center.

Yet, predicting the amount of seismic activity from wastewater injection is problematic because it involves numerous variables, one of which is injection depth, Zhai said. Although it is well known that fluid pressure increase due to deep injection is the dominant reason for the recent seismicity increase in the central and eastern United States, it is still questionable exactly how shallow injections cause earthquakes.

During the study, the team looked at how varying amounts of injected brine perturbed the crustal stresses deep under the Delaware Basin and how these disturbances lead to earthquakes on a given fault. Added Zhai, "Fluids such as brine and natural groundwater can both be stored and move through rocks that are porous."

The trio used data analytics and computer modeling to mimic the large volume of fluid extracted from shale reservoirs by more than 1,500 shale production wells from 1993 to 2020, with 400 wells injecting brine into sandstone formations from 2010 to 2020. To make the scenario realistic, the model included the mechanical properties of rocks in the Delaware Basin, Shirzaei said.

The team found that the basinwide earthquakes mainly occur where the deep stress increases because of shallow injection. This means there is a causal link between deep earthquakes and shallow fluid injection via elastic stress transfer.

"The deep stress change is sensitive to shallow aquifer properties, especially the hydraulic diffusivity, which describes the ease of fluid flow in porous medium," Manga said. "One question to ask is why some areas that host lots of shallow injection lack seismicity. Our approach offers a way to investigate other significant factors that control induced earthquakes."

In addition to human interventions, the tectonic settings themselves also help predetermine the magnitude and likelihood of the earthquake, Shirzaei said. This study and future work will provide a viable way to assess induced seismic hazards, combining natural and human factors. The ultimate goal: to minimize the hazards from disposing of wastewater during natural gas production until long-term, renewable energy technologies become available to all.

"As the future energy demands increase globally, dealing with the enormous amount of coproduced wastewater remains challenging, and safe shallow injection for disposal is more cost-efficient than deep injection or water treatment," Zhai said. "We hope the mechanism we find in this study can help people rethink the ways induced earthquakes are caused, eventually helping with better understanding them and mitigating their hazards."

Credit: 
Virginia Tech

Stanford researchers map how people in cities get a health boost from nature

image: New research maps out how parks, lakes, trees and other urban green spaces boost physical activity and overall human wellbeing in cities.

Image: 
Jacob Lund / Shutterstock

Your local city park may be improving your health, according to a new paper led by Stanford University researchers. The research, published in Proceedings of the National Academy of Sciences, lays out how access to nature increases people's physical activity - and therefore overall health - in cities. Lack of physical activity in the U.S. results in $117 billion a year in related health care costs and leads to 3.2 million deaths globally every year. It may seem like an intuitive connection, but the new research closes an important gap in understanding how building nature into cities can support overall human wellbeing.

"Over the past year of shelter-in-place restrictions, we've learned how valuable and fulfilling it can be to spend time outdoors in nature, especially for city-dwellers," said study lead author Roy Remme, a postdoctoral researcher at the Stanford Natural Capital Project at the time of research. "We want to help city planners understand where green spaces might best support people's health, so everyone can receive nature's benefits."

In cities, nature provides cooling shade to neighborhood streets, safe harbor for pollinators and rainwater absorption to reduce flooding. It's widely understood that physical activity improves human health, but how parks, lakes, trees and other urban green spaces boost physical activity and overall wellbeing is an unsolved piece of the puzzle.

Boosting health through nature activity

The team combined decades of public health research with information on nature's benefits to people in cities. They considered how activities like dog walking, jogging, cycling and community gardening are supported by cities' natural spaces. They also factored in things like distance to urban greenery, feelings of safety and accessibility to understand how those elements can alter the benefits of nature for different people. From tree-lined sidewalks to city parks and waterfronts, the team created a model framework to map out urban nature's physical health benefits.

The researchers' framework explores how people might choose to walk an extra few blocks to enjoy a blooming garden or bike to work along a river path, reaping the health benefits of physical activity they may have missed if not motivated by natural spaces.

In Amsterdam, city planners are currently implementing a new green infrastructure plan. Using the city as a hypothetical case study, the researchers applied their framework to understand how Amsterdam's plans to build or improve new parks might affect physical activity for everyone in the city. They also looked at the effects on different sub-populations, like youth, elderly and low-income groups. This example illustrates how the city could invest in urban nature to have the greatest physical activity benefits for human health.

The research will ultimately serve as the basis for a new health model in Natural Capital Project software - free, open-source tools that map the many benefits nature provides people. The software was recently used to inform an assessment of 775 European cities to understand the potential of nature-based solutions for addressing climate change. Eventually, the new health model software will be available to city planners, investors and anyone else interested in new arguments and tools for targeting investments in nature in cities.

Nature's contributions are multidimensional - they can support cognitive, emotional and spiritual well-being, as well as physical health. Previous work from the Natural Capital Project has shown many of these connections, but the new research adds an important link to physical health that had been missing from the equation.

"Nature experience boosts memory, attention and creativity as well as happiness, social engagement and a sense of meaning in life," said Gretchen Daily, senior author on the paper and faculty director of the Stanford Natural Capital Project. "It might not surprise us that nature stimulates physical activity, but the associated health benefits - from reducing cancer risks to promoting metabolic and other functioning - are really quite astonishing."

Equity in access to nature

As our world becomes more urbanized and city-centric, the ability to easily access outdoor natural spaces becomes increasingly challenging, especially for overburdened communities. Identifying where urban nature is missing in vulnerable or overburdened communities - then working to fill those gaps - could provide people with valuable new opportunities to improve their health. The researchers hope the new study will equip urban planners with a more complete understanding of the benefits nature can provide their communities.

"Our ultimate goal is to create more healthy, equitable and sustainable cities," said Anne Guerry, co-author and Chief Strategy Officer at the Natural Capital Project. "This research is actionable - and gets us one big step closer."

Credit: 
Stanford University

Solving the cocktail party problem

Conducting a discussion in a noisy place can be challenging when other conversations and background noises interfere with our ability to focus attention on our conversation partner. How the brain deals with the abundance of sounds in our environments, and prioritizes among them, has been a topic of debate among cognitive neuroscientists for many decades.

Often referred to as the "Cocktail Party Problem", this debate centers on whether we can absorb information from a few speakers in parallel, or whether we are limited to understanding speech from only one speaker at a time.

One reason this question is difficult to answer is that attention is an internal state not directly accessible to researchers. By measuring the brain activity of listeners as they attempt to focus attention on a single speaker and ignore a task-irrelevant one, we can gain insight into the internal operations of attention and how these competing speech stimuli are represented and processed by the brain.

In a study recently published in the journal eLife, researchers from Israel's Bar-Ilan University set out to explore whether words and phrases are identified linguistically or just represented in the brain as "acoustic noise", with no further linguistic processing applied.

"Answering this question helps us better understand the capacity and limitations of the human speech-processing system. It also gives insight into how attention helps us deal with the multitude of stimuli in our environments - helping to focus primarily on the task-at-hand, while also monitoring what is happening around us," says Dr. Elana Zion Golumbic, of Bar-Ilan University's Gonda (Goldschmied) Multidiciplinary Brain Research Center, who led the study.

Zion Golumbic and team measured brain activity of human listeners as they listened to two speech stimuli, each presented to a different ear. Participants were instructed to focus their attention on the content of one speaker, and to ignore the other.

The researchers found evidence that the so-called unattended speech, generated from background conversations and noise, is processed at both acoustic and linguistic levels, with responses observed in auditory and language-related areas of the brain.

Additionally, they found that the brain response to the attended speaker in language-related brain regions was stronger when it 'competed' with other speech (in comparison to non-speech competition). This suggests that the two speech-inputs compete for the same processing resources, which may underlie the increased listening effort required for staying focused when many people talk at once.

The study contributes to efforts to understand how the brain deals with the abundance of auditory inputs in our environment. It has theoretical implications for how we understand the nature of attentional selection in the brain. It also carries substantial practical potential for guiding the design of smart assistive devices that help individuals focus their attention better or navigate noisy environments. The methods developed here also provide a useful new approach for testing the basis for individual differences in the ability to focus attention in noisy environments.

Credit: 
Bar-Ilan University

The formation of the Amazon Basin influenced the distribution of manatees

image: Érica Martinha Silva de Souza, first author of the article. Manatees first split from their common ancestor after geological events isolated the South American region from the sea. The African species may have originated in a migration borne by marine currents.

Image: 
Erica Martinho

All three species of manatee now present on Earth share a common ancestor from which they split some 6.5 million years ago, when a huge lake in Amazonia, then linked to the Caribbean, was cut off from the sea. The African manatee Trichechus senegalensis is not as genetically close to the West Indian manatee T. manatus as was thought, and adaptation to this complex environment by the Amazonian manatee T. inunguis has left at least one mark in its genetic code. 

These are key findings of a study supported by FAPESP and published in Scientific Reports, with hitherto unknown details of the evolutionary history of these aquatic mammals. The authors are an international group of scientists led by researchers at the University of Campinas (UNICAMP) in the state of São Paulo, Brazil. They achieved the first sequencing of the mitochondrial DNA of all three manatee species.

“About 20 million years ago, Amazonia was connected to the Caribbean by Lake Pebas, a mega-wetland that has since disappeared. Manatees inhabited both Amazonia and the Caribbean – not the extant species but a common ancestor. Some 9 million years ago, the sea level fell, and Pebas shrank and became disconnected from the Caribbean. The manatees in Amazonia became semi-isolated. There was a sea inlet into the lake, but between 6 million and 5 million years ago Amazonia was totally cut off from the Caribbean. The populations became separate and began specializing in either a river or marine environment,” said Mariana Freitas Nery, principal investigator for the study. Nery is a professor at UNICAMP’s Institute of Biology and has a Young Investigator Grant from FAPESP.

There are few fossil records of manatees, but the researchers were able to reconstruct the evolutionary history of Trichechus by cross-referencing the existing information with geological and genetic data. Tissue samples were obtained via collaboration with researchers at the Mamirauá Institute of Sustainable Development in Amazonas, the Federal University of Minas Gerais (UFMG), and institutions in Belgium and the United States.

Mitochondrial DNA (inherited from the mother) contains fewer genes than nuclear DNA but is easier to sequence in the laboratory and provides crucial information on the evolution of any living being. “We were able to add information not found in the studies performed to date, especially on the African manatee. Existing phylogenies referred only to the species found in the Americas and even so only for some genes. It has always been hard to get access to material from T. senegalensis, but we succeeded thanks to this international collaboration. We ended up with a well-founded hypothesis on the distribution of these aquatic mammals,” Érica Martinha Silva de Souza, first author of the article, told Agência FAPESP. The study was part of her doctoral research at UNICAMP.

From South America to Africa

“It’s very interesting to see how the history of the Amazon Basin influenced the distribution of several fish, bird, reptile and mammal species. Our study shows clearly the influence of the formation of the Amazon Basin on the distribution of manatees,” Souza said.

In January 2021, while the authors were finishing the article, it was announced that the first fossils of a fourth member of the group, the extinct Western Amazon manatee Trichechus hesperamazonicus, had been dated. Fragments of the mandibles and palate found in what is now Rondônia state, Brazil, were dated to 45,000 years ago. No manatees have ever been sighted in the area.

According to the UNICAMP researchers, if fossils were found in Africa they would be of great help in establishing when manatees arrived on the continent, probably via sea currents. 

Studies conducted by other groups, in which the morphologies of the different species were compared and some genes were analyzed, found that the African manatee T. senegalensis, which inhabits the coastal waters between Senegal and northern Angola, as well as the rivers that flow into them, is closest to T. manatus, found in an area between the southeast coast of the US, the Caribbean and Northeast Brazil.

The mitochondrial DNA analyzed by the UNICAMP group showed that T. manatus is actually a closer relative of T. inunguis than of T. senegalensis. The links between T. senegalensis and T. manatus observed in other studies are probably due to characteristics of their habitats. 

Manatees feed on bottom-growing aquatic plants, and this explains the shape of their jaws and teeth. They can move comfortably between freshwater, sea water, and the brackish water of estuaries, although T. inunguis prefers freshwater. This flexibility is signaled by a mutation in ND4, a gene associated with the cellular respiratory chain. The same mutation has been detected in river dolphins, subterranean mammals and alpacas inhabiting high altitudes. Other studies have also shown that the mutation may be linked to changes in temperature in the environment and adaptations required to suit a low-energy diet to the needs of a large body, all of which applies to T. inunguis, the Amazonian species.

“We found what we call positive selection in this gene specific to the cellular respiratory chain,” Nery said. “The freshwater environment is complex and dynamic, with variations in temperature, sediment and acidity, especially in the Amazon Basin, so it was expected that the species would display more molecular ‘footprints’ of adaptation to this environment.”

All three species have been intensely hunted because they are docile and relatively unafraid of humans. Moreover, their habitats are under constant threat. As a result, they are classed as vulnerable by the International Union for Conservation of Nature (IUCN). Detailed knowledge is key to their conservation.

For this reason, the UNICAMP researchers support groups that are working on conservation. They are currently sequencing the whole nuclear genome of the manatees found in Brazil. “From the standpoint of evolutionary history, I don’t expect this sequencing to change much compared with what we’ve already found in the mitochondrial genome, but we’re looking for fresh information that will help us understand these animals more fully. So far we’ve written the most complete history possible,” Nery said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Graphene key for novel hardware security

image: A team of Penn State researchers has developed a new hardware security device that takes advantage of microstructure variations to generate secure keys.

Image: 
Jennifer McCann, Penn State

As more private data is stored and shared digitally, researchers are exploring new ways to protect data against attacks from bad actors. Current silicon technology exploits microscopic differences between computing components to create secure keys, but artificial intelligence (AI) techniques can be used to predict these keys and gain access to data. Now, Penn State researchers have designed a way to make the encrypted keys harder to crack.

Led by Saptarshi Das, assistant professor of engineering science and mechanics, the researchers used graphene -- a layer of carbon one atom thick -- to develop a novel low-power, scalable, reconfigurable hardware security device with significant resilience to AI attacks. They published their findings in Nature Electronics today (May 10).

"There has been more and more breaching of private data recently," Das said. "We developed a new hardware security device that could eventually be implemented to protect these data across industries and sectors."

The device, called a physically unclonable function (PUF), is the first demonstration of a graphene-based PUF, according to the researchers. The physical and electrical properties of graphene, as well as the fabrication process, make the novel PUF more energy-efficient, scalable, and secure against AI attacks that pose a threat to silicon PUFs.

The team first fabricated nearly 2,000 identical graphene transistors, which switch current on and off in a circuit. Despite their structural similarity, the transistors' electrical conductivity varied due to the inherent randomness arising from the production process. While such variation is typically a drawback for electronic devices, it's a desirable quality for a PUF, and one not shared by silicon-based devices.

After the graphene transistors were implemented into PUFs, the researchers modeled their characteristics to create a simulation of 64 million graphene-based PUFs. To test the PUFs' security, Das and his team used machine learning, a method that allows AI to study a system and find new patterns. The researchers trained the AI with the graphene PUF simulation data, testing to see if the AI could use this training to make predictions about the encrypted data and reveal system insecurities.
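That style of attack can be sketched generically: train a model on known challenge-response pairs (CRPs) and check whether it predicts unseen responses better than chance. The stand-in "PUF" below returns pure coin flips, modeling an ideally unpredictable device; the names and sizes are illustrative, not the paper's setup:

```python
# Train an ML attacker on simulated challenge-response pairs and test it.
# Accuracy near 0.5 means the model learned nothing beyond coin-flipping.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n_crps, n_bits = 20000, 64

challenges = rng.integers(0, 2, size=(n_crps, n_bits))
responses = rng.integers(0, 2, size=n_crps)  # no exploitable structure

X_train, X_test, y_train, y_test = train_test_split(
    challenges, responses, test_size=0.25, random_state=0)

attack = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50)
attack.fit(X_train, y_train)

print("attack accuracy:", attack.score(X_test, y_test))
```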

"Neural networks are very good at developing a model from a huge amount of data, even if humans are unable to," Das said. "We found that AI could not develop a model, and it was not possible for the encryption process to be learned."

This resistance to machine learning attacks makes the PUF more secure because potential hackers could not use breached data to reverse engineer a device for future exploitation, Das said. Even if the key could be predicted, the graphene PUF could generate a new key through a reconfiguration process requiring no additional hardware or replacement of components.

"Normally, once a system's security has been compromised, it is permanently compromised," said Akhil Dodda, an engineering science and mechanics graduate student conducting research under Das's mentorship. "We developed a scheme where such a compromised system could be reconfigured and used again, adding tamper resistance as another security feature."

With these features, as well as the capacity to operate across a wide range of temperatures, the graphene-based PUF could be used in a variety of applications. Further research can open pathways for its use in flexible and printable electronics, household devices and more.

Credit: 
Penn State

Recycling critical metals in e-waste: Make it the law, experts warn EU, citing raw material security

End-of-life circuit boards, certain magnets in disc drives and electric vehicles, EV batteries and other special battery types, and fluorescent lamps are among several electrical and electronic products containing critical raw materials (CRMs), the recycling of which should be made law, says a new UN-backed report funded by the EU.

A mandatory legal requirement to recycle and reuse CRMs in select e-waste categories is needed to protect elements essential to manufacturers of important electrical, electronic and other products from supply disruptions, says a European consortium behind the report, led by the Switzerland-based World Resources Forum.

The CEWASTE consortium warns that access to the CRMs in these products is vulnerable to geo-political tides. Recycling and reusing them is "crucial" to secure ongoing supplies for regional manufacturing of electrical and electronic equipment (EEE) essential for defence, renewable energy generation, LEDs and other green technologies, and to the competitiveness of European firms.

Today, recycling most CRM-rich products is not commercially viable; low and volatile CRM prices undermine efforts to improve European CRM recycling rates, which are close to zero in most cases.

The report (available post-embargo at cewaste.eu) identifies gaps in standards and proposes an improved, fully tested certification scheme to collect, transport, process and recycle this waste, including tools to audit compliance.

"A European Union legal framework and certification scheme, coupled with broad financial measures will foster the investments needed to make recycling critical raw materials more commercially viable and Europe less reliant on outside supply sources," says the consortium.

"Acceptance by the manufacturing and recycling industry is also needed, as the standards will only work when there is widespread adoption."

The report follows the 2020 EU action plan to make Europe less dependent on third countries for CRMs by, for example, diversifying supply from both primary and secondary sources while improving resource efficiency and circularity.

Adds the consortium: "By adopting this report's recommendations, the EU can be more self-sustaining, help drive the world's green agenda and create new business opportunities at home."

The project says the following equipment categories contain CRMs in concentrations high enough to facilitate recycling:

Printed circuit boards from IT equipment, hard disc drives and optical disc drives

Batteries from WEEE (waste electrical and electronic equipment) and end-of-life vehicles

Neodymium iron boron magnets from hard disc drives and from the electric motors of e-bikes, scooters and end-of-life vehicles (ELVs)

Fluorescent powders from cathode ray tubes (CRTs; in TVs and monitors) and fluorescent lamps

Recovery technologies and processes are well established for some CRMs, such as palladium from printed circuit boards or cobalt from lithium-ion batteries.

For other CRMs, ongoing development of recycling technology will soon make industrial-scale operations possible, but that development needs financial support and sufficient volumes to achieve cost-efficient operations.

Of 60+ requirements in European e-waste-related legislation and standards, few address the collection of CRMs in the key product categories, the consortium found.

The consortium proposes several additional technical, managerial, environmental, social and traceability requirements for facilities that collect, transport and treat waste, to be integrated into established standards such as the EN 50625 series.

The overall scheme was tested at European firms in Belgium, Italy, Portugal, Spain and Switzerland, as well as in Colombia, Rwanda and Turkey.

"Greater CRM recycling is a society-wide responsibility and challenge," says the consortium. "The relevant authorities must improve the economic framework conditions to make it economically viable."

CEWASTE project recommendations include:

Legislate a requirement to recycle specific critical raw materials in e-waste

Use market incentives to spur the economic viability of recovering CRMs and to stimulate the use of recovered CRMs in new products

Create platforms where demand for recycled components, materials and CRMs can meet supply

Raise awareness of the importance of CRM recycling

Consolidate fractions of CRM-rich products into quantities more attractive for recyclers

Improve access to information on CRM-rich components and monitor actual recycling

Enforce rules on shipments of CRM-rich fractions outside the EU and on adherence to technical standards along the value chain

Integrate CEWASTE norms and requirements into the European standard for e-waste treatment (EN 50625 series) and make the whole set legally binding

Support more targeted private investments in new technology research and development.

Credit: 
Terry Collins Assoc

This system helps robots better navigate emergency rooms

image: The team trained the algorithm on videos from YouTube, mostly coming from documentaries and reality shows, such as "Trauma: Life in the ER" and "Boston EMS." The set of more than 700 videos is available for other research teams to train other algorithms and robots.

Image: 
University of California San Diego

Computer scientists at the University of California San Diego have developed a more accurate navigation system that will allow robots to better negotiate busy clinical environments in general, and emergency departments in particular. The researchers have also released an open-source dataset of videos to help train robotic navigation systems in the future.

The team, led by Professor Laurel Riek and Ph.D. student Angelique Taylor, detail their findings in a paper for the International Conference on Robotics and Automation taking place May 30 to June 5 in Xi'an, China.

The project stemmed from conversations with clinicians over several years. The consensus was that robots would best help physicians, nurses and staff in the emergency department by delivering supplies and materials. But this means robots have to know how to avoid situations where clinicians are busy tending to a patient in critical or serious condition.

"To perform these tasks, robots must understand the context of complex hospital environments and the people working around them," said Riek, who holds appointments both in computer science and emergency medicine at UC San Diego.

Taylor and colleagues built the navigation system, the Safety Critical Deep Q-Network (SafeDQN), around an algorithm that takes into account how many people are clustered together in a space and how quickly and abruptly these people are moving. This is based on observations of clinicians' behavior in the emergency department. When a patient's condition worsens, a team immediately gathers around them to render aid. Clinicians' movements are quick, alert and precise. The navigation system directs the robots to move around these clustered groups of people, staying out of the way.
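
The release does not give SafeDQN's exact formulation, but the underlying idea of scoring positions by crowd density and motion can be sketched in a few lines. The toy cost function below, with hypothetical names and weights, penalizes candidate waypoints near dense, fast-moving groups; a learned planner would prefer low-cost waypoints.

```python
import numpy as np

def crowd_penalty(positions, velocities, waypoint, radius=2.0):
    """Toy cost for one candidate waypoint: penalize being near clusters
    of people, weighted by how fast those people are moving.

    positions  -- (N, 2) xy coordinates of tracked people
    velocities -- (N, 2) their velocity vectors
    waypoint   -- (2,) candidate robot position
    """
    dists = np.linalg.norm(positions - waypoint, axis=1)
    nearby = dists < radius                     # people within the radius
    density = nearby.sum()                      # crowding term
    speed = np.linalg.norm(velocities[nearby], axis=1).sum()  # urgency term
    return density + 2.0 * speed                # weight fast motion heavily

# Two candidate waypoints: one near a fast-moving pair (e.g. a team
# rushing to a patient), one near a single stationary person. A planner
# should choose the second, lower-cost option.
people = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0]])
vels = np.array([[0.9, 0.0], [1.1, 0.1], [0.0, 0.0]])
for waypoint in (np.array([1.0, 0.5]), np.array([5.5, 4.5])):
    print(waypoint, "->", round(crowd_penalty(people, vels, waypoint), 2))
```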

"Our system was designed to deal with the worst case scenarios that can happen in the ED," said Taylor, who is part of Riek's Healthcare Robotics lab at the UC San Diego Department of Computer Science and Engineering.

The team trained the algorithm on videos from YouTube, mostly coming from documentaries and reality shows, such as "Trauma: Life in the ER" and "Boston EMS." The set of more than 700 videos is available for other research teams to train other algorithms and robots.

Researchers tested their algorithm in a simulation environment, and compared its performance to other state-of-the-art robotic navigation systems. The SafeDQN system generated the most efficient and safest paths in all cases.

Next steps include testing the system on a physical robot in a realistic environment. Riek and colleagues plan to partner with UC San Diego Health researchers who operate the campus' healthcare training and simulation center.

The algorithms could also be used outside of the emergency department, for example during search and rescue missions.

Credit: 
University of California - San Diego

USTC realizes coherent storage of light over one hour

image: Energy level diagram and experimental setup

Image: 
MA Yu et al.

Remote distribution of quantum states on the ground is limited by photon loss in optical fibers. One solution for remote quantum communication lies in quantum memories: photons are stored in a long-lived quantum memory (a "quantum flash drive"), and the quantum information is then transmitted by physically transporting that memory. Given the speeds of aircraft and high-speed trains, increasing the storage time of quantum memories to the order of hours is critical.
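
To put the hour-scale requirement in perspective, a quick back-of-the-envelope calculation (with illustrative carrier speeds, not figures from the study) shows how far a memory could travel within its storage lifetime:

```python
# Transport range of a portable quantum memory: distance = speed x time.
# Carrier speeds are illustrative assumptions, not figures from the paper.
storage_time_h = 1.0
for carrier, speed_kmh in [("high-speed train", 300), ("aircraft", 900)]:
    print(f"{carrier}: ~{speed_kmh * storage_time_h:.0f} km "
          f"per {storage_time_h:.0f} h of storage")
```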

In a new study published in Nature Communications, a research team led by Prof. LI Chuanfeng and Prof. ZHOU Zongquan from the University of Science and Technology of China (USTC) extended the storage time of optical memories to over one hour, breaking the record of one minute set by German researchers in 2013 and making a great stride toward practical quantum memories.

Optical storage in a zero-first-order-Zeeman (ZEFOZ) magnetic field has long been hindered by the complicated, poorly characterized energy-level structures of both the ground and excited states. Researchers have recently used spin Hamiltonians to predict these level structures, but the theoretical predictions can carry errors.

To overcome this problem, the USTC researchers adopted the spin-wave atomic frequency comb (AFC) protocol in a ZEFOZ field, the ZEFOZ-AFC method, and successfully implemented long-lived storage of light signals.

Dynamical decoupling (DD) was used to protect the spin coherence and extend the storage time. The coherent nature of the device was verified with a time-bin-like interference experiment after one hour of storage, which yielded a fidelity of 96.4%. The result demonstrates faithful storage of coherent light and the device's potential as a quantum memory.
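
The release does not spell out how the 96.4% figure was obtained, but for time-bin interference a standard convention relates the measured visibility V to the fidelity F; assuming that convention, the reported fidelity corresponds to a visibility just under 93%:

```latex
% Standard fidelity-visibility relation for time-bin interference
% (an assumed convention, not taken from the paper):
F = \frac{1 + V}{2}, \qquad V = 2F - 1 \approx 2(0.964) - 1 = 0.928
```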

This study expands the optical storage time from the order of minutes to the order of hours, meeting a basic storage-lifetime requirement for transportable quantum memories. By optimizing the storage efficiency and signal-to-noise ratio (SNR), researchers hope to transmit quantum information via classical carriers, forming a new kind of quantum channel.

Credit: 
University of Science and Technology of China