Tech

Beef consumption hurting river quality

image: A team of researchers has published a paper in Nature Sustainability showing that irrigation of cattle feed crops is the greatest consumer of river water in the Western United States, implicating beef and dairy consumption as the leading driver of water shortages and fish imperilment in the region.

Image: 
University of Delaware

Across the globe, humans are using freshwater resources faster than those resources can be naturally replenished. In the Western United States, for example, water extractions from the Colorado River have exceeded total river flow, causing rapid depletion of water storage reservoirs. In addition, as these water sources dry up, species of fish, plants and animals are also adversely impacted.

A new study published in Nature Sustainability shows that irrigation of cattle feed crops is the greatest consumer of river water in the Western United States, implicating beef and dairy consumption as the leading driver of water shortages and fish imperilment in the region.

Kyle Davis, assistant professor of Geography and Spatial Sciences and Plant and Soil Sciences, performed the crop water-use estimates for the study. He said the researchers wanted to understand what sectors within the U.S. economy and which crops contributed the most to unsustainable water use, as well as what regions of the United States were most vulnerable to that water loss.

"We looked at agriculture, industry, domestic use, and thermoelectric power generation and quantified what their water demand is on a monthly basis," said Davis. "Then we incorporated those estimates into a national hydrological model to understand how those human water uses within different watersheds in the United States would lead to reduced availability for people and aquatic species downstream."

Irrigation of cattle feed crops is the single largest consumptive user at both regional and national scales, accounting for 23 percent of all water consumption nationally, 32 percent in the Western U.S. and 55 percent in the Colorado River basin.

Alfalfa, for example, is a water-intensive crop that is planted on a large scale to support the beef industry in the Western U.S. In addition, a crop like corn, which on its own is relatively water-efficient, is grown on such a large scale to support the cattle industry that it still ends up requiring a great deal of unsustainable water use.

After identifying crops and sectors that were contributing to unsustainable water use, the researchers estimated the implications for aquatic species in the Western U.S.

"We made estimates of how many different types of fish species we would expect to have local or global extinctions as a result of this increased water use to support feed production for beef," said Davis.

Sixty fish species in the Western U.S. are at an elevated risk of global extinction due to flow depletion, and for 53 of them that risk is driven primarily by irrigation of cattle-feed crops.

Summer water flow depletion in particular is responsible for nearly 1,000 instances of increased risk of local extinction of fish species in watersheds in the Western U.S. Of these instances, an estimated 690 would have occurred due to irrigation of cattle-feed crops alone, even in the absence of any other water uses.

As far as solutions to the problem go, the research points to fallowing programs, specifically ones that are practiced in a couple of irrigation districts in California.

"The idea is that you pay farmers not to cultivate anything in their field for a particular growing season and the water that they would have applied for irrigation can then be repurposed for other uses," said Davis. "It can be diverted to increase urban water availability or used to increase environmental flows and the water available for natural systems. We've found that fallowing programs are really effective in terms of saving water or effectively repurposing it."

Davis also said that the intent of the study is not to vilify farmers or ranchers, as they are at the heart of the solution to the water challenges.

"We see huge opportunities for farmers and ranchers to be appropriately compensated for helping resolve our water problems and thereby enhancing their incomes and benefiting their communities," said Davis.

The paper also showed how different regions of the U.S. rely on western beef production to meet their demand, demonstrating that production - and the water resources that support it - often occurs in places geographically removed from where the beef is actually consumed. Dietary decisions in one region can therefore affect water resources in another.

In Delaware, consumers are contributing little to water scarcity in the West, as most of the beef in the eastern United States is not sourced from the western states. Beef consumers living in the Los Angeles, Portland, Denver and San Francisco areas, however, consume beef that relies on water resources in places where hydrological and ecological impacts are high.

"Consumers can be made more aware that by eating a lot of beef, they're potentially contributing to unsustainable water use in certain parts of the U.S. More importantly though, this information could be used by corporations and others with control over food supply chains to source beef and cattle feed from places where water is more abundant," said Davis.

Overall, Davis said that he doesn't want to give the impression that all beef is bad, noting that in many places, cattle are important for converting grasses that humans can't eat directly into edible material.

"Cattle play an important role in food security and nutrition, but their environmental impacts can be large. Ensuring that beef and cattle feed are produced in places where water resources are relatively abundant can help to achieve balance between satisfying our diets and protecting the environment," said Davis. "In all, it's good to have knowledge of where your food comes from and what natural resources it requires in order for each person to make more informed food choices."

Credit: 
University of Delaware

Implementing microbiome diagnostics in personalized medicine: Rise of pharmacomicrobiomics

image: The only peer-reviewed journal covering all trans-disciplinary OMICs-related areas, including data standards and sharing; applications for personalized medicine and public health practice; and social, legal, and ethics analysis.

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, March 2, 2020--A new Commentary identifies three actionable challenges for translating pharmacomicrobiomics to personalized medicine in 2020. Pharmacomicrobiomics is the study of how microbiome variations within and between individuals affect drug action, efficacy, and toxicity. This personalized medicine horizon scanning is featured in OMICS: A Journal of Integrative Biology, the peer-reviewed interdisciplinary journal published by Mary Ann Liebert, Inc., publishers. The full-text article is freely available on the OMICS: A Journal of Integrative Biology website until April 2, 2020.

Ramy Aziz and Marwa ElRakaiby, Cairo University (Egypt), Mariam Rizkallah, Leibniz Institute for Prevention Research and Epidemiology (Bremen, Germany), and Rama Saad, University of Illinois (Chicago) coauthored the article entitled "Translating Pharmacomicrobiomics: Three Actionable Challenges/Prospects in 2020." The authors ask the question, "Has the time not come for routine microbiome testing and establishing pharmacomicrobiomic guidelines, at least for some drugs, in 2020?"

The Commentary describes three actionable challenges to translate pharmacomicrobiomics from laboratory bench to patient bedside and personalized medicine innovation: (1) systematic high-throughput microbiome screening studies; (2) phage-enabled precision microbiome engineering/editing; and (3) pharmacomicrobiomic testing in the clinic.

Vural Özdemir, MD, PhD, Editor-in-Chief of OMICS: A Journal of Integrative Biology states: "Clinical pharmacomicrobiomics is an exciting and overdue health care innovation field, especially for pharmaceuticals with well-documented drug-microbiome interactions. Pharmacomicrobiomics has come a long way since its debut in 2010. Over the next decade, a growing number of microbiome diagnostics will likely be utilized to choose the right drug, at the right dose, for the right patient, toward personalized medicine. Pharmacomicrobiomics can include both interventional (e.g., microbiome editing) and diagnostic approaches (microbiome testing) for personalized/precision medicine. Authored by pioneers of pharmacomicrobiomics, the new OMICS horizon scanning article offers new insights on the road ahead for microbiome diagnostics in the clinic."

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

'Smart water' may aid oil recovery

image: Low-salinity brine injected into crude oil forms nanoscale droplets that help separate oil from rock in reservoirs, according to Rice University engineers. The black ring around the droplets, seen in a cryogenic electron microscope image, is asphaltene.

Image: 
Wenhua Guo/Rice University

HOUSTON - (March 2, 2020) - Now there's evidence that oil and water do mix. Sort of.

Scientists at Rice University's Brown School of Engineering show that microscopic saltwater droplets emulsify crude oil when each has the right composition. Understanding how they combine is important to enhanced oil recovery.

Rice chemical and biological engineer Sibani Lisa Biswal and her colleagues went to great lengths to characterize the three elements most important to oil recovery: rock, water and the crude itself.

They confirmed wells are more productive when water with the right salt concentration is carefully matched to both the oil and the rock, whether a carbonate or sandstone formation. If the low-salinity brine can create emulsion droplets in a specific crude, the brine appears to also alter the wettability of the rock. The wettability determines how easily the rock will release oil.

The team's work appears in the open-access Nature journal Scientific Reports.

Co-lead author Jin Song said the first hints of seawater's effect came from wells in the North Sea. "Oil companies found that when they injected seawater, which has relatively low salinity, oil recovery was surprisingly good," he said.

Even with that understanding, he said research has been limited. "Usually in the oil and gas industry, when they're looking into low-salinity water, they tend to focus on the effect of the brine and ignore the effect of the oil," said Song, who earned his Ph.D. at Rice this year and is now a researcher at Shell.

"So people haven't been able to find a good indicator or any correlation between the effectiveness of low-salinity water and experimental conditions," he said. "Our work is the first to identify some of the properties of the oil that indicate how effective this technique can be in a specific field.

The team tested how injected brine is dispersed and how it affects oils' interfacial tension and electrostatic interactions with rock.

"How to characterize wettability accurately is a challenge," Biswal said. "Oftentimes, we assume that reservoir rock underground are under a mixed-wet state, with regions that are oil-wet and regions that are water-wet.

"If you can alter your oil-wet sites to water-wet sites, then there's less of a driving force to hold the oil to the mineral surface," she said. "In low-salinity water injection, the brine is able to displace the trapped oil. As you change from oil-wet to water-wet, the oil is released from the mineral surface."

The researchers tested two brines, one high-salinity and one with a quarter of the salinity of seawater, on Indiana limestone cores against six crude oils from the Gulf of Mexico, Southeast Asia and the Middle East and a seventh oil with added asphaltene. They found that high-salinity brine clearly inhibited water droplets from emulsifying in crude, unlike the low-salinity samples.

To better understand the thermodynamic nature of the emulsion, Rice research scientist Wenhua Guo took cryogenic electron microscope images of about 100 mixtures of oil and water. Because oil is opaque, the samples had to be placed in very thin containers, and then frozen with liquid nitrogen to keep them stable for imaging.

"This is the first time anyone has seen these water droplets inside crude oil," Biswal said. "They spontaneously arise inside the crude oil when you expose it to a low-salinity brine."

The images revealed droplets varying in size from 70 to just over 700 nanometers. Biswal said chemical surfactants -- aka soap -- are also good at loosening oil in a reservoir, but are prohibitively expensive. "You can change the salt concentration to modify the composition of the brine and get the same effect as adding a detergent," she said. "So it's basically a low-cost technique trying to achieve the same goal as a detergent."

Credit: 
Rice University

To bee, or not to bee, a question for almond growers

image: Researchers isolated trees to assess effects of bee activity.

Image: 
Drs. Saez and Negri

Pollination by bees is vital even when crops are assumed to be pollinator independent. That's according to a study co-authored by Ethel Villalobos, a researcher in the University of Hawaii at Manoa's College of Tropical Agriculture and Human Resources Department of Plant and Environmental Protection Sciences and lead of the UH Honeybee Project.

In a paper published in the February issue of the Nature journal Scientific Reports, Villalobos collaborated with a team composed mainly of Argentinian researchers and led by Agustin Saez and Pedro Negri, co-founders of a start-up company called BeeFlow. Their series of field experiments examined the true "independence" of a new self-fertilizing almond variety called 'Independence.'

Eighty percent of the world's almonds are produced in California. The crop requires the pollination services of two million colonies of honey bees during the flowering season, which growers rent from beekeepers. Pollinating almonds requires the equivalent of moving half of all managed bees in the U.S. to California for a few weeks.

Besides the cost and effort, the pesticides used on almonds may be dangerous to bees. Thus there is intensive research going into breeding pollinator-independent almonds, which could cut the bees out of the equation entirely. However, this study shows the solution is not that simple.

'Independence' almonds still performed better when bees assisted in pollination, showing a 60 percent increase in fruit set and a 20 percent increase in kernel yield. Even with the cost of renting colonies, farmers would still realize 10 percent more profit when the almonds were pollinated by bees.
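
As a back-of-the-envelope illustration of how those figures combine, the short Python sketch below compares per-acre profit with and without rented colonies. Only the 20 percent kernel-yield increase comes from the study; the baseline yield, almond price and hive-rental fee are hypothetical placeholders, so the output merely shows the shape of the calculation rather than the paper's own 10 percent estimate.

```python
# Back-of-the-envelope comparison of 'Independence' almond profit with and
# without rented honey bee colonies. The baseline yield, price and hive
# rental fee are hypothetical placeholders, NOT values from the study;
# only the 20% kernel-yield increase is taken from the reported results.

BASELINE_YIELD_LB_PER_ACRE = 2000   # hypothetical kernel yield without bees
PRICE_PER_LB = 2.50                 # hypothetical farm-gate price (USD)
HIVE_RENTAL_PER_ACRE = 400          # hypothetical pollination fee (USD)
YIELD_BOOST_WITH_BEES = 0.20        # 20% kernel-yield increase reported in the study

revenue_without_bees = BASELINE_YIELD_LB_PER_ACRE * PRICE_PER_LB
revenue_with_bees = BASELINE_YIELD_LB_PER_ACRE * (1 + YIELD_BOOST_WITH_BEES) * PRICE_PER_LB

profit_without_bees = revenue_without_bees               # no rental cost
profit_with_bees = revenue_with_bees - HIVE_RENTAL_PER_ACRE

gain = (profit_with_bees - profit_without_bees) / profit_without_bees
print(f"Profit without bees: ${profit_without_bees:,.0f}/acre")
print(f"Profit with bees:    ${profit_with_bees:,.0f}/acre")
print(f"Relative gain:       {gain:.0%}")
```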

"This article highlights the danger of misinformation--the variety is marketed as 'self-fertilizing,' but growers don't know they will get less yield without bees," Villalobos explained. "This could also generate conflicts between almond growers. If one grower's self-fertile almonds are planted near another's bee-dependent varieties, the bees will visit both, and growers who rent bees will be paying for services benefiting others and getting less for their own crops."

Another potential problem is that if allegedly pollinator-independent varieties continue to replace pollinator-dependent varieties, many beekeepers may lose one of their key annual sources of income, threatening their financial well-being. This could erode one of the most important incentives for maintaining a beekeeping industry in the U.S. and narrow the work available to beekeepers.

Most important, Villalobos concluded, is that bees have played an important role in keeping agricultural pesticide use in check, and attempting to eliminate them removes this check.

"Since they are recognized and appreciated by most people, honey bees have helped raise awareness of how farm health is related to our own health," she said. "We tend to lose sight of the benefits of protecting our natural world. Bees have helped us face that there are choices to be made."

Credit: 
University of Hawaii at Manoa

Tool for identifying frail patients to reduce surgical risk works in health system setting

PITTSBURGH, March 2, 2020 - Frail patients in private-sector, multi-hospital health systems may benefit from a tool that can quickly predict their risk for poor outcomes following surgery, including postoperative mortality, readmission and extended hospital stays.

New research from UPMC -- published recently in the Annals of Surgery -- demonstrated that this prospective assessment index, which was previously validated in the VA Health System, was effectively implemented at scale in UPMC's diverse environment, requiring only 30 seconds per patient to accurately stratify patients for frailty.

Initially developed by the VA eight years ago, the so-called Risk Analysis Index (RAI) has been shown to accurately classify a patient's frailty and to predict their risk for adverse health consequences following surgery and other procedures. Until now, however, questions remained about the feasibility of implementing the index within complex, multi-hospital systems and the validity of the scores in a non-veteran population.

"Previous studies have shown that frail patients are at higher risk for poor outcomes after surgery, and our recent research shows that even procedures that physicians typically consider 'low risk' result in a higher rate of adverse events for frail patients," said Daniel Hall, M.D., corresponding author of the study and associate professor of surgery, University of Pittsburgh School of Medicine. "We want to ensure that all patients have the best outcomes possible based on their unique health factors. Validating the RAI in UPMC's large-scale clinical environment is a step toward that goal and provides a roadmap for other health systems."

The RAI uses a variety of clinical and patient-reported factors, such as age, gender, appetite, chronic health conditions and daily activity levels to generate a score representing each patient's level of frailty. Hall and his team implemented the RAI as a patient-facing questionnaire, integrating the tool into the electronic health record at UPMC surgical practices in Pittsburgh.
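
As a rough, purely illustrative sketch of how a short patient-facing questionnaire can be collapsed into a single score and frailty category, the Python snippet below uses the kinds of factors named above. The weights and cutoffs are invented for the example and are not the published RAI coefficients.

```python
# Minimal sketch of turning questionnaire answers into a single frailty score.
# The factor names mirror those mentioned in the article (age, sex, appetite,
# chronic conditions, daily activity), but the weights and cutoffs below are
# invented for illustration -- they are NOT the published RAI coefficients.

def toy_frailty_score(age, male, poor_appetite, n_chronic_conditions, needs_help_daily):
    score = 0
    score += max(0, age - 50) // 5          # hypothetical: one point per 5 years over 50
    score += 2 if male else 0
    score += 4 if poor_appetite else 0
    score += 3 * min(n_chronic_conditions, 4)
    score += 6 if needs_help_daily else 0
    return score

def toy_category(score):
    # Hypothetical cutoffs, chosen only to show the stratification idea.
    if score >= 20:
        return "very frail"
    if score >= 10:
        return "frail"
    return "normal"

s = toy_frailty_score(age=78, male=True, poor_appetite=True,
                      n_chronic_conditions=3, needs_help_daily=True)
print(s, toy_category(s))   # e.g. 26 -> "very frail"
```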

From July 1 to Dec. 31, 2016, 42,738 RAI assessments were completed for 36,261 patients across five surgical clinics -- or 77% of eligible patients within the first six months. Among this sample and compared to patients with "normal" RAI scores, patients considered "very frail" suffered five times the rate of death, three times the rate of readmission and 10 times the rate of extended hospital stays.

"This represents the largest reported cohort of patients with prospectively measured frailty within a clinical setting," said Hall, who also is medical director of the Wolff Center at UPMC, the health system's multi-disciplinary quality improvement center.

More importantly, he noted, his team demonstrated that the RAI has value as a broad screening instrument -- regardless of the patient's gender -- because it does not require the dedicated expertise of specific providers, disruption to the clinical workflow or additional clinic time beyond the initial surgical visit.

"With the silver tsunami of aging boomers, accurate and rapid risk stratification will be increasingly essential to ensure that surgical treatment is offered to the right patients and consistent with older patients' goals and values," said Hall. "Clinicians worried about the burden of frailty assessment used to ask, 'How can I afford to measure frailty?' However, these data demand a different question: 'How can I afford NOT to measure frailty?"

Credit: 
University of Pittsburgh

Study maps landmarks of peripheral artery disease to guide treatment development

image: Illinois researchers used a suite of imaging methods to create the first holistic picture of peripheral artery disease recovery. Pictured: postdoctoral researcher Jamila Hedhli and professor Wawrzyniec Dobrucki.

Image: 
Photo by Fred Zwicky

CHAMPAIGN, Ill. -- Novel biomedical advances that show promise in the lab often fall short in clinical trials. For researchers studying peripheral artery disease, this is made more difficult by a lack of standardized metrics for what recovery looks like. A new study from University of Illinois at Urbana-Champaign researchers identifies major landmarks of PAD recovery, creating signposts for researchers seeking to understand the disease and develop treatments.

"Having these landmarks could aid in more optimal approaches to treatment, identifying what kind of treatment could work best for an individual patient and when it would be most effective," said Illinois bioengineering professor Wawrzyniec L. Dobrucki, who led the study. He also is affiliated with the Carle Illinois College of Medicine.

PAD is a narrowing of the arteries in the limbs, most commonly the legs, so they don't receive enough blood flow. It often isn't diagnosed until walking becomes painful, when the disease is already fairly advanced. Diabetes, obesity, smoking and age increase the risk for PAD and can mask the symptoms, making PAD difficult to diagnose. Once diagnosed, there is no standard treatment, and doctors may struggle to find the right approach for a patient or to tell whether a patient is improving, Dobrucki said.

The researchers used multiple imaging methods to create a holistic picture of the changes in muscle tissue, blood vessels and gene expression through four stages of recovery after mice had the arteries in their legs surgically narrowed to mimic the narrowing found in PAD patients. They published their results in the journal Theranostics.

"There are a lot of people who study PAD, so there are all these potential new therapies, but we don't see them in the clinics," said postdoctoral researcher Jamila Hedhli, the first author of the paper. "So the main goal of this paper is utilizing these landmarks to standardize our practice as researchers. How can we see if the benefit of certain therapies is really comparable if we are not measuring the same thing?"

Dobrucki's group collaborated with bioengineering professor Michael Insana, chemistry professor Jefferson Chan and senior research scientist Iwona Dobrucka, the director of the Molecular Imaging Laboratory in the Beckman Institute for Advanced Science and Technology, to monitor the mice with a suite of imaging technologies that could be found in hospitals or clinics, including ultrasound, laser speckle contrast, photoacoustics, PET and more. Each method documented a different aspect of the animals' response to the artery narrowing: anatomy, metabolism, muscle function, the formation of new blood vessels, oxygen perfusion and genetic activity.

By serially imaging the mice over time, the researchers identified key features and events over four phases of recovery.

"Each imaging method gives us a different aspect of the recovery of PAD that the other tools will not. So instead of looking at only one thing, now we're looking at a whole spectrum of the recovery," Hedhli said. "By looking at these landmarks, we're allowing scientists to use them as a tool to say 'At this point, I should see this happening, and if we add this kind of therapy, there should be an enhancement in recovery.'"

Though mice are an imperfect model for human PAD, each of the imaging platforms the researchers used can translate to human PAD patients, as well as to other diseases, Dobrucki said. Next, the researchers plan to map the landmarks of PAD in larger animals often used in preclinical studies, such as pigs, and ultimately in human patients.

"We are very interested in improving diagnosis and treatment," Hedhli said. "Many people are working to develop early diagnosis and treatment options for patients. Having standard landmarks for researchers to refer to can facilitate all of these findings, move them forward to clinic and, we hope, result in successful clinical trials."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Researchers identify protein critical for wound healing after spinal cord injury

Plexin-B2, an axon guidance protein in the central nervous system (CNS), plays an important role in wound healing and neural repair following spinal cord injury (SCI), according to research conducted at the Icahn School of Medicine at Mount Sinai and published today in Nature Neuroscience. The study's findings could aid the development of therapies that target axon guidance pathways for more effective treatment of SCI patients.

Tissue repair after SCI requires the mobilization of immune and glial cells to form a protective barrier that seals the wound, facilitates debris clearing and contains inflammation. Building this barrier involves a process called corralling wherein microglia (immune cells in the CNS) and macrophages (immune cells that originate from blood) form a barrier around the lesion that separates healthy and necrotic tissue. In this study, researchers found that this corralling begins early in the healing process and requires Plexin-B2, a protein that facilitates the movement of immune cells by steering them away from colliding cells.

Researchers found that the deletion of Plexin-B2 in microglia and macrophages impaired corralling, which led to tissue damage, inflammatory spillover, and hindered the regeneration of axons (slender part of a nerve cell where impulses are conducted).

"The role of microglia and macrophages in the spatial organization of glial cells around the injury site via an axon guidance receptor is quite unexpected" said lead investigator Hongyan Jenny Zou, MD, PhD, Professor of Neurosurgery and Neuroscience at the Icahn School of Medicine at Mount Sinai.

Tissue repair in the CNS relies on a coordinated response from diverse cell types in overlapping phases. This complexity makes it difficult to distinguish specific roles of glial cell populations. Previously, astrocytes (supporting glial cells) were presumed to be the main driver for corralling. However, this study identified the critical contribution of injury-activated microglia/macrophages, as well as the role of Plexin-B2 in corralling. Understanding the signaling pathways and interactions of glial cells with each other and the injury environment is fundamental to improving neural repair after a traumatic brain or spinal cord injury.

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Child access prevention laws spare gun deaths in children

U.S. states with laws regulating the storage of firearms in households with minors had a 13 percent reduction in firearm fatalities in children under 15 compared to states with no such regulations, finds a study from Boston Children's Hospital. States with the most restrictive laws had the greatest reduction: 59 percent reduction as compared to states with no laws. Results of this analysis, spanning 26 years, were published in a paper on March 2, 2020 in JAMA Pediatrics.

Child access prevention (CAP) laws are on the books in half of U.S. states. They are meant to protect children from accessing firearms by holding a parent or guardian responsible for the actions or potential actions a child takes with a firearm.

There are many types of CAP laws ranging from simple recklessness laws to at least three levels of negligence laws. In this analysis, the researchers ranked the laws from least to most restrictive based upon the obligations imposed on the gun owner/parent:

Recklessness laws criminalize providing firearms to children.

The least restrictive negligence law, the "Child Uses" law, holds the parent responsible if a child accesses and uses an improperly stored firearm.

The next most stringent, the "Child Accesses" law, applies where a child accesses an improperly stored firearm but does not use it.

The most stringent, the "Child Could Access" law, applies if a child could potentially access an improperly stored firearm.

With all four categories of laws, the gun owner, parent, or guardian is subject to legal consequences should a child be provided with, use, gain access to, or even potentially be able to access a firearm.

4,000 pediatric deaths could have been prevented

The researchers reviewed 13,967 firearm deaths in children ages 0 to 14 from 1991 to 2017, a period spanning the enactment of most CAP laws. Researchers then paired that information with the type of CAP law in place in each U.S. state during that time.

"Looking at all these laws, the negligence laws seem to have the best effect," says senior investigator Eric Fleegler, MD, MPH, a pediatric emergency physician and health services researcher at Boston Children's Hospital. "And as the negligence laws get more stringent in terms of holding a gun owner legally responsible for a child actually accessing a gun, or even potentially accessing a gun, the death rates in children decrease."

The authors estimate that 4,000 deaths -- or 29 percent of all pediatric firearm deaths in children between ages 0 and 14 -- could have been prevented if all 50 states had passed the strongest type of negligence law during the period covered by the analysis.

"The message is clear," he adds. "Had all of the states had some type of negligence law, we would have expected thousands of children not to have died."

Nearly 30 years of firearm fatality history

The first CAP law was passed in Florida in 1989. Most of the CAP laws now on the books were enacted over the next decade, largely in the early 1990s. Currently, 25 states have CAP laws, including nine with recklessness laws, the lowest level of legislation.

In 2013, Dr. Fleegler and colleagues published a paper in JAMA Internal Medicine showing that states with the highest number of firearm laws have the lowest rate of firearm fatalities overall and for suicides and homicides individually.

In this study, they wanted to dig deeper into the role CAP laws have on reducing firearm fatalities in the vulnerable pediatric population. They reviewed a national database of CAP laws, the State Firearm Laws Database, an online resource of all firearm-related laws by state and by year. They also assessed whether the pediatric deaths in the study time period were a result of homicide, suicide, or were unintentional.

Recklessness laws did not reduce pediatric firearm fatality rates.

However, the most stringent "Child Could Access" laws were associated with a 29% reduction in all-intent firearm deaths and a 59% reduction in unintentional firearm deaths.

"The reduction in firearm fatalities is greater in those states with stronger negligence laws compared with states with weaker laws," says first author Hooman Azad, a second-year medical student at Northwestern University. "While it does not absolutely mean causation, there are very strong associations between the type of CAP law and the number of firearm fatalities in children."

Largest study to show CAP laws reduce homicides

Of 13,967 pediatric deaths from firearms during the study period, 56 percent were homicides, 22 percent were suicides, 19 percent were unintentional, and 3 percent were due to legal intervention or unknown intent.

The authors determined that, along with the overall 13 percent reduction in firearm fatalities regardless of intent (homicide, suicide, unintentional), negligence-specific CAP laws reduced deaths from firearm homicide by 15 percent, from firearm suicide by 12 percent, and from unintentional shootings by 13 percent among children 0 to 14 years old.

Previous studies have not been able to establish a link between CAP laws and reductions in homicide, making this the first study to show a strong association. "This finding is critically important, as 56 percent of firearm-related deaths in children 0 to 14 years old were due to homicide during the study period," says Fleegler.

While this study focused exclusively on fatalities, the authors note that for every fatality there are somewhere on the order of three to five injuries.

"So hopefully we are not just seeing reductions in fatalities; we are seeing reductions in injury," adds Fleegler.

Safe gun storage is the guiding principle of CAP laws. Proper safe storage of a firearm includes securing the gun unloaded in a locked container with the ammunition locked separately in a different container.

"We know proper storage happens in a very small percentage of households with a firearm," says Fleegler, "but this analysis shows that the passage of negligence CAP laws has the potential to reduce firearm fatalities in children."

Credit: 
Boston Children's Hospital

New tools show a way forward for large-scale storage of renewable energy

A technique based on the principles of MRI has allowed researchers to observe not only how next-generation batteries for large-scale energy storage work, but also how they fail, which will assist in the development of strategies to extend battery lifetimes in support of the transition to a zero-carbon future.

The new tools, developed by researchers at the University of Cambridge, will help scientists design more efficient and safer battery systems for grid-scale energy storage. In addition, the technique may be applied to other types of batteries and electrochemical cells to untangle the complex reaction mechanisms that occur in these systems, and to detect and diagnose faults.

The researchers tested their techniques on organic redox flow batteries, promising candidates to store enough renewable energy to power towns and cities, but which degrade too quickly for commercial applications. The researchers found that by charging the batteries at a lower voltage, they were able to significantly slow the rate of degradation, extending the batteries' lifespan. The results are reported in the journal Nature.

Batteries are a vital piece of the transition away from fossil fuel-based sources of energy. Without batteries capable of grid-scale storage, it will be impossible to power the economy using solely renewable energy. And lithium-ion batteries, while suitable for consumer electronics, don't easily scale up to a sufficient size to store enough energy to power an entire city, for instance. Flammable materials in lithium-ion batteries also pose potential safety hazards. The bigger the battery, the more potential damage it could cause if it catches fire.

Redox flow batteries are one possible solution to this technological puzzle. They consist of two tanks of electrolyte liquid, one positive and one negative, and can be scaled up just by increasing the size of the tanks, making them highly suitable for renewable energy storage. These room-sized, or even building-sized, non-flammable batteries may play a key role in future green energy grids.

Several companies are currently developing redox flow batteries for commercial applications, most of which use vanadium as the electrolyte. However, vanadium is expensive and toxic, so battery researchers are working to develop redox flow batteries based on organic materials, which are cheaper and more sustainable. The trouble is that these organic molecules tend to degrade quickly.

"Since the organic molecules tend to break down quickly, it means that most batteries using them as electrolytes won't last very long, making them unsuitable for commercial applications," said Dr Evan Wenbo Zhao from Cambridge's Department of Chemistry, and the paper's first author. "While we've known this for a while, what we haven't always understood is why this is happening."

Now, Zhao and his colleagues in Professor Clare Grey's research group in Cambridge, along with collaborators from the UK, Sweden and Spain, have developed two new techniques to peer inside organic redox flow batteries in order to understand why the electrolyte breaks down and improve their performance.

Using 'real time' nuclear magnetic resonance (NMR) studies, a sort of functional 'MRI for batteries', and methods developed by Professor Grey's group, the researchers were able to read resonance signals from the organic molecules, both in their original states and as they degraded into other molecules. These 'operando' NMR studies of the degradation and self-discharge in redox flow batteries provide insights into the internal underlying mechanisms of the reactions, such as radical formation and electron transfers between the different redox-active species in the solutions.

"There are few in situ mechanistic studies of organic redox flow batteries, systems that are currently limited by degradation issues," said Grey. "We need to understand both how these systems function and also how they fail if we are going to make progress in this field."

The researchers found that under certain conditions, the organic molecules tended to degrade more quickly. "If we change the charge conditions by charging at a lower voltage, the electrolyte lasts longer," said Zhao. "We can also change the structure of the organic molecules so that they degrade more slowly. We now understand better why the charge conditions and molecular structures matter."

The researchers now want to apply their NMR setup on other types of organic redox flow batteries, as well as on other types of next-generation batteries, such as lithium-air batteries.

"We are excited by the wide range of potential applications of this method to monitor a variety of electrochemical systems while they are being operated," said Grey.

For example, the NMR technique will be used to develop a portable 'health check' device that can diagnose the condition of a battery.

"Using such a device, it could be possible to check the condition of the electrolyte in a functioning organic redox flow battery and replace it if necessary," said Zhao. "Since the electrolyte for these batteries is inexpensive and non-toxic, this would be a relatively straightforward process, prolonging the life of these batteries."

Credit: 
University of Cambridge

Evaluating association of state firearm laws to prevent child access with pediatric firearm fatalities

What The Study Did: This research letter looked at two categories of firearm laws to prevent child access and their association with pediatric firearm fatalities throughout the United States from 1991 to 2016.

Author: Eric W. Fleegler, M.D., M.P.H., of Boston Children's Hospital, is the corresponding author.

(doi:10.1001/jamapediatrics.2019.6227)

Editor's Note: The article includes conflict of interest disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Sinking sea mountains make and muffle earthquakes

image: Tūranganui Knoll is an underwater mountain (seamount) off the coast of New Zealand that was the site of an International Ocean Discovery Program drilling expedition. The seamount will one day collide with the Hikurangi subduction zone, leading to conditions that both generate and dampen earthquakes.

Image: 
Andrew Gase/National Institute of Water and Atmospheric Research

Subduction zones -- places where one tectonic plate dives beneath another -- are where the world's largest and most damaging earthquakes occur. A new study has found that when underwater mountains -- also known as seamounts -- are pulled into subduction zones, not only do they set the stage for these powerful quakes, but also create conditions that end up dampening them.

The findings mean that scientists should more carefully monitor particular areas around a subducting seamount, researchers said. The practice could help scientists better understand and predict where future earthquakes are most likely to occur.

"The Earth ahead of the subducting seamount becomes brittle, favoring powerful earthquakes while the material behind it remains soft and weak, allowing stress to be released more gently," said co-author Demian Saffer, director of the University of Texas Institute for Geophysics (UTIG), a research unit of The University of Texas at Austin Jackson School of Geosciences.

The study was published on March 2 in Nature Geoscience and was led by Tian Sun, who is currently a research scientist at the Geological Survey of Canada. Other co-authors include Susan Ellis, a scientist at the New Zealand research institute GNS Science. Saffer supervised the project and was Sun's postdoctoral advisor at Penn State when they began the study.

The researchers used a computer model to simulate what happens when seamounts enter ocean trenches created by subduction zones. According to the model, when a seamount sinks into a trench, the ground ahead of it becomes brittle, as its slow advance squeezes out water and compacts the Earth. But in its wake, the seamount leaves a trail of softer wet sediment. The hard, brittle rock can be a source for powerful earthquakes, as forces generated by the subducting plate build up in it - but the weakened, wet material behind the seamount creates an opposite, dampening effect on these quakes and tremors.

Although seamounts are found all over the ocean floor, the extraordinary depths at which subduction occurs means that studying or imaging a subducting seamount is extremely difficult. This is why until now, scientists were not sure whether seamounts could affect the style and magnitude of subduction zone earthquakes.

The current research tackled the problem by creating a realistic computer simulation of a subducting seamount and measuring the effects on the surrounding rock and sediment, including the complex interactions between stresses in the Earth and fluid pressure in the surrounding material. Getting realistic data for the model involved conducting experiments on rock samples collected from subduction zones by scientific ocean drilling offshore Japan.

The scientists said the model's results took them completely by surprise. They had expected water pressure and stress to break up material at the head of the seamount and thus weaken the rocks, not strengthen them.

"The seamount creates a feedback loop in the way fluids get squeezed out and the mechanical response of the rock to changes fluid pressure," said Ellis, who co-developed the numerical code at the heart of the study.

The scientists are satisfied their model is robust because the earthquake behavior it predicts consistently matches the behavior of real earthquakes.

While the weakened rock left in the wake of seamounts may dampen large earthquakes, the researchers believe that it could be an important factor in a type of earthquake known as a slow slip event. These slow-motion quakes are unique because they can take days, weeks and even months to unfold.

Laura Wallace, a research scientist at UTIG and GNS Science, who was the first to document New Zealand slow slip events, said that the research was a demonstration of how geological structures in the Earth's crust, such as seamounts, could influence a whole spectrum of seismic activity.

"The predictions from the model agree very nicely with what we are seeing in New Zealand in terms of where small earthquakes and tremors are happening relative to the seamount," said Wallace, who was not part of the current study.

Sun believes that their investigations have helped address a knowledge gap about seamounts, but that research will benefit from more measurements.

"We still need high resolution geophysical imaging and offshore earthquake monitoring to better understand patterns of seismic activity," said Sun.

Credit: 
University of Texas at Austin

A current map for improving circuit design

image: The flow of an electric current between two electrodes on a magnetic thin film is imaged by measuring the strip domains.

Image: 
© 2020 KAUST

A practical method for mapping the flow of a current in devices with complex geometries that could be used to optimize circuit design has been developed at KAUST.

A traditional high-school physics experiment is to place iron filings on a piece of paper above a permanent magnet. The small metal particles will arrange themselves into a series of lines connecting the two ends, or poles, of the magnet. This enables students to visualize the otherwise invisible field lines that mediate magnetic attraction and repulsion.

Achieving this same type of map for the flow of an electrical current is particularly important in tiny electronic components. These components can have odd geometric arrangements, a result of the need for each element of the device to be packed into as small a space as possible. This means the current does not necessarily flow in a homogeneous way.

Senfu Zhang and Xixiang Zhang, working with colleagues from KAUST, China and the United States, have now devised a method for visualizing the magnitude and direction of current flow through a magnetic thin film.

Several experimental methods have previously been developed to map current density in electronic materials. But these only do so indirectly, measuring stray fields rather than the currents themselves. Furthermore, they can be very expensive, or work only at very low temperatures. Computer simulations offer a cheaper alternative; however, they tend to oversimplify actual devices, ignoring nonuniformities or cracks in the material.

Instead, Zhang's team directly mapped the nonuniform electrical current distribution in layered platinum, cobalt and tantalum using the existence of so-called skyrmions. These "magnetic bubbles" can be imaged by a technique known as magneto-optical Kerr microscopy, which measures changes in the intensity and polarization of light reflected from a surface as a result of magnetic disturbances.

The skyrmions appear as round bubbles in the microscope images. "We found that when we passed a current through the material, only the front end of the bubbles moved forward, forming narrow, parallel strip domains," explains Senfu Zhang. The researchers showed that it was simple to extract the current flow from the growth direction of these patterns.
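
The article does not spell out the extraction step, but conceptually it amounts to turning measured growth vectors of the strip domains into a map of current direction and magnitude. The Python sketch below shows that bookkeeping under the simplifying assumptions that the local current runs parallel to the observed growth direction and that its magnitude scales with the growth speed; the team's actual analysis may include calibration factors and angular corrections not modelled here, and the sample data are invented.

```python
import numpy as np

# Illustrative bookkeeping only: turn measured strip-domain growth vectors
# (e.g. from successive Kerr microscopy frames) into a coarse current map.
# The assumption that local current is parallel to the growth direction,
# with magnitude proportional to growth speed, is a simplification for this
# sketch; the actual extraction in the study may involve calibration and
# angular corrections not modelled here.

# Each row is (x, y, dx, dy): position of a domain front and its displacement
# between two frames, in arbitrary units. These sample values are made up.
growth = np.array([
    [10.0, 10.0, 1.0, 0.2],
    [10.0, 20.0, 0.9, 0.4],
    [20.0, 10.0, 0.7, 0.7],
])

positions = growth[:, :2]
displacements = growth[:, 2:]

speeds = np.linalg.norm(displacements, axis=1)
directions = displacements / speeds[:, None]       # unit vectors along growth

CALIBRATION = 1.0                                   # placeholder scale factor
current_vectors = CALIBRATION * speeds[:, None] * directions

for (x, y), (jx, jy) in zip(positions, current_vectors):
    angle = np.degrees(np.arctan2(jy, jx))
    print(f"({x:4.1f}, {y:4.1f}): |J| ~ {np.hypot(jx, jy):.2f}, direction {angle:5.1f} deg")
```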

"This approach is not suitable for use in an actual device because it requires the deposition of Pt/Co/Ta on the device, but it is useful in the design phase," says Zhang. "Knowing the direction and magnitude of the electric current in each part of the device helps improve the design and performance."

Credit: 
King Abdullah University of Science & Technology (KAUST)

New 'organ-on-a-chip' system holds promise for drug toxicity screening

Researchers in the US have developed a new multi-organ-on-a-chip to test how new drugs affect the human body's vital organs.

Developing new drugs can come at enormous financial cost, which can be wasted if the drug must be withdrawn due to unforeseen side effects.

The research team believes their new system - containing representations of liver, heart, vasculature, lungs, testis, and either colon or brain tissues - could help avoid such cases.

In their study, published in the journal Biofabrication, they demonstrated its effectiveness by using it to screen a selection of drugs that were recalled from the market by the US Food and Drug Administration (FDA).

Professor Anthony Atala, from the Wake Forest Institute for Regenerative Medicine (WFIRM), Winston-Salem, US, is the study's senior author. He said: "The development of new drugs can take a decade and a half, from preclinical studies to reaching the market. Around one in 5,000 drug candidates successfully completes this journey. Additionally, the cost for bringing a single drug to market, with all direct and indirect expenses accounted for, can climb as high as $2.6 billion.

"Unfortunately, the human and financial costs can be even more dramatic if a drug is later found to be harmful and must be withdrawn. For example, Merck & Co. paid $4.85 billion to settle 27,000 cases and another $830 million dollars to settle shareholder lawsuits after one of its drugs caused adverse effects. The human costs of adverse drug reactions, meanwhile, manifest themselves as a leading cause of hospitalization in the United States, with up to 5.3 per cent of hospitalizations related to adverse drug reactions. The rate of fatal adverse drug reactions is difficult to determine, and it is probably underreported. As both adverse human effects and drug development costs increase, access to more reliable and affordable drug screening tools is increasingly critical."

Co-author Dr Aleksander Skardal, formerly of WFIRM and now at Ohio State University, said: "This increasing need to comprehensively screen new drugs for adverse effects is the driving force behind our research. In this context, we demonstrated our platform by screening a panel of FDA-recalled drugs for toxic effects.

"To model the integrated nature of the human body, we designed an integrated platform, or chip, supporting six tissue types under a common recirculating media. When combinations of organoids are combined into a single platform, more complex integrated responses can be seen, where the functionality of one organoid influenced the response of another."

To test their system, the researchers used it to screen six drugs that had been recalled due to adverse effects in humans: pergolide, rofecoxib, valdecoxib, bromfenac, tienilic acid and troglitazone. For many of these compounds, the 3D organoid system was able to demonstrate toxicity.

Professor Atala said: "These compounds were tested by the pharmaceutical industry and toxicity was not noted using standard 2D cell culture systems, rodent models, or during human Phase I, II and III clinical trials. However, after the drugs were released to market and administered to larger numbers of patients, toxicity was noted, leading the FDA to withdraw regulatory approval. In almost all these compounds, the 3D organoid system was able to readily demonstrate toxicity at a human-relevant dose."

As a control, they also tested the system with commonly used drugs still on the market - aspirin, ibuprofen, ascorbic acid, loratadine, and quercetin. These compounds showed no toxicity, and the organoids exposed to them remained viable at clinically relevant doses.

Dr Skardal said: "Further study will be needed. But based on these results our system, and others like it, using 3D human-based tissue models with nuanced and complex response capabilities, has a great potential for influencing how in-vitro drug and toxicology screening and disease modelling will be performed in the near future."

Credit: 
IOP Publishing

The magnet that didn't exist

video: A children's sliding puzzle can be used to explain Nagaoka ferromagnetism. The puzzle on the left shows that every shuffle changes the spin configuration. The puzzle on the right shows all the spins aligned, which lowers the energy of the system.

Image: 
Scixel de Groot for QuTech

In 1966, Japanese physicist Yosuke Nagaoka predicted the existence of a rather striking phenomenon: Nagaoka's ferromagnetism. His rigorous theory explains how materials can become magnetic, with one caveat: the specific conditions he described do not arise naturally in any material. Researchers from QuTech, a collaboration between TU Delft and TNO, have now observed experimental signatures of Nagaoka ferromagnetism using an engineered quantum system. The results were published today in Nature.

Familiar magnets such as the ones on your refrigerator are an everyday example of a phenomenon called ferromagnetism. Each electron has a property called 'spin', which causes it to behave like a minuscule magnet itself. In a ferromagnet, the spins of many electrons align, combining into one large magnetic field. This seems like a simple concept, but Nagaoka predicted a novel and surprising mechanism by which ferromagnetism could occur - one that had not been observed in any system before.

Child's puzzle

"To understand Nagaoka's prediction, picture the simple mechanical children's game called the sliding puzzle," said JP Dehollain, who performed the experiments together with Uditendu Mukhopadhyay. "This puzzle consists of a four-by-four grid of tiles, with a single empty slot to allow the tiles to slide around to solve the puzzle. Next, think of the Nagaoka magnet as a similar two-dimensional square lattice, where each tile is an electron. The electrons then behave like the tiles in the children's game, shuffling around in the lattice."

If the electron spins are not aligned (i.e. each tile has an arrow pointing in a different direction in our analogy) then the electrons will form a different arrangement after every shuffle. In contrast, if all the electrons are aligned (all the tiles have arrows pointing in the same direction), the puzzle always stays the same, no matter how the electrons are shuffled. "Nagaoka found that alignment of electron spins results in a lower energy of the system," Dehollain said. "As a consequence, the system of a square 2D lattice which has one missing electron will naturally prefer to be in a state in which all electron spins are aligned - a Nagaoka ferromagnetic state."
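
The analogy can be made concrete with a small enumeration. On a two-by-two lattice with three electrons and one empty site - the configuration used in the experiment described below - shuffling aligned spins never changes the spin pattern (only the hole moves), while a single flipped spin lets the shuffles scramble the pattern into every possible arrangement. The Python sketch below counts the reachable patterns; it captures only the combinatorial picture, not the Hubbard-model energetics behind Nagaoka's theorem.

```python
# Combinatorial illustration of the sliding-puzzle analogy for Nagaoka
# ferromagnetism on a 2x2 lattice: 3 electrons, 1 empty site ("hole").
# A "shuffle" moves an electron from a neighbouring site into the hole.
# This counts distinct spin patterns reachable by shuffling; it does not
# compute Hubbard-model energies.

# Sites 0..3 laid out as  0 1
#                         2 3
NEIGHBOURS = {0: (1, 2), 1: (0, 3), 2: (0, 3), 3: (1, 2)}

def shuffles(state):
    """Yield all states reachable by one shuffle.
    A state maps site -> 'up', 'down' or None (the hole)."""
    hole = next(site for site, spin in state.items() if spin is None)
    for nb in NEIGHBOURS[hole]:
        new = dict(state)
        new[hole], new[nb] = new[nb], None      # electron hops into the hole
        yield new

def reachable_patterns(initial):
    """Breadth-first enumeration of all distinct spin patterns reachable by shuffles."""
    seen = {tuple(initial[s] for s in range(4))}
    frontier = [initial]
    while frontier:
        nxt = []
        for state in frontier:
            for new in shuffles(state):
                key = tuple(new[s] for s in range(4))
                if key not in seen:
                    seen.add(key)
                    nxt.append(new)
        frontier = nxt
    return seen

aligned = {0: 'up', 1: 'up', 2: 'up', 3: None}
mixed = {0: 'up', 1: 'down', 2: 'up', 3: None}

print(len(reachable_patterns(aligned)))  # 4: only the hole position changes
print(len(reachable_patterns(mixed)))    # 12: shuffling scrambles the spin pattern
```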

DIY magnet

The researchers observed, for the first time ever, experimental signatures of Nagaoka ferromagnetism. Mukhopadhyay: "We achieved this by engineering an electronic device with the capability to 'trap' single electrons. These so-called quantum dot devices have been used in science experiments for a while now, but our challenge was to make a 2D lattice of four quantum dots that is highly controllable. To make these devices work, we need to build an electric circuit at a nanometre scale, cool it down to nearly absolute zero (-272.99°C), and measure tiny electrical signals."

"Our next step was to trap three electrons and allow them to move around within the two-by-two lattice, creating the specific conditions required for Nagaoka ferromagnetism," said Mukhopadhyay. "We then had to demonstrate that this lattice indeed behaves like a magnet. The magnetic field generated by three electrons is too small to detect with conventional methods, so instead we used a very sensitive electric sensor which could 'decipher' the spin orientation of the electrons and convert it into an electrical signal that we could measure in the lab. In this way we were able to determine whether or not the electron spins were aligned as expected."

The puzzle solved

"The results were crystal clear: we demonstrated Nagaoka ferromagnetism," said Lieven Vandersypen, lead investigator and co-director of the Kavli Institute of Nanoscience. "When we started working on this project, I wasn't sure whether the experiment would be possible, because the physics is so different from anything else that we have ever studied in our lab. But our team managed to create the right experimental conditions for Nagaoka ferromagnetism, and we have demonstrated the robustness of the quantum dot system."

While this small-scale system is far from having implications in everyday life, it is an important milestone towards realising larger-scale systems such as quantum computers and quantum simulators. Vandersypen: "Such systems permit the study of problems that are too complex to solve with today's most advanced supercomputer, for example complex chemical processes. Proof-of-principle experiments, such as the realisation of Nagaoka ferromagnetism, provide important guidance towards developing quantum computers and simulators of the future."

Credit: 
Delft University of Technology

Widely used weed killer harming biodiversity

image: Experimental ponds in Gault Nature Reserve.

Image: 
Photo Vincent Fugère

One of the world's most widely used glyphosate-based herbicides, Roundup, can trigger loss of biodiversity, making ecosystems more vulnerable to pollution and climate change, say researchers from McGill University.

The widespread use of Roundup on farms has sparked concerns over potential health and environmental effects globally. Use of the herbicide has boomed since the 1990s, as the farming industry adopted "Roundup Ready" genetically modified crop seeds that are resistant to it. "Farmers spray their corn and soy fields to eliminate weeds and boost production, but this has led to glyphosate leaching into the surrounding environment. In Quebec, for example, traces of glyphosate have been found in Montérégie rivers," says Andrew Gonzalez, a McGill biology professor and Liber Ero Chair in Conservation Biology.

To test how freshwater ecosystems respond to environmental contamination by glyphosate, researchers used experimental ponds to expose phytoplankton communities (algae) to the herbicide. "These tiny species at the bottom of the food chain play an important role in the balance of a lake's ecosystem and are a key source of food for microscopic animals. Our experiments allow us to observe, in real time, how algae can acquire resistance to glyphosate in freshwater ecosystems," says post-doctoral researcher Vincent Fugère.

Ecosystems adapt but at the cost of biodiversity

The researchers found that freshwater ecosystems that experienced moderate contamination from the herbicide became more resistant when later exposed to a very high level of it - a kind of "evolutionary vaccination." According to the researchers, the results are consistent with what scientists call "evolutionary rescue," which until recently had only been tested in the laboratory. Previous experiments by the Gonzalez group had shown that evolutionary rescue, through rapid evolution, can prevent the extinction of an entire population exposed to severe environmental contamination by a pesticide.

However, the researchers note that the resistance to the herbicide came at a cost of plankton diversity. "We observed significant loss of biodiversity in communities contaminated with glyphosate. This could have a profound impact on the proper functioning of ecosystems and lower the chance that they can adapt to new pollutants or stressors. This is particularly concerning as many ecosystems are grappling with the increasing threat of pollution and climate change," says Gonzalez.

The researchers point out that it is still unclear how rapid evolution contributes to herbicide resistance in these aquatic ecosystems. Scientists already know that some plants have acquired genetic resistance to glyphosate in crop fields that are sprayed heavily with the herbicide. Finding out more will require genetic analyses, which the team is currently carrying out.

Credit: 
McGill University