Tech

Predicting earthquake hazards from wastewater injection

image: Oil and gas production requires the disposal of wastewater, which is injected into rock formations far underground through wells such as this. To minimize earthquake hazards from this process, an ASU-led team has created a new model for forecasting induced seismicity from wastewater injection.

Image: 
KFOR-TV, Oklahoma City

A byproduct of oil and gas production is a large quantity of toxic wastewater called brine. Well-drillers dispose of brine by injecting it into deep rock formations, where its injection can cause earthquakes. Most quakes are relatively small, but some of them have been large and damaging.

Yet predicting the amount of seismic activity from wastewater injection is difficult because it involves numerous variables. These include the quantity of brine injected, how easily brine can move through the rock, the presence of existing geological faults, and the regional stresses on those faults.

Now a team of Arizona State University-led geoscientists, working under a Department of Energy grant, has developed a method to predict seismic activity from wastewater disposal. The team's study area is Oklahoma, where extensive fracking has been accompanied by large volumes of wastewater injection, and where several damaging induced earthquakes have occurred.

The team's paper reporting their findings appeared in the Proceedings of the National Academy of Sciences on July 29, 2019.

"Overall, earthquake hazards increase with background seismic activity, and that results from changes in the crustal stress," says Guang Zhai, a postdoctoral research scientist in ASU's School of Earth and Space Exploration and a visiting assistant researcher at the University of California, Berkeley. "Our focus has been to model the physics of such changes that result from wastewater injection."

Zhai is lead author for the paper, and the other scientists are Manoochehr Shirzaei, associate professor in the School, plus Michael Manga, of UC Berkeley, and Xiaowei Chen, of the University of Oklahoma.

"Seismic activity soared in one area for several years after wastewater injection was greatly reduced," says Shirzaei. "That told us that existing prediction methods were inadequate."

Back to basics

To address the problem, his team went back to basics, looking at how varying amounts of injected brine perturb the crustal stresses and how those changes lead to earthquakes on a given fault.

"Fluids such as brine (and natural groundwater) can both be stored and move through rocks that are porous," says Zhai.

The key was building a physics-based model that combined the rock's ability to transport injected brine with its elastic response to fluid pressure. Explains Shirzaei, "Our model includes the records collected for the past 23 years of brine injected at more than 700 Oklahoma wells into the Arbuckle formation."

He adds that to make the scenario realistic, the model also includes the mechanical properties of the rocks in Oklahoma. The result was that the model successfully predicted changes in the crustal stress that come from brine injection.

For the final step, Shirzaei says, "We used a well-established physical model of how earthquakes begin so we could relate stress perturbations to the number and size of earthquakes."

The team found that the physics-based framework does a good job of reproducing the distribution of actual earthquakes by frequency, magnitude, and time.
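
To make that final step concrete, here is a compact sketch of one widely used way to turn a stressing history into an earthquake rate: Dieterich's (1994) rate-and-state seismicity model, the kind of "well-established physical model of how earthquakes begin" referred to above. Whether this matches the study's exact formulation is an assumption, and the stressing-rate history and constants below are illustrative placeholders, not calibrated Oklahoma values.

```python
import numpy as np

# Illustrative constants (not the study's calibrated values)
a_sigma = 0.01        # A*sigma, MPa
s_dot_ref = 1e-3      # background (tectonic) Coulomb stressing rate, MPa/yr
r_background = 1.0    # background earthquake rate, events/yr above some magnitude

# Hypothetical stressing-rate history: injection raises the stressing rate
years = np.arange(0.0, 20.0, 0.01)
s_dot = np.full_like(years, s_dot_ref)
s_dot[(years > 5.0) & (years < 12.0)] += 0.05     # injection period, MPa/yr

# Dieterich (1994): d(gamma)/dt = (1 - gamma * s_dot) / (A*sigma),
# and the seismicity rate is R = r_background / (gamma * s_dot_ref).
gamma = np.empty_like(years)
gamma[0] = 1.0 / s_dot_ref                        # steady state before injection
dt = years[1] - years[0]
for i in range(1, len(years)):
    gamma[i] = gamma[i - 1] + dt * (1.0 - gamma[i - 1] * s_dot[i - 1]) / a_sigma

rate = r_background / (gamma * s_dot_ref)         # earthquakes per year
print("peak rate is about %.0f times background" % (rate.max() / r_background))
```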

"An interesting finding," says Zhai, "was that a tiny change in the rocks' elastic response to changes in fluid pressure can amplify the number of earthquakes by several times. It's a very sensitive factor."

Making production safer

While wastewater injection can cause earthquakes, all major oil and gas production creates a large amount of wastewater that needs to be disposed of, and injection is the method the industry uses.

"So to make this safer in the future," says Shirzaei, "our approach offers a way to forecast injection-caused earthquakes. This provides the industry with a tool for managing the injection of brine after fracking operations."

Knowing the volume of brine to be injected and the location of the disposal well, authorities can estimate the probability that an earthquake of a given size will result. Such probabilities can be used for short-term earthquake hazard assessment.

Alternatively, the team says, given the probability that an earthquake of a certain size will happen, oil and gas operators can manage the injected brine volume to keep the probability of large earthquakes below a chosen value.
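
As a hedged illustration of that management idea, the sketch below converts a forecast rate of small earthquakes into the Poisson probability of at least one event above a magnitude of concern, then walks the injected volume down until that probability falls below a chosen threshold. The Gutenberg-Richter b-value, the rates, the 30-day window and the assumed linear scaling of rate with injected volume are invented for illustration, not taken from the study.

```python
import math

def prob_exceed(rate_small, b, m, m_min=2.0, days=30.0):
    """Poisson probability of at least one quake of magnitude >= m in a window,
    given the forecast rate of quakes >= m_min (events per day)."""
    rate_m = rate_small * 10.0 ** (-b * (m - m_min))   # Gutenberg-Richter scaling
    return 1.0 - math.exp(-rate_m * days)

# Hypothetical case: injecting 10,000 m^3/month is forecast to drive 0.3 M>=2
# quakes per day. Assuming rate scales linearly with volume, what injection
# volume keeps P(M>=4 within 30 days) below 5 percent?
b_value, target = 1.0, 0.05
base_volume, base_rate = 10_000.0, 0.3
volume = base_volume
while volume > 0 and prob_exceed(base_rate * volume / base_volume, b_value, m=4.0) > target:
    volume -= 100.0
print(round(volume), "m^3/month keeps P(M>=4, 30 days) below", target)
```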

The end result, says Zhai, "is that this process will allow a safer practice, benefiting both the general public and the energy industry."

Credit: 
Arizona State University

Simultaneous infection by 2 viruses the key to studying rare lymphoma

MADISON - New research has found that a rare blood cancer can be simulated in the lab only by simultaneously infecting white blood cells with two viruses typically found in the tumors.

The successful creation of stable, cancer-like cells in the lab opens up opportunities for understanding the progression of this and related cancers and, perhaps, developing treatments.

Primary effusion lymphoma, PEL, is a rare blood cancer that primarily affects those with compromised immune systems, such as people infected with HIV. It is aggressive, and patients typically live just six months after being diagnosed.

The cancerous B cells that make up PEL nearly always harbor two viruses, Epstein-Barr Virus, or EBV, and Kaposi's sarcoma-associated herpesvirus, or KSHV, which are both known to induce other types of cancer. Previous research has largely failed to stably infect B cells with both viruses in the lab, the key to developing a productive model of the cancer for further research.

University of Wisconsin-Madison professor of oncology Bill Sugden and his lab report this week in the Proceedings of the National Academy of Sciences that the two viruses support one another. When healthy B cells are exposed to both viruses within a day of one another, a small fraction of the cells remains infected for months.

In their work, the researchers discovered a rare population of EBV- and KSHV-infected cells that outcompeted other cells in the lab, behaving much like cells from PEL tumors. This population of quickly growing cells will help researchers untangle how this rare cancer forms in the body and how viruses can trigger cells to grow uncontrollably.

"All of a sudden we now have a tool to understand for the first time: how does a lymphoma possibly arise from infection of two tumor viruses and what are those two viruses contributing?" says Sugden, a member of the McArdle Laboratory for Cancer Research at the UW-Madison School of Medicine and Public Health.

In the past, researchers had tried to induce PEL-like behavior in B cells by infecting them with KSHV, the principal marker of the cancer. But those infections failed. Cells would slough off KSHV after days or weeks and did not continue to grow.

For Sugden, who has long studied EBV -- a herpes virus that is the cause of the common infection mononucleosis and of several cancers -- the solution made sense: use the viruses together. After all, they're typically found together in cancer cells isolated from patients.

"Everything I know about this virus says that it will not be retained in these lymphomas unless it was doing something to contribute," says Sugden.

In early attempts, Aurélia Faure, who recently received her doctorate in Sugden's lab, found KSHV better infected B cells when they were activated by inflammatory chemicals to start dividing. Still, the infections were transient.

Because EBV infection also strongly activates B cells, Faure and Sugden reasoned the virus might promote infection by KSHV. So Faure introduced both viruses to the cells, staggering infection by one or more days. EBV greatly increased the fraction of B cells that were infected with KSHV, although it was still limited to about 2 percent of the original population.

Most importantly, EBV allowed the cells to grow and maintain the KSHV virus stably for months. Faure found that KSHV infection was most successful when cells were infected with EBV one day before being exposed to KSHV. That narrow window for success is likely one reason that dual infection and subsequent development into PEL is rare in people.

While the small fraction of doubly infected B cells was often overgrown by the larger population of partially infected cells, the researchers discovered one population that could hold its own. What they termed the "fast" group of cells overgrew all others in the experiment, acting in many ways like the aggressive PEL cancer. Mitch Hayes, working with Faure and Sugden, found the cells had a form of antibody associated with PEL cells and shared patterns of gene expression in common with PEL tumors.

In all, the fast-growing population Sugden's lab developed appeared to be the first good model of PEL in the lab.

"Now we can go back with these cells and try to understand: What's the expression of EBV genes? What's the expression of KSHV genes in them that allow the cells to proliferate? What's the expression of EBV genes that allows KSHV be retained and vice versa?" says Sugden.

Sugden is intent on teaching other cancer researchers how to produce this useful population of growing, stably and doubly infected cells. He is also offering up the original lines of cells his lab uncovered. That community resource will drive a better understanding of how these viruses manipulate cells to form a lethal cancer.

"I'm going to do anything I possibly can to build on this work so that we attract other people to work on it," says Sugden.

Credit: 
University of Wisconsin-Madison

Introduced species dilute the effects of evolution on diversity

image: Ko'olau Mountain range on Oahu -- the third largest of the Hawaiian Islands. Researchers investigated the effects on biodiversity of introduced species and island age.

Image: 
William Weaver

Understanding how biodiversity is shaped by multiple forces is crucial to protect rare species and unique ecosystems. Now an international research team led by the University of Göttingen, German Centre for Integrative Biodiversity Research (iDiv) and the Helmholtz Centre for Environmental Research (UFZ), together with the University of Hawai'i at Mānoa, has found that biodiversity is higher on older islands than on younger ones. Furthermore, they found that introduced species are diluting the effects of island age on patterns of local biodiversity. The findings were published in PNAS (Proceedings of the National Academy of Sciences).

Oceanic islands, such as the Hawaiian archipelago, have long been a natural laboratory for scientists to analyse evolutionary and ecological processes. In such archipelagos, islands formed by undersea volcanoes often differ in age by several millions of years, allowing scientists to look at the long-term impacts of geology and evolution on biodiversity. In this study, researchers used data from more than 500 forest plots across the archipelago to explore how historical and recent ecological processes influence the number of species that coexist - whether at the scale of an island or a much smaller area such as a typical backyard. Their analysis showed that even within small plots, older islands had a greater number of both rare and native species when compared with islands formed more recently. The researchers were able to compare data from older islands such as Kaua'i (which is around 5 million years old) with islands like the Big Island of Hawai'i (which is only around 500,000 years old and still growing). "To be honest, I was a bit surprised by the results. I expected that ecological mechanisms would outweigh the macroevolutionary forces at the scales of these small plots, and that there'd be no differences in local-scale diversity among the islands," says Jonathan Chase (iDiv and Martin Luther University Halle-Wittenberg), senior author of this study. "So, to me, this is the coolest kind of discovery--one that challenges your assumptions".

They also showed that widespread introduced species weakened the effect of island age on biodiversity, by making Hawaiian forests more similar to one another. Dylan Craven from the University of Göttingen and lead author of the study, says, "We're seeing evidence that human activity - such as planting introduced species in our gardens and parks - is starting to erase millions of years of history, of plants and animals interacting with one another and their environment."

Credit: 
University of Göttingen

Imaging of exotic quantum particles as building blocks for quantum computing

image: A. A monolayer of iron atoms assembled on a rhenium surface. B. Image of a Majorana fermion as a bright line along the edge of the iron using a scanning tunneling microscope.

Image: 
UIC/Dirk Morr

Researchers at the University of Illinois at Chicago, in collaboration with their colleagues at the University of Hamburg in Germany, have imaged an exotic quantum particle -- called a Majorana fermion -- that can be used as a building block for future qubits and eventually the realization of quantum computers. Their findings are reported in the journal Science Advances.

More than 50 years ago, Gordon Moore, the former CEO of Intel, observed that the number of transistors on a computer chip doubles every 18 to 24 months. This trend, now known as Moore's Law, has continued to the present day, leading to transistors that are only a few nanometers in size (a nanometer is one-billionth of a meter). At this scale, the classical laws of physics, which form the basis on which our current computers work, cease to function, and they are replaced by the laws of quantum mechanics. Making transistors even smaller, which has been used in the past to increase computing speed and data storage, is, therefore, no longer possible.

Unless researchers can figure out how to use quantum mechanics as the new foundation for the next generation of computers.

This was the basic idea formulated in 1982 by Richard Feynman, one of the most influential theoretical physicists of the 20th century. Rather than using classical computer bits that store information encoded in zeros and ones, one would devise "quantum bits" -- or qubits for short -- that would utilize the laws of quantum mechanics to store any number between 0 and 1, thereby exponentially increasing computing speed and leading to the birth of quantum computers.

"Usually, when you drop your cell phone, it doesn't erase the information on your phone," said Dirk Morr, professor of physics at UIC and corresponding author on the paper. "That's because the chips on which information is stored in bits of ones and zeros are fairly stable. It takes a lot of messing around to turn a one into a zero and vice versa. In quantum computers, however, because there is an infinite number of possible states for the qubit to be in, information can get lost much more easily."

To form more robust and reliable qubits, researchers have turned to Majorana fermions -- quantum particles that occur only in pairs.

"We only need one Majorana fermion per qubit, and so we have to separate them from each other," Morr said.

By building qubits from a pair of Majorana fermions, information can be reliably encoded, as long as the Majoranas remain sufficiently far apart.

To achieve this separation, and to "image" a single Majorana fermion, it is necessary to create a "topological superconductor" -- a system that can conduct currents without any energy losses, and at the same time, is tied into a "topological knot."

"This topological knot is similar to the hole in a donut: you can deform the donut into a coffee mug without losing the hole, but if you want to destroy the hole, you have to do something pretty dramatic, such as eating the donut," Morr said.

To build topological superconductors, Morr's colleagues at the University of Hamburg placed an island of magnetic iron atoms, only tens of nanometers in diameter, on the surface of rhenium, a superconductor. Morr's group had predicted that by using a scanning tunneling microscope, one should be able to image a Majorana fermion as a bright line along the edge of the island of iron atoms. And this is exactly what the experimental group observed.

"Being able to actually visualize these exotic quantum particles takes us another step closer to building robust qubits, and ultimately quantum computers," Morr said. "The next step will be to figure out how we can quantum engineer these Majorana qubits on quantum chips and manipulate them to obtain an exponential increase in our computing power. This will allow us to address many problems we face today, from fighting global warming and forecasting earthquakes to alleviating traffic congestion through driverless cars and creating a more reliable energy grid."

Credit: 
University of Illinois Chicago

African smoke is fertilizing Amazon rainforest and oceans, new study finds

MIAMI--A new study led by researchers at the University of Miami's (UM) Rosenstiel School of Marine and Atmospheric Science found that smoke from fires in Africa may be the most important source of a key nutrient--phosphorus--that acts as a fertilizer in the Amazon rainforest, Tropical Atlantic and Southern oceans.

Nutrients found in atmospheric particles, called aerosols, are transported by winds and deposited to the ocean and on land where they stimulate the productivity of marine phytoplankton and terrestrial plants leading to the sequestration of atmospheric carbon dioxide.

"It had been assumed that Saharan dust was the main fertilizer to the Amazon Basin and Tropical Atlantic Ocean by supplying phosphorus to both of these ecosystems," said the study's senior author Cassandra Gaston, an assistant professor in the Department of Atmospheric Sciences at the UM Rosenstiel School. "Our findings reveal that biomass burning emissions transported from Africa are potentially a more important source of phosphorus to these ecosystems than dust."

To conduct the study, the researchers analyzed aerosols collected on filters from a hilltop in French Guiana, at the northern edge of the Amazon Basin, for mass concentrations of windborne dust and their total and soluble phosphorus content. They then tracked the smoke moving through the atmosphere using satellite remote sensing tools to understand the long-range transport of smoke from Africa during time periods when elevated levels of soluble phosphorus were detected. They were then able to estimate the amount of phosphorus deposited to the Amazon Basin and the global oceans from African biomass burning aerosols using a transport model.
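
For a sense of scale only, here is a back-of-envelope deposition estimate of the textbook form flux = concentration x deposition velocity. This is not the transport model the team used, and every number below is a hypothetical placeholder.

```python
# All values are illustrative placeholders, not measurements from the study.
c_p = 1.0e-9              # soluble phosphorus in near-surface air, g per m^3
v_d = 0.005               # dry-deposition velocity, m/s
area = 5.5e12             # Amazon Basin area, m^2 (~5.5 million km^2)
seconds_per_year = 3.15e7

flux = c_p * v_d                           # g per m^2 per s
annual_tonnes = flux * area * seconds_per_year / 1e6
print(f"~{annual_tonnes:.0f} tonnes of soluble P per year (order-of-magnitude only)")
```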

The analysis concluded that the smoke from widespread biomass burning in Africa, mostly the result of land clearing, brush fires and industrial combustion emissions, is potentially a more important source of phosphorus to the Amazon rainforest and the Tropical Atlantic and Southern oceans than dust from the Sahara Desert.

"To our surprise, we discovered that phosphorus associated with smoke from southern Africa can be blown all the way to the Amazon and, potentially, out over the Southern Ocean where it can impact primary productivity and the drawdown of carbon dioxide in both ecosystems," said UM Rosenstiel School graduate student Anne Barkley, lead author of the study.

"Aerosols play a major role in Earth's climate, however, there is a lot that we don't understand regarding how they affect radiation, clouds, and biogeochemical cycles, which impedes our ability to accurately predict future increases in global temperature," said Gaston. "These new findings have implications for how this process might look in the future as combustion and fire emissions in Africa and dust transport patterns and amounts change with a changing climate and an increasing human population."

This study builds on more than 50 years of ground-breaking aerosol research in the Caribbean and Latin America by UM Rosenstiel School Professor Emeritus Joe Prospero that is being continued by Gaston. It also represents an interdisciplinary collaboration across the UM Rosenstiel School: Professor Paquita Zuidema of the Department of Atmospheric Sciences published extensive measurements of biomass burning to corroborate the seasonality of smoke transport; Associate Professor Ali Pourmand and Assistant Professor Amanda Oehlert of the Department of Marine Geosciences analyzed samples on the Neptune mass spectrometer; Assistant Professor Kim Popendorf of the Department of Ocean Sciences aided in the measurement of soluble phosphorus in aerosols; and Pat Blackwelder, Assistant Director of the University of Miami College of Arts and Sciences Department of Chemistry Center for Advanced Microscopy, provided her expertise with scanning electron microscopy to image the filter samples at the micrometer scale.

Credit: 
University of Miami Rosenstiel School of Marine, Atmospheric, and Earth Science

Energy from seawater

image: The Hyperion Water Reclamation Plant on Santa Monica Bay in Los Angeles is an example of a coastal wastewater treatment operation that could potentially recover energy from the mixing of seawater and treated effluent.

Image: 
Doc Searls / Flickr

Salt is power. It might sound like alchemy, but the energy in places where salty ocean water and freshwater mingle could provide a massive source of renewable power. Stanford researchers have developed an affordable, durable technology that could harness this so-called blue energy.

The paper, recently published in American Chemical Society's ACS Omega, describes the battery and suggests using it to make coastal wastewater treatment plants energy-independent.

"Blue energy is an immense and untapped source of renewable energy," said study coauthor Kristian Dubrawski, a postdoctoral scholar in civil and environmental engineering at Stanford. "Our battery is a major step toward practically capturing that energy without membranes, moving parts or energy input."

Dubrawski works in the lab of study co-author Craig Criddle, a professor of civil and environmental engineering known for interdisciplinary field projects of energy-efficient technologies. The idea of developing a battery that taps into salt gradients originated with study coauthors Yi Cui, a professor of materials science and engineering, and Mauro Pasta, a postdoctoral scholar in materials science and engineering at the time of the research. Applying that concept to coastal wastewater treatment plants was Criddle's twist, born of his long experience developing technologies for wastewater treatment.

The researchers tested a prototype of the battery, monitoring its energy production while flushing it with alternating hourly exchanges of wastewater effluent from the Palo Alto Regional Water Quality Control Plant and seawater collected nearby from Half Moon Bay. Over 180 cycles, battery materials maintained 97 percent effectiveness in capturing the salinity gradient energy.

The technology could work any place where fresh and saltwater intermix, but wastewater treatment plants offer a particularly valuable case study. Wastewater treatment is energy-intensive, accounting for about three percent of the total U.S. electrical load. The process - essential to community health - is also vulnerable to power grid shutdowns. Making wastewater treatment plants energy independent would not only cut electricity use and emissions but also make them immune to blackouts - a major advantage in places such as California, where recent wildfires have led to large-scale outages.

Water power

Every cubic meter of freshwater that mixes with seawater produces about 0.65 kilowatt-hours of energy - enough to power the average American house for about 30 minutes. Globally, the theoretically recoverable energy from coastal wastewater treatment plants is about 18 gigawatts - enough to power more than 1,700 homes for a year.
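
A quick consistency check of the per-cubic-meter figures quoted above; the implied average household draw is derived from the text's own numbers, and the roughly 1.3 kW result is in line with typical U.S. household averages.

```python
energy_per_m3_kwh = 0.65      # energy from mixing 1 m^3 of freshwater with seawater
powered_hours = 0.5           # "about 30 minutes"

implied_household_kw = energy_per_m3_kwh / powered_hours
print(f"implied average household draw: {implied_household_kw:.1f} kW")   # ~1.3 kW
```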

The Stanford group's battery isn't the first technology to succeed in capturing blue energy, but it's the first to use battery electrochemistry instead of pressure or membranes. If it works at scale, the technology would offer a simpler, more robust and cost-effective solution.

The process first releases sodium and chloride ions from the battery electrodes into the solution, making the current flow from one electrode to the other. Then, a rapid exchange of wastewater effluent with seawater leads the electrode to reincorporate sodium and chloride ions and reverse the current flow. Energy is recovered during both the freshwater and seawater flushes, with no upfront energy investment and no need for charging. This means that the battery is constantly discharging and recharging without needing any input of energy.

Durable and affordable technology

While lab tests showed power output is still low per electrode area, the battery's scale-up potential is considered more feasible than previous technologies due to its small footprint, simplicity, constant energy creation and lack of membranes or instruments to control charge and voltage. The electrodes are made with Prussian Blue, a material widely used as a pigment and medicine, that costs less than $1 a kilogram, and polypyrrole, a material used experimentally in batteries and other devices, which sells for less than $3 a kilogram in bulk.

There's also little need for backup batteries, as the materials are relatively robust, a polyvinyl alcohol and sulfosuccinic acid coating protects the electrodes from corrosion and there are no moving parts involved. If scaled up, the technology could provide adequate voltage and current for any coastal treatment plant. Surplus power production could even be diverted to a nearby industrial operation, such as a desalination plant.

"It is a scientifically elegant solution to a complex problem," Dubrawski said. "It needs to be tested at scale, and it doesn't address the challenge of tapping blue energy at the global scale - rivers running into the ocean - but it is a good starting point that could spur these advances."

To assess the battery's full potential in municipal wastewater plants, the researchers are working on a scaled version to see how the system functions with multiple batteries working simultaneously.

Credit: 
Stanford University

Whole-tree harvesting could boost biomass production

image: The researchers sampled reforested aspen stands located in Baraga, Delta, Dickinson and Menominee Counties that are owned and actively managed by Weyerhaeuser Company (formerly Plum Creek Timber Company) and for which a 40-year commercial logging record exists, providing the scientists with a data set to verify against field measurements.

Image: 
Robert Froese/Michigan Tech

This is a story of carbon choices: As societies around the world continue to move toward increased renewable energy portfolios, which energy sources do we choose?

In the U.S., coal plants are closing, but carbon dioxide concentration in the atmosphere continues to rise; pivoting toward renewable energy sources like wind, solar and biofuels is a necessary step toward halting the worst effects of climate change. Forest biomass is expected to be one of the key energy sources, but many people have wondered how to feed biofuel plants the materials they need without destroying forest resources.

Whole-tree aspen logging recovers renewable biomass energy from tops and branches, parts of the tree that are often left in the forest when only the trunk is harvested; in other words, it uses the residue that remains after a sustainable harvest of logs. It has long been assumed that removing the leaves and branches of trees, rather than allowing them to decompose in the woods, will deplete the soil and lead to a weaker forest ecosystem. New research from Michigan Technological University's School of Forest Resources and Environmental Science challenges that hypothesis.

"Many far-reaching energy development decisions have been made based on assumptions," said Robert E. Froese, associate professor and director of the Ford Center and Forest.

Authors Michael I. Premer, who earned his doctorate in quantitative silviculture at Michigan Tech and now works as a research silviculturist for Rayonier; Froese; and Eric D. Vance, recently retired from the National Council for Air and Stream Improvement, Inc., have published "Whole-tree harvest and residue recovery in commercial aspen: Implications to forest growth and soil productivity across a rotation" in the journal Forest Ecology and Management. It is the third in a series of related articles about logging residue in managed aspen forests in Michigan's Upper Peninsula.

The three studies synergistically address the same question: What are the effects of removing logging residues -- the tops, branches and defective parts of tree trunks (boles) -- in Great Lakes aspen forests on forest productivity and environmental sustainability?

In the first study, the researchers examined how removing logging residue, rather than allowing it to decompose and, in theory, provide nutrients to the herbaceous and shrubby vegetation beneath the tree canopy, affected understory plant communities.

In the second study, Premer, Froese and Vance delved deeper into an effect they noticed while the study was underway: "Cut to length" logging systems used intentionally to reduce soil compaction might not be effective in this regard, creating long-lasting patterns of reduced growth within regenerating stands.

The third paper examines the persistence of residues and differences in carbon sequestration and macronutrients between sites where residues were removed and where they were retained. Collectively, the three papers address site impacts in Great Lakes aspen forests and demonstrate that residue removal has few effects on forest ecology in managed stands.

"It seems obvious: logging, especially when tops and branches are removed for bioenergy production, must remove nutrients and wood that should remain to nourish the regenerating forest. So Midwestern states adopted guidelines for forest biomass harvest to protect forest lands," Froese said. "We studied the difference in soil nutrients, carbon and the rate of growth of regenerating aspen forests in the upper Midwest and we found there is no difference in aspen stand productivity when whole trees are removed. It turns out branches just don't appear to play much of a role in the ecology of aspen forest productivity after all."

Operating under a mistaken assumption, Midwestern states adopted guidelines to protect soils and ecosystems without really understanding the need for guidelines. This action has added complexity and cost, which disincentivize the adoption of bioenergy.

"The states' actions convey the notion that all biomass removal is 'damage' and therefore we must limit the 'damage' to some level that we can tolerate," Froese said. "It may be that recovering biomass doesn't damage the forest at all. If it reduces fossil fuel use, it may contribute to reducing damage to the global climate."

Froese adds that despite its moniker, whole-tree logging doesn't actually remove every last leaf and twig from the forest; in their study, 64% of residues remained on site even though crews tried to pick up as much of the logged study trees as possible.

The researchers sampled reforested aspen stands located in Baraga, Delta, Dickinson and Menominee Counties that are owned and actively managed by Weyerhaeuser Company (formerly Plum Creek Timber Company), and for which a 40-year commercial logging record exists, providing the scientists with a data set to verify against field measurements.

"Whether removed or not, aspen branches disappear fast by rotting away," Froese said. "For aspen forests, it appears that even the most intensive harvest is perfectly sustainable."

Froese argues that incorrect assumptions about soil productivity have created negative opinions about biomass energy sources. These opinions, coupled with low natural gas prices, have given Midwestern energy policies a natural gas emphasis.

"We demonized the use of a renewable resource and we've shifted our energy generation from coal toward a different fossil fuel. Better, but still a fossil fuel," Froese said. "As the climate changes, we need to turn to renewables and we're missing an opportunity to pick up a renewable that's plentiful, natural and as we've demonstrated can be sustainably managed."

Froese argues that the Upper Peninsula is populated by a sustainable supply of biomass -- aspens, jack pines and other conifers -- and that the upper Midwest is poised to choose renewable energy sources rather than continuing to install natural gas plants with 30-year use lifespans that make uncoupling from fossil fuels any time soon a tricky economic proposition.

"For better or for worse we have term limits on fossil fuels," Froese said. "We know CO2 levels crested an important milestone this year. And we know we need every possible renewable energy source to contribute if we want to get off of fossil fuels.

"This has an essential practical application: If we can't demonstrate that feedstock for biomaterials can be produced from forests, then we can't engineer advanced materials and supply chains," Froese said. "It's about resiliency -- forests, in particular, and rural landscapes, in general. It's about sustainability of managed landscapes."

Journal

Forest Ecology and Management

DOI

10.1016/j.foreco.2019.05.002

Credit: 
Michigan Technological University

Midwives and nurse-midwives may underestimate the dangers of prenatal alcohol use

image: This is Dr. John Hannigan, professor, Merrill Palmer Skillman Institute, Wayne State University.

Image: 
Wayne State University

DETROIT - Alcohol use during pregnancy can have harmful consequences on the fetus including restricted growth, facial anomalies, and neurobehavioral problems. No amount of alcohol use during pregnancy has been proven safe. Yet a recent survey of midwives and nurses who provide prenatal care showed that 44% think one drink per occasion is acceptable while pregnant, and 38% think it is safe to drink alcohol during at least one trimester of pregnancy.

"Many prenatal care providers remain inadequately informed of the risks of drinking during pregnancy," said John Hannigan, PhD, one of the study's authors and a professor of at Wayne State University's Merrill Palmer Skillman Institute. "They fail to screen actively for alcohol use and miss opportunities for intervention." The research team analyzed 578 survey responses from professional members of the American College of Nurse Midwives. In collaboration with researchers at University of Massachusetts, the survey assessed knowledge of the effects of prenatal alcohol use, attitudes toward and perceived barriers to screening for alcohol use, and the use of standard screening tools in clinical practice.

"Only about one in three respondents said they screen for alcohol use at least some of the time," Hannigan said, "and many screening tools aren't validated for use in pregnant women." Midwives and nurses who believed alcohol was safe at some point in pregnancy were significantly less likely to screen their patients.

These results expand previous research that found prenatal care providers are often inadequately informed of the risks of drinking during pregnancy and fail to actively screen for alcohol use. The study recommends more comprehensive training for providers of care during pregnancy. "Midwives need to understand the health effects of alcohol use during pregnancy, the importance of screening, and the most reliable screening tools to use," Hannigan said. "The good news is this problem can be fixed."

Credit: 
Wayne State University - Office of the Vice President for Research

Standard vs. intensive blood pressure control to reduce the risk of stroke recurrence

What The Study Did: This randomized clinical trial and meta-analysis focused on intensive blood pressure control compared with a standard control regimen on the risk of stroke in patients who had had a previous stroke.

Authors: Kazuo Kitagawa, M.D., Ph.D., of the Tokyo Women's Medical University, Shinjuku-ku, Tokyo, is the corresponding author.

(doi:10.1001/jamaneurol.2019.2167)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

SLAP microscope smashes speed records

video: SLAP microscopy allows scientists to watch the activity of hundreds of synapses at a time in the brains of living animals. Here, red bursts mark synapses of a mouse neuron being excited by the neurotransmitter glutamate.

Image: 
Podgorski Lab

A new microscope breaks a long-standing speed limit, recording footage of brain activity 15 times faster than scientists once believed possible. It gathers data quickly enough to record neurons' voltage spikes and release of chemical messengers over large areas, monitoring hundreds of synapses simultaneously - a giant leap for the powerful imaging technique called two-photon microscopy.

The trick lies not in bending the laws of physics, but in using knowledge about a sample to compress the same information into fewer measurements. Scientists at the Howard Hughes Medical Institute's Janelia Research Campus have used the new microscope to watch patterns of neurotransmitter release onto mouse neurons, they report July 29 in Nature Methods. Until now, it's been impossible to capture these millisecond-timescale patterns in the brains of living animals.

Scientists use two-photon imaging to peer inside opaque samples - like living brains - that are impenetrable with regular light microscopy. These microscopes use a laser to excite fluorescent molecules and then measure the light emitted. In classic two-photon microscopy, each measurement takes a few nanoseconds; making a video requires taking measurements for every pixel in the image in every frame.

That, in theory, limits how fast one can capture an image, says study lead author Kaspar Podgorski, a fellow at Janelia. "You'd think that'd be a fundamental limit -- the number of pixels multiplied by the minimum time per pixel," he says. "But we've broken this limit by compressing the measurements." Previously, that kind of speed could only be achieved over tiny areas.
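
The bound in that quote is simply pixels per frame multiplied by time per pixel; the sketch below works it through with a hypothetical field of view, since the article does not give specific numbers.

```python
pixels_per_frame = 500 * 500           # hypothetical field of view
time_per_pixel_s = 5e-9                # "a few nanoseconds" per measurement
frame_time_s = pixels_per_frame * time_per_pixel_s
print(f"{frame_time_s * 1e3:.2f} ms per frame -> about {1 / frame_time_s:.0f} frames/s at best")
```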

The new tool - Scanned Line Angular Projection microscopy, or SLAP - makes the time-consuming data-collection part more efficient in a few ways. It compresses multiple pixels into one measurement and scans only pixels in areas of interest, thanks to a device that can control which parts of the image are illuminated. A high-resolution picture of the sample, captured before the two-photon imaging begins, guides the scope and allows scientists to decompress the data to create detailed videos.

Much like a CT scanner, which builds up an image by scanning a patient from different angles, SLAP sweeps a beam of light across a sample along four different planes. Instead of recording each pixel in the beam's path as an individual data point, the scope compresses the points in that line together into one number. Then, computer programs unscramble the lines of pixels to get data for every point in the sample - sort of like solving a giant Sudoku puzzle.
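
Here is a toy sketch, in the spirit of that description, of how line sums taken along a few angles plus a structural prior (a mask of pixels known to be of interest) can recover a sparse image with far fewer measurements than pixels. It is not the authors' reconstruction code; the image size, sparsity and the simple least-squares solver are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                    # toy image is n x n pixels
mask = rng.random((n, n)) < 0.05          # structural prior: ~5% of pixels may be active
truth = np.where(mask, rng.random((n, n)), 0.0)

def line_sums(img):
    """Collapse the image along four 'projection' angles (0, 90, 45, 135 degrees)."""
    return np.concatenate([
        img.sum(axis=0),                                                  # vertical lines
        img.sum(axis=1),                                                  # horizontal lines
        np.array([np.trace(img, k) for k in range(-n + 1, n)]),           # 45-degree lines
        np.array([np.trace(img[:, ::-1], k) for k in range(-n + 1, n)]),  # 135-degree lines
    ])

# Build the measurement matrix restricted to pixels allowed by the prior.
idx = np.flatnonzero(mask)
A = np.zeros((line_sums(truth).size, idx.size))
for j, flat in enumerate(idx):
    basis = np.zeros(n * n)
    basis[flat] = 1.0
    A[:, j] = line_sums(basis.reshape(n, n))

y = line_sums(truth)                       # the compressed measurements
coeff, *_ = np.linalg.lstsq(A, y, rcond=None)
recon = np.zeros(n * n)
recon[idx] = coeff

print("measurements:", y.size, "vs. pixels:", n * n)
print("max reconstruction error:", np.abs(recon.reshape(n, n) - truth).max())
```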

In the time it takes SLAP to scan the whole sample, a traditional scope going pixel-by-pixel would cover just a small fraction of an image. This speed allowed Podgorski's team to watch in detail how glutamate, an important neurotransmitter, is released onto different parts of mouse neurons. In the mouse visual cortex, for example, they identified regions on neurons' dendrites where many synapses seem to be active at the same time. And they tracked neural activity patterns migrating across the mouse's cortex as an object moved across its visual field.

Podgorski's ultimate goal is to image all of the signals coming into a single neuron, to understand how neurons transform incoming signals into outgoing signals. This current scope is "only a step along the way - but we're already building a second generation. Once we have that, we won't be limited by the microscope anymore," he says.

His team is upgrading the scope's scanners to increase its speed. They're also seeking ways to track other neurotransmitters so they can fully tap into the symphony of neural communication.

Credit: 
Howard Hughes Medical Institute

AI-powered tool predicts cell behaviors during disease and treatment

image: Predicting cellular behavior in silico: Trained on data that capture stimulation effects for a set of cell types, scGen can be used to model cellular responses in a new cell type.

Image: 
© Helmholtz Zentrum München

Large-scale atlases of organs in a healthy state are soon going to be available, in particular within the Human Cell Atlas. This is a significant step in better understanding cells, tissues and organs in a healthy state and provides a reference when diagnosing, monitoring, and treating disease. However, due to the sheer number of possible combinations of treatment and disease conditions, expanding these data to characterize disease and disease treatment in traditional life science laboratories is labor intensive and costly and, hence, not scalable.

Accurately modeling cellular response to perturbations (e.g. disease, compounds, genetic interventions) is a central goal of computational biology. Although models based on statistical and mechanistic approaches exist, no machine-learning based solution viable for unobserved, high-dimensional phenomena has been available until now. The new tool, scGen, is the first that predicts cellular responses out-of-sample. This means that scGen, if trained on data that capture the effect of perturbations for a given system, is able to make reliable predictions for a different system. "For the first time, we have the opportunity to use data generated in one model system such as mouse and use the data to predict disease or therapy response in human patients," said Mohammad Lotfollahi, PhD student (Helmholtz Zentrum München and Technische Universität München).

scGen is a generative deep learning model that leverages ideas from image, sequence and language processing, and, for the first time, applies these ideas to model the behavior of a cell in silico. The next step for the team is to improve scGen to a fully data-driven formulation, increasing its predictive power to enable the study of combinations of perturbations. "We can now start optimizing scGen to answer more and more complex questions about diseases," said Alex Wolf, Team Leader, and Fabian Theis, Director of the Institute of Computational Biology and Chair of Mathematical Modeling of Biological Systems at Technische Universität München.
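
As a rough illustration of the out-of-sample idea described above, the sketch below uses the latent-space "vector arithmetic" that scGen is built around: learn the control-to-stimulated shift on one cell type, then apply it to unperturbed cells of another. The encoder and decoder here are trivial linear stand-ins for a trained variational autoencoder, and the data are made-up matrices, so this is a conceptual sketch rather than scGen itself.

```python
import numpy as np

def predict_response(encode, decode, ctrl_train, stim_train, ctrl_new):
    """Shift unperturbed cells of a new cell type by the learned
    control-to-stimulated direction in latent space."""
    delta = encode(stim_train).mean(axis=0) - encode(ctrl_train).mean(axis=0)
    return decode(encode(ctrl_new) + delta)

# Toy stand-ins: a linear "autoencoder" over 5 hypothetical genes.
W = np.random.default_rng(1).normal(size=(5, 3))
encode = lambda X: X @ W                    # cells x genes -> cells x latent dims
decode = lambda Z: Z @ np.linalg.pinv(W)    # latent dims -> genes (approximate inverse)

ctrl_train = np.random.default_rng(2).normal(size=(100, 5))        # cell type A, unstimulated
stim_train = ctrl_train + 2.0                                      # cell type A, "stimulated"
ctrl_new = np.random.default_rng(3).normal(loc=1.0, size=(50, 5))  # cell type B, unstimulated

predicted = predict_response(encode, decode, ctrl_train, stim_train, ctrl_new)
print(predicted.shape)    # (50, 5): predicted stimulated profiles for cell type B
```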

Credit: 
Helmholtz Munich (Helmholtz Zentrum München Deutsches Forschungszentrum für Gesundheit und Umwelt (GmbH))

DIY balloon pump takes microfluidics to the people

video: DIY balloon pump operates lab on chip technology, costs just $2 to make from stockings and a balloon.

Image: 
RMIT University and the Walter and Eliza Hall Institute of Medical Research

A simple pressure pump, made from balloons and nylon stockings, means more people in more places will be able to test water contaminants and blood samples.

The ingenious device, unveiled in the prestigious journal Lab on a Chip, costs just $2 to make, yet works almost as well as its expensive and cumbersome lab counterparts.

Pumps are used to make biological samples flow through microfluidic devices while their contents are identified beneath a microscope.

This DIY pump came from a collaboration between researchers at RMIT University and the Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia, who demonstrated its viability in tests to detect aquatic parasites and cancer cells and to study vascular diseases.

Inspired by football

Study lead author and RMIT engineer, Dr Peter Thurgood, said the team took inspiration for the simple invention from footballs, which hold large pressures when reinforced.

"We started with basic latex balloons, then realised that regular stockings made from nylon and elastane could be a perfect match to reinforce them, allowing them to hold significantly higher pressure and function as pumps," Thurgood said.

"By simply wrapping three layers of stockings around the latex balloon we were able to increase its internal pressure by a factor of 10 - enough to run many water or blood analyses that would usually require large, expensive pumps."

Experiments showed the reinforced balloon pump could be used to operate microfluidic devices for several hours without a significant pressure loss.

The pump also fits easily within an incubator and can be left overnight.
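
To get a feel for why a tenfold pressure boost matters, here is a back-of-envelope Hagen-Poiseuille estimate of the flow such a pressure source could drive through a small channel. The channel geometry and the assumed balloon pressure are hypothetical, not dimensions or measurements from the paper.

```python
import math

delta_p = 20e3      # balloon pressure above ambient, Pa (hypothetical)
radius = 50e-6      # channel radius, m (hypothetical)
length = 0.03       # channel length, m (hypothetical)
mu = 1e-3           # viscosity of water, Pa*s

# Hagen-Poiseuille: volumetric flow through a circular channel
q = math.pi * radius**4 * delta_p / (8 * mu * length)
print(f"{q * 1e9 * 60:.0f} microlitres per minute")   # roughly 100 uL/min here
```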

A low-cost field tool where it's needed most

Study co-author and parasitologist at the Walter and Eliza Hall Institute, Associate Professor Aaron Jex, is a leading researcher in global water quality and public health interventions.

He said this simple innovation opened exciting opportunities in field water testing and the ability to test and diagnose patients for infectious pathogens and aquatic micro-organisms at the point-of-care.

"Parasitic micro-organisms have a major impact in impoverished communities in tropical and subtropical regions globally, but also in developed countries including Australia," Jex said.

"In order to address this there is an urgent need for field-based, low-cost diagnostic tools that work in challenging, sometimes remote and often complex environments very different from a pristine laboratory."

"As simple as it may look, this device suits those needs really well and could have a big impact."

Co-author and RMIT biologist, Dr Sara Baratchi, said it also had promising applications for early diagnosis of diseases at home or in the doctor's surgery.

The balloon pump was tested as a point-of-care diagnostic device for detection of very low concentrations of target cancer cells in liquid samples, and found to work.

"The hydrodynamic force of liquid produced by the reinforced balloon was enough to isolate cells for study, which was really amazing for a $2 pump!"

Baratchi is now working on applying the simplified pump technology to develop organ-on-chip systems that mimic the flow conditions in dysfunctional vessels, to better understand diseases like atherosclerosis that lead to heart attack and stroke.

An opportunity for outreach

RMIT engineer and project leader, Dr Khashayar Khoshmanesh, is a leading researcher in the field of microfluidic based lab-on-a-chip technologies.

He said while microfluidics had made significant progress over the past decade, their widespread application had been limited by the cost and bulk of pumps required to operate them.

"Simplicity is at the heart of our entire research program. By redesigning sophisticated microfluidic devices into simplified ones, we can maximise their outreach and applications for use in teaching or research in the field, not just in sophisticated labs," he said.

"We envisage these types of pumps also being suitable for student classwork experiments to support capability development in this important area of research from an earlier stage."

Credit: 
RMIT University

Who dominates the discourse of the past?

image: Pictured here: Shannon Tushingham and Tiffany Fulkerson, Washington State University archaeologists.

Image: 
WSU

Male academics, who comprise less than 10 percent of North American archaeologists, write the vast majority of the field's high impact, peer-reviewed literature.

That's according to a new study in American Antiquity by Washington State University archaeologists Tiffany Fulkerson and Shannon Tushingham.

The two scientists set out to determine how a rapidly evolving demographic and professional landscape is influencing the production and dissemination of knowledge in American archaeology.

They found that women, who now make up half of all archaeologists in North America, and professionals working outside of a university setting, who account for 90 percent of the total workforce, were far less likely to publish in peer-reviewed journals.

"The underrepresentation of women and non-academics in peer-reviewed publications is in stark contrast to the landscape of archaeology as a whole, which is rich in gender and occupational diversity," said Fulkerson, a graduate student in the WSU Department of Anthropology and lead author of the study. "In effect, you have a very narrow demographic dominating the discourse of the past in North America."

An evolving field

North American archaeology has considerably evolved over the last several decades. Women now surpass men in the number of archaeology PhDs awarded. Additionally, the overwhelming majority of archaeologists in the U.S. are no longer employed at universities but rather work for government, Native American tribes and private agencies that specialize in cultural resource management (CRM).

In order to determine how well represented women and CRM archaeologists are in the field's literature, Fulkerson and Tushingham analyzed data on the gender and occupational affiliation of lead and secondary authors in three regional and three national publishing venues from 2000-2017. In total, data for 5,010 authors in 2,445 articles were compiled and analyzed.
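
A hypothetical sketch of the kind of tabulation this implies: one row per authorship with gender, occupational setting, journal and author position, aggregated into per-journal shares. The column names and the three example rows are invented; they only show the shape of the analysis, not the study's data.

```python
import pandas as pd

# Invented example rows; the real dataset had 5,010 authorships across 2,445 articles.
authors = pd.DataFrame({
    "journal":  ["American Antiquity", "American Antiquity", "SCA Proceedings"],
    "position": ["lead", "secondary", "lead"],
    "gender":   ["woman", "man", "woman"],
    "setting":  ["academic", "CRM", "CRM"],
})

leads = authors[authors["position"] == "lead"]
summary = leads.groupby("journal").agg(
    pct_women=("gender", lambda s: 100 * (s == "woman").mean()),
    pct_non_academic=("setting", lambda s: 100 * (s != "academic").mean()),
)
print(summary)
```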

The results of their analysis reveal a considerable dearth of women authors in the regional and national peer-reviewed literature.

For example, in American Antiquity, the flagship journal of the largest archaeology society in the world, women accounted for a mere 26.6 percent of lead authors, a proportion comparable to the two regional, peer-reviewed journals analyzed in the study.

CRM professionals were similarly under-represented in the peer-reviewed literature. The researchers found less than 20 percent of first authors in the two national journals analyzed in the study, American Antiquity and the American Archaeological Record, worked outside of a university.

Fulkerson and Tushingham write that one of the main reasons for the stark publishing gap is few women and CRM professionals have positions that afford the time and opportunity to publish.

"Similar to other STEM fields, there is an attrition of women as they advance through the professional ranks," said Tushingham, assistant professor in the WSU Department of Anthropology. "The loss of women throughout the career pipeline results in fewer women securing positions that offer the time and opportunity to publish. For CRM archaeologists, there is a similar lack of time, resources and incentives to publish."

Rather than disseminating their research in peer-reviewed forums, many women and CRM archaeologists pursue alternative strategies of communicating their work, namely through non-peer-reviewed venues that require less time to publish.

The two non-peer-reviewed journals included in the study, The SAA Archaeological Record and The SCA Proceedings, had far greater gender and occupational parity than the peer-reviewed publications.

Why does it matter?

Fulkerson and Tushingham write that limiting who tells narratives of the past to a select group of people is problematic because it inhibits the multitude of voices and knowledge that make up North America's archaeological heritage.

Additionally, under the current system, the vast majority of work being done by North American archeologists is hard for the broader archaeology community and public to access.

"This is because the vast majority of archaeology practitioners are working in the compliance sector, and we have established these people are far less likely to publish than those in academic settings," Fulkerson said. "Archaeology work that is conducted in compliance settings is frequently only reported on in technical reports, which are often not widely accessible to other archaeologists, let alone the public."

Moving forward

As in other STEM fields, progress towards a more multivocal archaeology will require a dissolution of the structural, institutional, and ideological barriers that keep not only women and non-academics but also marginalized groups, including People of Color and LGBTQ people, from being strong participants in publishing.

In their paper, Fulkerson and Tushingham explore in detail numerous solutions that range from increased leadership opportunities for women and marginalized peoples to encouraging CRM professionals to submit technical reports and other non-published works to digital repositories to increase the visibility of their work.

"Online repositories such as the Digital Archaeological Record (tDAR) and The Grey Literature Network Service (GreyNet) are excellent venues for women and non-academics to get their research out," Fulkerson said. "Professional societies and other organizations might also consider creating regionally specific online resources in order to facilitate engagement with the broader community."

Credit: 
Washington State University

Oddball edge wins nanotube faceoff

image: Rice University researchers have determined that an odd, two-faced 'Janus' edge is more common than previously thought for carbon nanotubes growing on a rigid catalyst. The conventional nanotube at left has facets that form a circle, allowing the nanotube to grow straight up from the catalyst. But they discovered the nanotube at right, with a tilted Janus edge that has segregated sections of zigzag and armchair configurations, is far more energetically favored when growing carbon nanotubes via chemical vapor deposition.

Image: 
Illustration by Evgeni Penev/Rice University

HOUSTON - (July 29, 2019) - When is a circle less stable than a jagged loop? Apparently when you're talking about carbon nanotubes.

Rice University theoretical researchers have discovered that nanotubes with segregated sections of "zigzag" and "armchair" facets growing from a solid catalyst are far more energetically stable than a circular arrangement would be.

Under the right circumstances, they reported, the interface between a growing nanotube and its catalyst can reach its lowest-known energy state via the two-faced "Janus" configuration, with a half-circle of zigzags opposite six armchairs.

The terms refer to the shape of the nanotube's edge: A zigzag nanotube's end looks like a saw tooth, while an armchair is like a row of seats with armrests. They are the basic edge configurations of the two-dimensional honeycomb of carbon atoms known as graphene (as well as other 2D materials) and determine many of the materials' properties, especially electrical conductivity.

The Brown School of Engineering team of materials theorist Boris Yakobson, researcher and lead author Ksenia Bets and assistant research professor Evgeni Penev reported their results in the American Chemical Society journal ACS Nano.

The theory is a continuation of the team's discovery last year that Janus interfaces are likely to form on a catalyst of tungsten and cobalt, leading to a single chirality, called (12,6), that other labs had reported growing in 2014.
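
For readers unfamiliar with chiral indices such as (12,6), the short sketch below evaluates the standard textbook relations for a nanotube's diameter and chiral angle (0 degrees is zigzag, 30 degrees is armchair); these formulas are general results, not calculations from the paper.

```python
import math

def nanotube_geometry(n, m, a=0.246):
    """Diameter (nm) and chiral angle (degrees) for a carbon nanotube with
    chiral indices (n, m); a is the graphene lattice constant in nm."""
    d = a * math.sqrt(n**2 + n * m + m**2) / math.pi
    theta = math.degrees(math.atan(math.sqrt(3) * m / (2 * n + m)))
    return d, theta

print(nanotube_geometry(12, 6))   # ~ (1.24 nm, 19.1 deg): between zigzag and armchair
print(nanotube_geometry(10, 0))   # zigzag tube: chiral angle 0 deg
print(nanotube_geometry(6, 6))    # armchair tube: chiral angle 30 deg
```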

The Rice team now shows such structures aren't unique to a specific catalyst, but are a general characteristic of a number of rigid catalysts. That's because the atoms attaching themselves to the nanotube edge always seek their lowest energy state, and happen to find it in the Janus configuration the researchers named AZ.

"People have assumed in studies that the geometry of the edge is a circle," Penev said. "That's intuitive -- it's normal to assume that the shortest edge is the best. But we found for chiral tubes the slightly elongated Janus edge allows it to be in much better contact with solid catalysts. The energy for this edge can be quite low."

In the circle configuration, the flat armchair bottoms rest on the substrate, providing the maximum number of contacts between the catalyst and the nanotube, which grows straight up. (Janus edges force them to grow at an angle.)

Carbon nanotubes -- long, rolled-up tubes of graphene -- are difficult enough to see with an electron microscope. As yet there's no way to observe the base of a nanotube as it grows from the bottom up in a chemical vapor deposition furnace. But theoretical calculations of the atom-level energy that passes between the catalyst and the nanotube at the interface can tell researchers a lot about how they grow.

That's a path the Rice lab has pursued for more than a decade, pulling at the thread that reveals how minute adjustments in nanotube growth can change the kinetics, and ultimately how nanotubes can be used in applications.

"Generally, the insertion of new atoms at the nanotube edge requires breaking the interface between the nanotube and the substrate," Bets said. "If the interface is tight, it would cost too much energy. That is why the screw dislocation growth theory proposed by Professor Yakobson in 2009 was able to connect the growth rate with the presence of kinks, the sites on the nanotube edge that disrupt the tight carbon nanotube-substrate contact.

"Curiously, even though Janus edge configuration allows very tight contact with the substrate it still preserves a single kink that would allow continuous nanotube growth, as we demonstrated last year for the cobalt tungsten catalyst," Bets said.

Bets ran extensive computer simulations to model nanotubes growing on three rigid catalysts that showed evidence of Janus growth and one more "fluid" catalyst, tungsten carbide, that did not. "The surface of that catalyst is very mobile, so the atoms can move a lot," Penev said. "For that one, we did not observe a clear segregation."

Yakobson compared Janus nanotubes to the Wulff shape of crystals. "It's somewhat surprising that our analysis suggests a restructured, faceted edge is energetically favored for chiral tubes," he said. "Assuming that the lowest energy edge must be a minimal-length circle is like assuming that a crystal shape must be a minimal-surface sphere, but we know well that 3D shapes have facets and 2D shapes are polygons, as epitomized by the Wulff construction.

"Graphene has by necessity several 'sides,' but a nanotube cylinder has one rim, making the energy analysis different," he said. "This raises fundamentally interesting and practically important questions about the relevant structure of the nanotube edges."

The Rice researchers hope their discovery will advance them along the path toward those answers. "The immediate implication of this finding is a paradigm shift in our understanding of growth mechanisms," Yakobson said. "That may become important in how one practically designs the catalyst for efficient growth, especially of controlled nanotube symmetry type, for electronic and optical utility."

Credit: 
Rice University

A catalyst for sustainable methanol

image: The technology makes it possible to recycle CO2 and produce methanol from it.

Image: 
ETH Zurich / Matthias Frei

The global economy still relies on the fossil carbon sources of petroleum, natural gas and coal, not just to produce fuel, but also as a raw material used by the chemical industry to manufacture plastics and countless other chemical compounds. Although efforts have been made for some time to find ways of manufacturing liquid fuels and chemical products from alternative, sustainable resources, these have not yet progressed beyond niche applications.

Scientists at ETH Zurich have now teamed up with the French oil and gas company Total to develop a new technology that efficiently converts CO2 and hydrogen directly into methanol. Methanol is regarded as a commodity or bulk chemical. It is possible to convert it into fuels and a wide variety of chemical products, including those that today are mainly based on fossil resources. Moreover, methanol itself has the potential to be utilised as a propellant, in methanol fuel cells, for example.

Nanotechnology

The core of the new approach is a chemical catalyst based on indium oxide, which was developed by Javier Pérez-Ramírez, Professor of Catalysis Engineering at ETH Zurich, and his team. Just a few years ago, the team successfully demonstrated in experiments that indium oxide was capable of catalysing the necessary chemical reaction. Even at the time, it was encouraging that doing so generated virtually only methanol and almost no by-products other than water. The catalyst also proved to be highly stable. However, indium oxide was not sufficiently active as a catalyst; the large quantities needed prevent it from being a commercially viable option.
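
The overall reaction being catalysed is the standard hydrogenation of CO2 to methanol, which is consistent with water being the only significant by-product mentioned above:

```latex
\[
  \mathrm{CO_2} + 3\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_3OH} + \mathrm{H_2O}
\]
```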

The team of scientists have now succeeded in boosting the activity of the catalyst significantly, without affecting its selectivity or stability. They achieved this by treating the indium oxide with a small quantity of palladium. "More specifically, we insert some single palladium atoms into the crystal lattice structure of the indium oxide, which anchor further palladium atoms to its surface, generating tiny clusters that are essential for the remarkable performance," explains Cecilia Mondelli, a lecturer in Pérez-Ramírez's group. Pérez-Ramírez points out that, with the aid of advanced analytical and theoretical methods, catalysis may now be considered nanotechnology, and in fact, the project clearly shows this to be the case.

The closed carbon cycle

"Nowadays, deriving methanol on an industrial scale is done exclusively from fossil fuels, with a correspondingly high carbon footprint," Pérez-Ramírez says. "Our technology uses CO2 to produce methanol." This CO2 may be extracted from the atmosphere or - more simply and efficiently - from the exhaust discharged by combustion power plants. Even if fuels are synthesised from the methanol and subsequently combusted, the CO2 is recycled and thus the carbon cycle is closed.

Producing the second raw material, hydrogen, requires electricity. However, the scientists point out that if this electricity comes from renewable sources such as wind, solar or hydropower energy, it can be used to make sustainable methanol and thus sustainable chemicals and fuels.

Compared to other methods that are currently being applied to produce green fuels, Pérez-Ramírez continues, this technology has the great advantage that it is almost ready for the market. ETH Zurich and Total have jointly filed a patent for the technology. Total now plans to scale up the approach and potentially implement the technology in a demonstration unit over the next few years.

Credit: 
ETH Zurich