Tech

When kinetics and thermodynamics should play together

image: Young-Shin Jun, professor in the Department of Energy, Environmental & Chemical Engineering at the McKelvey School of Engineering

Image: 
Washington University in St. Louis

The formation of calcium carbonate (CaCO3) in water has ramifications for everything from food and energy production to human health and the availability of drinkable water. But in the context of today's environment, simply studying how calcium carbonate forms in pure water isn't helpful.

Researchers at Washington University in St. Louis's McKelvey School of Engineering have pioneered cutting-edge methods to study the formation of calcium carbonate in saline water. Their results, recently published in the Journal of Physical Chemistry C, suggest that, without considering kinetic factors, we may have been overestimating how fast calcium carbonate forms in saline environments.

"Now more than ever, it is important to understand how minerals form under highly saline conditions," said Young-Shin Jun, professor in the Department of Energy, Environmental & Chemical Engineering. As urban areas spread, more and more fresh water is lost to the oceans through runoff.  An increased production of briny water is also being seen in industrial and energy harvesting processes, such as desalination and hydraulic fracturing.

Jun's group began with a philosophical question: At what point in the coming together of calcium and carbonate ions does calcium carbonate actually "form"?

"People often casually say 'formation' when they refer to the 'growth' of solids, but formation actually starts earlier, at the nucleation stage," Jun said. "Nucleation begins at the moment when all of the precursor parts have fallen into place, reaching a critical mass that creates a nucleus that is big enough and stable enough to continue to grow as calcium carbonate solids."

Nucleation is, unsurprisingly, difficult to observe because it happens at the nanoscale. Hence, the process is often simply assumed to have taken place. Rather than treating nucleation as a separate phenomenon, researchers have traditionally put more effort into understanding growth.

Working at the Advanced Photon Source at Argonne National Laboratory in northern Illinois with a powerful synchrotron-based X-ray scattering method known as grazing incidence small angle X-ray scattering (GISAXS), Jun's lab has created unique environmental reaction cells and observed real-time nucleation events in aqueous environments. They can see the moment of nucleation, which allows them to closely compare rates of nucleation in waters of different salinities.

The concentration of salt in water varies widely; seawater has about 35 grams of salt per liter, while water used in hydraulic fracturing (or fracking) contains even higher concentrations of salts. However, without considering salinity, most studies have explored how the mineral interacts with the substrate on which it grows -- for instance, what is a water pipe or a membrane made of, and how does that material affect the formation of calcium scales?

But those aren't the only important interactions.

"We need to add salinity into this matrix," Jun said. "How does saline water chemistry affect nucleation? It doesn't happen in a vacuum."

An important relationship in determining the likelihood of nucleation is the balance between the thermodynamics and kinetics of the particular system. Thermodynamically, an energy barrier must be overcome to drive nucleation; if that barrier (governed largely by the interfacial energy) is sufficiently low, nucleation can occur spontaneously.

Kinetics refers to the motions of the sub- and nanometer-sized building blocks (precursors) that may or may not reach that critical mass (called the critical nucleus size) and go on to grow as calcium carbonate. As with nucleation itself, observing the kinetics of these particles is difficult. Historically, the kinetic factor was considered to be less important than the thermodynamic parameter, and was assumed to be a constant. But is this true even for highly saline water?

"People have thought that kinetics is not important because it should be the same, no matter what," Jun said. But using GISAXS, Jun and her former doctoral student Qingyun Li, now at Stanford University, were able to quantitatively describe the relationship between the kinetic factor (J0) and thermodynamic parameter (interfacial energy, α) of calcium carbonate nucleation, using quartz as the substrate. Critically, they were able to test it in water with varying salinities.

It turns out that in water with high salinity, the interfacial energy is lower than in pure water, which means nucleation can happen more easily. However, the kinetic factor -- which reflects how quickly the building blocks are delivered -- is smaller.

"If we account only for thermodynamics when we predict the system, we're overestimating the rate of nucleation. The impact of kinetic factors should be included," Jun said.

This impact is important for a host of reasons beyond simply having a better basic understanding of mineral formation.

"Unprecedented socioeconomic development has accelerated our fresh water needs," Jun said. "Also, a large volume of super-saline water is generated from water and energy recovery sites, such as desalination plants and conventional/unconventional oil and gas recovery using hydraulic fracturing.

"Thus, to design sustainable water and energy production systems, we urgently need a good understanding of how highly saline water can affect calcium carbonate nucleation, which can reduce their process efficiencies," Jun said.

"It is an exciting finding. By changing the kinetics and thermodynamics, we can design a surface to prevent nucleation. By knowing when and where the nucleation happens, we can prevent or reduce it, extending the lifetime of pipelines or water purification membranes.

"Conversely, we can also increase nucleation where we need it, such as in geologic CO2 storage," she said. "This basic understanding gives us power and control."

Credit: 
Washington University in St. Louis

Stanford study shows how to improve production at wind farms

image: Four of the turbines on a TransAlta Renewables wind farm in Alberta, Canada that were used for the wake-steering experiment. The truck in the lower left corner of the photo gives a sense of the wind turbines' size.

Image: 
Calgary Drone Photography for Stanford University

What's good for one is not always best for all.

Solitary wind turbines produce the most power when pointing directly into the wind. But when tightly packed lines of turbines face the wind on wind farms, wakes from upstream generators can interfere with those downstream. Like a speedboat slowed by choppy water from a boat in front, the wake from a wind turbine reduces the output of those behind it.

Pointing turbines slightly away from oncoming wind - called wake-steering - can reduce that interference and improve both the quantity and quality of power from wind farms, and probably lower operating costs, a new Stanford study shows.

"To meet global targets for renewable energy generation, we need to find ways to generate a lot more energy from existing wind farms," said John Dabiri, professor of civil and environmental engineering and of mechanical engineering and senior author of the paper. "The traditional focus has been on the performance of individual turbines in a wind farm, but we need to instead start thinking about the farm as a whole, and not just as the sum of its parts."

Turbine wakes can reduce the efficiency of downwind generators by more than 40 percent. Previously, researchers had used computer simulations to show that misaligning turbines from the prevailing winds could raise the production of downstream turbines. However, demonstrating this on a real wind farm had been hindered by the challenges of finding an operator willing to halt normal operations for an experiment and of calculating the best angles for the turbines - until now.

First, the Stanford group developed a faster way to calculate the optimal misalignment angles for turbines, which they described in a study published July 1 in Proceedings of the National Academy of Sciences.

Then, they tested their calculations on a wind farm in Alberta, Canada in collaboration with operator TransAlta Renewables. The overall power output of the farm increased by up to 47 percent in low wind speeds - depending on the angle of the turbines - and by 7 to 13 percent in average wind speeds. Wake steering also reduced the ebbs and flows of power that are normally a challenge with wind power.

"Through wake steering, the front turbine produced less power as we expected," said mechanical engineering PhD student Michael Howland, lead author on the study. "But we found that because of decreased wake effects, the downstream turbines generated significantly more power."

Variability

Variable output by wind farms makes managing the grid more difficult in two important ways.

One is the need for backup power supplies, like natural gas-fired power plants and large, expensive batteries. In the new study, the power improvement at low wind speeds was particularly high because turbines typically stop spinning below a minimum speed, cutting production entirely and forcing grid managers to rely on backup power. In slow winds, wake-steering reduced the amount of time that speeds dropped below this minimum, the researchers found. Notably, the biggest gains were at night, when wind energy is typically most valuable as a complement to solar power.

The other is the need to match exactly the amount of electricity supplied and used in a region every moment to keep the grid reliable. Air turbulence from wakes can make wind farm production erratic minute by minute - a time period too short to fire up a gas generator. This makes matching supply and demand more challenging for system operators in the very short term. They have tools to do so, but the tools can be expensive. In the study, wake steering reduced the very short-term variability of power production by up to 72 percent.

Additionally, reducing variability can help wind farm owners lower their operating costs. Turbulence in wakes can strain turbine blades and raise repair costs. Although the experiment did not last long enough to prove that wake steering reduces turbine fatigue, the researchers suggested this would happen.

"The first question that a lot of operators ask us is how this will affect the long-term structural health of their turbines," Dabiri said. "We're working on pinpointing the exact effects, but so far we have seen that you can actually decrease mechanical fatigue through wake steering."

Modeling and long-term viability

To calculate the best angles of misalignment for this study, the researchers developed a new model based on historical data from the wind farm.

"Designing wind farms is typically a very data and computationally intensive task," said Sanjiva Lele, a professor of aeronautics and astronautics, and of mechanical engineering. "Instead, we established simplified mathematical representations that not only worked but also reduced the computational load by at least two orders of magnitude."

This faster computation could help wind farm operators use wake steering widely.

"Our model is essentially plug-and-play because it can use the site-specific data on wind farm performance," Howland said. "Different farm locations will be able to use the model and continuously adjust their turbine angles based on wind conditions."

Although the researchers were unable to measure a change in annual power production because of the limited 10-day duration of this field test, the next step, said Dabiri, is to run field tests for an entire year.

"If we can get to the point where we can deploy this strategy on a large-scale for long periods of time, we can potentially optimize aerodynamics, power production and even land-use for wind farms everywhere," said Dabiri.

Credit: 
Stanford University

New data resource reveals highly variable staffing at nursing homes

Researchers who analyzed payroll-based staffing data for U.S. nursing homes discovered large daily staffing fluctuations, low weekend staffing and daily staffing levels that often fall well below the expectations of the Centers for Medicare and Medicaid Services (CMS), all of which can increase the risk of adverse events for residents.

A study published in the July issue of Health Affairs paints a clear picture of the staffing levels of nurses and direct care staff at nursing homes based on a new CMS data resource, the Payroll-Based Journal (PBJ). CMS has been collecting data from nursing homes since 2016 to meet a requirement of the Affordable Care Act, and PBJ data have been used in the federal Five-Star Quality Rating System for Nursing Homes since April 2018.

Medicare has no minimum staff-to-resident ratio standard for nursing homes, and the only staffing requirements are that a registered nurse (RN) must be present for eight hours a day, or the equivalent of one shift, and an RN or licensed practical nurse (LPN) must be present at a facility at all times.

As part of its quality rating system, CMS compares nursing homes' actual staffing to expected levels based on the acuity of residents in the facility. Using PBJ data from more than 15,000 nursing homes, the research team discovered that 54% of facilities met the expected level of staffing less than 20% of the time during the one-year study period. For registered nurse staffing, 91% of facilities met the expected staffing level less than 60% of the time.

"Staffing in the nursing home is one of the most tangible and important elements to ensure high quality care," said study co-author David Stevenson, PhD, a Health Policy professor at Vanderbilt University Medical Center. "Anyone who has ever set foot in a nursing home knows how important it is to have sufficient staffing, something the research literature has affirmed again and again. As soon as these new data became available, researchers and journalists started investigating them, and the government now uses the PBJ data in its quality rating system."

Stevenson, along with Harvard University colleagues David Grabowski, PhD, professor of Health Care Policy, and lead author Fangli Geng, a Health Policy PhD student, also looked at day-to-day staffing fluctuations over the one-year period, and the findings were troubling, he said.

Relative to weekday staffing, the PBJ data showed a large drop in weekend staffing in every staffing category. On average, weekend staffing time per resident day was just 17 minutes for RNs, nine minutes for LPNs and 12 minutes for nurse aides (NAs).
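
As a rough illustration of how payroll records become the staffing measures quoted above, the sketch below aggregates hypothetical daily rows into minutes per resident day by staff category, split by weekday and weekend. The field layout and numbers are invented for the example and do not reflect the actual PBJ schema or data.

    from collections import defaultdict
    from datetime import date

    # Hypothetical daily payroll rows: (day, staff category, total paid hours, resident census).
    # These values and this layout are invented for illustration; they are not real PBJ data.
    records = [
        (date(2017, 7, 1), "RN", 16.0, 90),    # a Saturday
        (date(2017, 7, 1), "NA", 180.0, 90),
        (date(2017, 7, 3), "RN", 40.0, 92),    # a Monday
        (date(2017, 7, 3), "NA", 230.0, 92),
    ]

    totals = defaultdict(lambda: [0.0, 0.0])   # (category, weekend?) -> [staff hours, resident days]
    for day, category, hours, census in records:
        key = (category, day.weekday() >= 5)
        totals[key][0] += hours
        totals[key][1] += census

    for (category, is_weekend), (hours, resident_days) in sorted(totals.items()):
        label = "weekend" if is_weekend else "weekday"
        print(f"{category} ({label}): {60.0 * hours / resident_days:.1f} minutes per resident day")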

Unlike previous nursing home staffing data that was self-reported by facilities and covered only a narrow window of time around a facility's annual recertification survey, PBJ data are linked to daily payroll information for several staff categories and cover the entire year. This distinction is critical, the researchers wrote, because the older data were "subject to reporting bias" and "rarely audited to ensure accuracy."

"We found that the newer payroll data showed lower staffing levels than the previous self-reported data," said Grabowski. "The lower levels in the PBJ data likely reflect both the fact that they are based on payroll records as opposed to self-report, and also that staffing levels were abnormally high around the time of the inspection. In fact, the PBJ data clearly show this bump, followed by a return to normal after inspectors leave."

The new PBJ data offer a more transparent and accurate view of nursing home staffing, and Grabowski is hopeful future research will be better positioned to understand the implications of staffing fluctuations on residents' well-being. Further, he noted that "these new staffing data also offer tools for regulators and other oversight agencies to monitor what nursing homes are doing day in and day out."

Although current and future residents and their families might not be inclined to dive into the PBJ data, staffing information for all Medicare- and Medicaid-certified nursing homes is available on Medicare's Nursing Home Compare website, as well as a wealth of other information searchable by city and by facility, such as inspection reports and other quality measures.

"Hopefully, the general public will gain a broader awareness of the information that is available, not only on staffing but on other aspects of nursing home care," Stevenson said. "The only way nursing homes will change their behavior is if there is value in doing so. Some of that can come through the pressure of regulators, but it also needs to come from incentives in the marketplace, notably from expectations of current and future residents and their families."

Credit: 
Vanderbilt University Medical Center

NETRF-funded research may help predict pancreatic neuroendocrine tumor (pNET) recurrence

image: Dr. Ramesh Shivdasani finds a clinical marker for non-functional pancreatic neuroendocrine tumor metastasis

Image: 
Dana-Farber Cancer Institute

A group of researchers funded by Neuroendocrine Tumor Research Foundation (NETRF) has discovered molecular information that may help predict recurrence of non-functional pancreatic neuroendocrine tumors (pNETs), which do not release excess hormones into the bloodstream. In a paper published today in Nature Medicine, the researchers describe new subtypes of pNETs with vastly different risks of recurrence.

Lead investigator Ramesh Shivdasani, MD, PhD, Dana-Farber Cancer Institute, Harvard Medical School, said the finding moves us closer to being able to identify patients with a high risk for metastasis at diagnosis and initial treatment. "These patients can be monitored vigilantly for recurrent cancers, which may be treatable if detected early, while patients with the less aggressive kind of pNET can be advised that the prognosis is excellent."

"This significant research is a result of the collaborative spirit NETRF fosters among our funded researchers. Drs. Ramesh Shivdasani and Bradley Bernstein assembled a top team of scientists who shared knowledge and resources, to advance our understanding of NETs that can help us improve care for those facing the highest risks," said Elyse Gellerman, NETRF chief executive officer.

Currently, physicians predict a patient's risk of pNET recurrence using tumor size. Non-functional pNETs larger than 2 centimeters are considered the most likely to metastasize following surgery.

Building on molecular findings in about a dozen pNETs, Shivdasani and colleagues analyzed the molecular profiles of another 142 pNET tumor specimens using new laboratory tests for expression of specific proteins. Shivdasani notes that the findings divided pNETs, sharply and unexpectedly, into roughly equal fractions of those that resemble normal alpha cells and express the regulatory protein ARX and others that resemble beta cells and express the regulatory protein PDX1.

The researchers were able to analyze data on tumor relapse for most patients whose tumor specimens were included in the study. Tumors with exclusive ARX expression had more than 35% risk of recurrence following surgery, compared to less than 5% risk if the tumor lacked ARX. Among study participants whose tumors showed high ARX levels, cancer recurred in the liver within 1 to 4 years, compared to the rare recurrence of tumors that expressed PDX1.

ARX and PDX1 levels can be measured using immunohistochemistry (IHC), a test that stains tumor tissue and is routine in clinical laboratories. Current IHC assays do not test for these proteins, but the researchers note that they could easily be brought into routine diagnostic testing in a matter of months. Should the findings of this study be corroborated in future clinical research, the prognostic impact for patients with a new pNET diagnosis will be significant.

The next steps are to make the distinction of the new pNET subtypes possible in clinical laboratories and to confirm the findings in larger groups of patients.

This laboratory study is an early step in identifying prognostic markers for non-functioning pNETs. Medical research often starts in the laboratory and then takes years to move into clinical testing in humans. Because IHC assays for ARX and PDX1 can be developed readily, the new findings could be implemented into routine patient care considerably faster.

Credit: 
Neuroendocrine Tumor Research Foundation

Scientists discover processes to lower methane emissions from animals

image: Professor Greg Cook.

Image: 
University of Otago

University of Otago scientists are part of an international research collaboration which has made an important discovery in the quest to lower global agricultural methane emissions.

Professor Greg Cook, Dr Sergio Morales, Dr Xochitl Morgan, Rowena Rushton-Green and PhD student Cecilia Wang, all from the Department of Microbiology and Immunology, are members of the Global Research Alliance on Agricultural Greenhouse Gases that has identified new processes that control methane production in the stomach of sheep and similar animals like cattle and deer.

Specifically, they determined the microbes and enzymes that control supply of hydrogen, the main energy source for methane producing microbes (methanogens).

Professor Cook explains the discovery is important because methane emissions from animals account for about a third of New Zealand's emissions.

"Much of our work to date has focused on the development of small molecule inhibitors and vaccines to specifically target the production of methane by methanogens.

"This new work provides an alternative strategy where we can now begin to target the supply of hydrogen to methanogens as a new way of reducing animal methane emissions."

While the breakthrough research was recently published in The ISME Journal (the journal of the International Society for Microbial Ecology), Professor Cook says both he and Dr Morales have been working since 2012 with the Ministry for Primary Industries in support of the Global Research Alliance on a number of programmes to control greenhouse gas emissions.

The international collaboration also involved researchers from AgResearch (New Zealand), Monash University (Australia), and the Universities of Illinois (USA) and Hokkaido (Japan). Former Otago PhD student Chris Greening, now an associate professor in Monash University's School of Biological Sciences, led the study.

Dr Morales says previous research had already shown that microbes play an important part in controlling methane levels. Now for the first time researchers understand why.

The researchers studied two types of sheep - those producing high amounts of methane and those producing less. They found that the most active hydrogen-consuming microbes differed between the two groups. Importantly, in the low-methane-emitting sheep, hydrogen-consuming bacteria that do not produce methane dominated.

Their findings lay the foundation for strategies to reduce methane emissions by controlling hydrogen supply. One strategy is to introduce feed supplements that encourage non-methane producers to outcompete methanogens.

"Controlling the supply of hydrogen to the methanogens will lead to reduced methane emissions and allow us to divert the hydrogen towards other microbes that we know do not make methane," Dr Morales explains.

Credit: 
University of Otago

International team of comet and asteroid experts agrees on natural origin for Oumuamua

image: This artist's impression shows the first interstellar object discovered in the Solar System, Oumuamua. Observations made with the NASA/ESA Hubble Space Telescope, CFHT, and others, show that the object is moving faster than predicted while leaving the Solar System.

The inset shows a color composite produced by combining 192 images obtained through three visible and two near-infrared filters totaling 1.6 hours of integration on October 27, 2017, at the Gemini South telescope.

Image: 
ESA/Hubble, NASA, ESO/M. Kornmesser, Gemini Observatory/AURA/NSF

A team of international asteroid and comet experts now agree that Oumuamua, the first recorded interstellar visitor, has natural origins, despite previous speculation by some other astronomers that the object could be an alien spacecraft sent from a distant civilization to examine our star system.

A review of all the available evidence by an international team of 14 experts, including Robert Jedicke and Karen Meech of the University of Hawaii's Institute for Astronomy (IfA), strongly suggests that Oumuamua has a purely natural origin. The research team reported its findings in the July 1, 2019, issue of Nature Astronomy.

"While Oumuamua's interstellar origin makes it unique, many of its other properties are perfectly consistent with objects in our own solar system," said Jedicke. Oumuamua's orbit and its path through our solar system matches a prediction published in a scientific journal by Jedicke and his colleagues half a year before Oumuamua's discovery.

"It was exciting and exhausting to coordinate all the Oumuamua observations with my co-authors from all around the world," said Meech. "It really was a 24-hour-a-day effort for the better part of two months. In that paper we established that Oumuamua rotates once in about seven hours and that it had a red color similar to many objects locked within our own solar system."

Based on changes in its apparent brightness as it rotated, the work showed that Oumuamua must have an extremely elongated shape, like a cigar or maybe a frisbee, unlike any known object in our solar system.
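
The step from brightness changes to shape is a short piece of arithmetic. For a simple ellipsoid viewed roughly equator-on, the ratio of the long axis a to the short axis b is constrained by the lightcurve amplitude Δm (in magnitudes) through the standard back-of-the-envelope relation below; this is a generic estimate, not a calculation taken from the paper, and it ignores viewing geometry and surface effects:

    \frac{a}{b} \gtrsim 10^{\,0.4\,\Delta m}

An amplitude of about 2.5 magnitudes, roughly what was reported for Oumuamua, therefore implies an axis ratio on the order of 10 to 1, far more elongated than typical solar system asteroids.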

Meech and other UH researchers were essential to another paper published in Nature a year ago that indicated Oumuamua is accelerating along its trajectory as it leaves our solar system. This behavior is typical of comets but astronomers have found no other visual evidence of the gas or dust emissions that create this acceleration.

"While it is disappointing that we could not confirm the cometary activity with telescopic observations, it is consistent with the fact that Oumuamua's acceleration is very small and must therefore be due to the ejection of just a small amount of gas and dust," Meech explained."

"We have never seen anything like Oumuamua in our solar system," said Matthew Knight of the University of Maryland. "Our preference is to stick with analogs we know, unless, or until we find something unique. The alien spacecraft hypothesis is a fun idea, but our analysis suggests there is a whole host of natural phenomena that could explain it."

The team of astronomers hailing from the U.S. and Europe met late last year at the International Space Science Institute (ISSI) in Bern, Switzerland, to critically assess all the available research and observations on Oumuamua and will meet again later this year. Their first priority was to determine whether there is any evidence to support the hypothesis that Oumuamua is a spacecraft built by an alien civilization.

"We put together a strong team of experts in various different areas of work on Oumuamua. This cross-pollination led to the first comprehensive analysis and the best big-picture summary to date of what we know about the object," Knight explained. "We tend to assume that the physical processes we observe here, close to home, are universal. And we haven't yet seen anything like Oumuamua in our solar system. This thing is weird and admittedly hard to explain, but that doesn't exclude other natural phenomena that could explain it."

The ISSI team considered all the available information in peer-reviewed scientific journals and paid special attention to the research published by IfA researchers. In particular, Meech's research paper in the journal Nature first reported on Oumuamua's discovery and characteristics in December 2017, just two months after the unusual object was identified by Pan-STARRS1 (Panoramic Survey Telescope and Rapid Response System) on Haleakala.

The ISSI team considered a number of mechanisms by which Oumuamua could have escaped from its home system. For example, the object could have been ejected by a gas giant planet orbiting another star. According to this theory, Jupiter created our own solar system's Oort cloud, a population of small objects only loosely gravitationally bound to our Sun in a gigantic shell extending to about a third of the distance to the nearest star. Some of the objects in our Oort cloud eventually make it back into our solar system as long period comets while others may have slipped past the influence of the Sun's gravity to become interstellar travelers themselves.

The research team expects that Oumuamua is just the first of many interstellar visitors discovered passing through our solar system, and they are collectively looking forward to data from the Large Synoptic Survey Telescope (LSST) that is scheduled to be operational in 2022. The LSST, located in Chile, may detect one interstellar object every year and allow astronomers to study the properties of objects from many other solar systems.

While ISSI team members hope that LSST will detect more interstellar objects, they think it is unlikely that astronomers will ever detect an alien spacecraft passing through our solar system and they are convinced that Oumuamua was a unique and extremely interesting but completely natural object.

Credit: 
University of Hawaii at Manoa

UCI, UC Merced: California forest die-off caused by depletion of deep-soil water

image: A massive tree die-off struck California's Sierra Nevada mountain range at the end of the 2012-2015 drought. Numerous stands, such as the one near Soaproot Saddle (pictured), suffered significant losses caused by the exhaustion of soil water supplies tapped by the trees' deep roots, a recent UCI, UC Merced study concluded.

Image: 
Margot Wholey / UC Merced

Irvine, Calif., July 1, 2019 - A catastrophic forest die-off in California's Sierra Nevada mountain range in 2015-2016 was caused by the inability of trees to reach diminishing supplies of subsurface water following years of severe drought and abnormally warm temperatures. That's the conclusion by researchers from the University of California, Irvine and UC Merced outlined in a study published today in Nature Geoscience.

"In California's mixed-conifer mountain forests, roots extend from five to 15 meters deep, giving trees access to deep-soil water," said co-author Michael Goulden, UCI professor of Earth system science. "This is what has historically protected trees against even the worst multi-year droughts."

But Goulden said the severity of California's 2012-2015 dry-spell exceeded this safety margin. Many forest stands exhausted accessible subsurface moisture, leading to widespread tree death.

From 2012 to 2015 the entire state of California experienced a crippling drought, and it was especially severe in the southern Sierra Nevada. With a multi-year combination of below-average precipitation and above-average warmth, the resulting drought was considered to be the most extreme in hundreds of years.

Observations by the U.S. Forest Service Aerial Detection Survey showed that many tree stands suffered nearly complete losses of mature conifers. Pines were especially hard hit, due to an infestation of bark beetles. A post-drought survey found that the tree mortality was greatest at lower elevations, with nearly 80 percent loss in 2016 compared to 2010.

Goulden and co-author Roger Bales, director of the Sierra Nevada Research Institute and Distinguished Professor of engineering at UC Merced, used field and remote sensing observations to examine tree communities at a variety of elevations and latitudes across the sprawling mountain range.

Their study outlines one of the key factors in the die-off - the coincidence of unusually dense vegetation, a prolonged period of well-below-average precipitation and warmer-than-usual temperatures. The heat and the density of trees and plants accelerated evapotranspiration (water lost to the atmosphere from soil and from plants' leaves), which caused thirsty trees to draw even more water from the ground.

The scientists concluded that there was a four-year period of moisture overdraft, meaning more water was being taken out of the soil than was being replaced by rain or snowfall.
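
That overdraft can be pictured as a simple running water balance. The sketch below uses invented annual numbers, not the study's data or model, to show how a multi-year excess of evapotranspiration over precipitation draws down stored subsurface water:

    # Illustrative annual water balance in millimeters; all values are invented.
    precipitation      = {2012: 600, 2013: 450, 2014: 500, 2015: 480}
    evapotranspiration = {2012: 750, 2013: 780, 2014: 760, 2015: 740}

    deficit = 0.0  # cumulative overdraft of subsurface water
    for year in sorted(precipitation):
        overdraft = evapotranspiration[year] - precipitation[year]
        # Recharge in a wet year (negative overdraft) reduces the deficit,
        # but the stored deficit can never fall below zero.
        deficit = max(0.0, deficit + overdraft)
        print(f"{year}: overdraft {overdraft:+.0f} mm, cumulative deficit {deficit:.0f} mm")

When the cumulative deficit exceeds what roots five to 15 meters deep can reach, trees lose the buffer that normally carries them through drought, which is the mechanism the study describes.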

"We expect climate change to further amplify evapotranspiration and ground moisture overdraft during drought," said Goulden. "This effect could result in a 15 to 20 percent increase in tree death during drought for each additional degree of warming."

With their improved understanding of the contributions of factors such as elevation, vegetation density, heat, precipitation and soil water amounts, the researchers said they now have a framework to diagnose and predict forest die-offs brought on by drought.

"Using readily available data, we can now predict where in mountain forests multi-year droughts are likely to have the greatest impact, and the threshold that those impacts are expected to occur," said Bales.

Credit: 
University of California - Irvine

Copper compound shows further potential as therapy for slowing ALS

CORVALLIS, Ore. - A compound with potential as a treatment for ALS has gained further promise in a new study that showed it improved the condition of mice whose motor neurons had been damaged by an environmental toxin known to cause features of ALS.

ALS patients are categorized either as familial - meaning two or more people in their family have had the disease, which in their case is linked to inherited genetic mutations - or sporadic, which accounts for about 90% of the cases. Sporadic means the cause or causes are unknown.

The research by Joe Beckman at Oregon State University and collaborators at the University of British Columbia builds on a 2016 study by Beckman in which the compound, copper-ATSM, halted familial ALS progression in transgenic mice for nearly two years, allowing them to approach their normal lifespan.

The animals had been genetically engineered to produce a mutation of an antioxidant protein, SOD (superoxide dismutase), that's essential to life when functioning properly but kills motor neurons when it lacks its zinc and copper co-factors and "unfolds." SOD mutations are present in 3% of ALS patients.

ALS, short for amyotrophic lateral sclerosis and also known as Lou Gehrig's disease, is caused by the deterioration and death of motor neurons in the spinal cord. It is progressive, debilitating and fatal.

ALS was first identified in the late 1800s and gained international recognition in 1939 when it was diagnosed in a mysteriously declining Gehrig, ending the Hall of Fame baseball career of the New York Yankees first baseman. Known as the Iron Horse for his durability - he hadn't missed a game in 15 seasons - Gehrig died two years later at age 37.

Scientists have developed an approach to treating ALS that's based on using copper-ATSM to deliver copper to specific cells in the spinal cord. Copper is a metal that helps stabilize the SOD protein and can also help improve mitochondria weakened by the disease.

The entire human body contains only about 100 milligrams of copper, the equivalent of 5 millimeters of household wiring.

"The damage from ALS is happening primarily in the spinal cord, one of the most difficult places in the body to absorb copper," said Beckman, distinguished professor of biochemistry and biophysics in the College of Science and principal investigator and holder of the Burgess and Elizabeth Jamieson Chair at OSU's Linus Pauling Institute. "Copper can be toxic, so its levels are tightly controlled in the body. The therapy we're working toward delivers copper selectively into the cells in the spinal cord that actually need it. Otherwise, the compound keeps copper inert."

In the mid-20th century, it was discovered that indigenous residents of Guam frequently developed an ALS-like disease, known as ALS-Parkinsonism dementia complex (ALS-PDC). Its onset was linked to an environmental toxin produced by cycad trees, whose seeds provided food for animals that the affected people had hunted and eaten.

In the new research, Michael Kuo and Chris Shaw at the University of British Columbia along with Beckman used a similar toxin to induce ALS-PDC symptoms in mice, then treated the mice with copper-ATSM.

"With the treatment, the behavior of the sick animals was improved on par with the control animals," Beckman said. "Treatment prevented the extensive motor neuron degeneration seen in the untreated animals. These outcomes support a broader neuroprotective role for copper-ATSM beyond mutant SOD models of ALS with implications for sporadic ALS. It means the copper is doing more than just helping to fix the SOD. One result after another shows the compound is working pretty good."

Credit: 
Oregon State University

Using facts to promote cancer prevention on social media is more effective than anecdotes

When it comes to cancer prevention messaging, clear information from trusted organizations has greater reach on social media than personal accounts of patients, new University of California, Davis, research suggests.

Researchers looked at thousands of Twitter messages to identify the effects of the type of sender (individuals or organizations) and content type (basic information and facts or personal stories). They found that people shared informational tweets about cervical cancer prevention significantly more than personal-experience tweets. Furthermore, people shared tweets from organizational senders more than those from individual senders, regardless of the content.

The findings were published in Preventive Medicine.

The paper's lead author, Jingwen Zhang, assistant professor of communication at UC Davis, said the research shows that hospitals, public health organizations and other reputable entities may be able to use social media effectively for preventive care.

"Public health organizations may find social media an effective tool to raise awareness and deliver information," she said. "If they make their messages simple and clear, people are more likely to share it."

Patient uptake of cervical cancer prevention is low

Although early detection and treatment of cervical cancer and its precursors decrease cervical cancer mortality, people's participation in prevention measures is low. Only about 83 percent of women receive preventive screenings, and a mere 43 percent of girls ages 13 to 17 receive the recommended number of vaccines for the human papillomavirus, or HPV, which causes cervical cancer, according to the research.

Women get the most cervical cancer prevention information through patient-doctor communication, but many women don't have primary care physicians or a regular source of health care, researchers said. Public health campaigns traditionally use posters, websites and advertising, but with limited results. More than 4,000 deaths a year result from cervical cancer. About 13,000 new cervical cancer diagnoses are made each year, according to the researchers.

Social media, used by 88 percent of young adults and 78 percent of adults, might help spread prevention messaging, Zhang said.

To complete the research project, researchers first examined an archived Twitter dataset of almost 100,000 tweets mentioning key words and phrases such as "HPV vaccination," "pap test" and "Gardasil," which is the trade name for a common HPV vaccine. From that data, they obtained the 3,000 most shared tweets. Among those, 462 promoted cancer prevention and could be classified by sender type (individual or organization) and content type (anecdote or fact).
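
For readers curious about the mechanics, the sketch below shows one way such a dataset could be tallied into the sender-by-content comparison described above; the categories, counts and coding scheme are invented for the example and do not come from the study itself.

    from collections import defaultdict

    # Invented example rows: (sender type, content type, number of times the tweet was shared).
    tweets = [
        ("organization", "fact", 120),
        ("organization", "anecdote", 45),
        ("individual", "fact", 30),
        ("individual", "anecdote", 60),
    ]

    total_shares = defaultdict(int)
    tweet_counts = defaultdict(int)
    for sender, content, n_shared in tweets:
        total_shares[(sender, content)] += n_shared
        tweet_counts[(sender, content)] += 1

    for key in sorted(total_shares):
        sender, content = key
        avg = total_shares[key] / tweet_counts[key]
        print(f"{sender:>12} / {content:<8}: average shares per tweet {avg:.1f}")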

Researchers then created a controlled social media environment using an anonymous online discussion platform for U.S. women to discuss risks and prevention for five days in 2017. They gave the groups example tweets that consisted of both personal experience, such as women sharing on social media that they'd just had their first pap smear, as well as factual information, such as: "Most cervical cancers could be prevented by screening & HPV vaccination. Learn more... ."

The results showed that while a good anecdotal story can be shared many times, and in one case was the top tweet, the majority of frequently shared messages contained factual information.

"Across these tweets, our consistent finding was that tweets were significantly more likely to be shared when they came from organization senders and contained factual information," Zhang said.

"These findings suggest that practitioners can effectively design social media-based messages for cervical cancer prevention that significantly increase the reach of the messages to social media users," Zhang concluded. "The findings reinforce the importance of public trust in organizations rather than individuals to share cancer prevention messages.

"The key strategy is to boost the credibility of the accounts and to develop messages that directly convey new factual information and resources."

Credit: 
University of California - Davis

NASA looks at Tropical Storm Barbara's heavy rainfall

image: The GPM core satellite passed over Tropical Storm Barbara at 3:31 a.m. EDT (0731 UTC) on July 1, 2019. GPM found the heaviest rainfall rate (pink) was northeast of the center of circulation. There, rain was falling at a rate of 41 mm (about 1.6 inches) per hour.

Image: 
NASA/JAXA/NRL

Tropical Storm Barbara formed on Sunday, June 30 in the Eastern Pacific Ocean over 800 miles from the coast of western Mexico. The Global Precipitation Measurement mission or GPM core satellite passed over the storm and measured the rate in which rain was falling throughout it.

Barbara formed as a tropical storm around 11 a.m. EDT (1500 UTC) on June 30, and slowly intensified.

The GPM core satellite passed over Tropical Storm Barbara at 3:31 a.m. EDT (0731 UTC) on July 1, 2019. GPM found the heaviest rainfall rates were occurring northeast of the center of circulation, where rain was falling at a rate of 41 mm (about 1.6 inches) per hour. The rainfall in that area was part of a band of thunderstorms wrapping into the low-level center, and a couple of other areas in that same band showed the same rainfall rate. GPM is a joint mission between NASA and the Japan Aerospace Exploration Agency, JAXA.

NOAA's National Hurricane Center noted at 5 a.m. EDT (0900 UTC) on July 1, the center of Tropical Storm Barbara was located near latitude 11.2 degrees north, longitude 115.8 degrees west. That's about 895 miles (1,445 km) south-southwest of the southern tip of Baja California, Mexico. Barbara is moving toward the west near 21 mph (33 kph). A westward to west-northwestward motion at a slower forward speed is expected over the next few days. The estimated minimum central pressure is 998 millibars (29.47 inches). Maximum sustained winds have increased to near 65 mph (100 kph) with higher gusts.

Additional strengthening is forecast during the next couple of days, and Barbara is expected to become a hurricane later today.

For forecast updates on Barbara, visit: http://www.nhc.noaa.gov.

Credit: 
NASA/Goddard Space Flight Center

NASA finds winds tore Tropical Storm 04W apart

image: On July 1 at 12:40 a.m. EDT (0440 UTC), the MODIS instrument aboard NASA's Aqua satellite provided a visible image of 04W's remnants. Clouds associated with the former tropical storm appeared fragmented between Luzon, the northern Philippines, and Taiwan.

Image: 
NASA/NRL

Visible imagery from NASA's Aqua satellite showed Tropical Cyclone 04W had been torn apart from wind shear in the Northwestern Pacific Ocean.

On Saturday, June 29, Tropical Storm 04W developed east of the Philippines and was moving toward the northwest. 04W formed near 15.3 north latitude and 130.7 east longitude, about 564 miles east of Manila, Philippines. At that time, 04W had maximum sustained winds near 35 knots (40 mph).

The next day, the Joint Typhoon Warning Center issued their final bulletin on 04W. This short-lived storm weakened to a depression just over 24 hours from its development. Although it moved close enough to the Philippines to get the local name of Egay, and to trigger a tropical cyclone warning signal #1 for the Batanes province in Luzon, 04W weakened to a depression by 11 a.m. EDT (1500 UTC) on June 30.

At the time of the final warning, 04W was located near 18.4 north latitude and 126.6 east longitude, about 395 nautical miles northeast of Manila, Philippines.

Northeasterly vertical wind shear was tearing the storm apart. In general, wind shear is a measure of how the speed and direction of winds change with altitude. Tropical cyclones are like rotating cylinders of winds; each level needs to be stacked on top of the others vertically for the storm to maintain strength or intensify. Wind shear occurs when winds at different levels of the atmosphere push against the rotating cylinder, weakening the rotation by pushing it apart at different levels.

On July 1 at 12:40 a.m. EDT (0440 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite provided a visible image of 04W's remnants. Clouds associated with the former tropical storm appeared fragmented between Luzon, the northern Philippines, and Taiwan. Satellite imagery showed the low-level circulation had unraveled into a broad, weak circulation.

The remnants are expected to continue moving northwest and dissipate.

Credit: 
NASA/Goddard Space Flight Center

Physicists use light waves to accelerate supercurrents, enable ultrafast quantum computing

image: Researchers have demonstrated light-induced acceleration of supercurrents, which could enable practical applications of quantum mechanics such as computing, sensing and communicating.

Image: 
Jigang Wang/Iowa State University

AMES, Iowa - Jigang Wang patiently explained his latest discovery in quantum control that could lead to superfast computing based on quantum mechanics: He mentioned light-induced superconductivity without energy gap. He brought up forbidden supercurrent quantum beats. And he mentioned terahertz-speed symmetry breaking.

Then he backed up and clarified all that. After all, the quantum world of matter and energy at terahertz and nanometer scales - trillions of cycles per second and billionths of meters - is still a mystery to most of us.

"I like to study quantum control of superconductivity exceeding the gigahertz, or billions of cycles per second, bottleneck in current state-of-the-art quantum computation applications," said Wang, a professor of physics and astronomy at Iowa State University whose research has been supported by the Army Research Office. "We're using terahertz light as a control knob to accelerate supercurrents."

Superconductivity is the movement of electricity through certain materials without resistance. It typically occurs at very, very cold temperatures. Think -400 Fahrenheit for "high-temperature" superconductors.

Terahertz light is light at very, very high frequencies. Think trillions of cycles per second. It is essentially a series of extremely strong and powerful microwave bursts firing over very short time frames.

Wang and a team of researchers demonstrated that such light can be used to control some of the essential quantum properties of superconducting states, including macroscopic supercurrent flow, broken symmetry and access to certain very high frequency quantum oscillations thought to be forbidden by symmetry.

It all sounds esoteric and strange. But it could have very practical applications.

"Light-induced supercurrents chart a path forward for electromagnetic design of emergent materials properties and collective coherent oscillations for quantum engineering applications," Wang and several co-authors wrote in a research paper just published online by the journal Nature Photonics.

In other words, the discovery could help physicists "create crazy-fast quantum computers by nudging supercurrents," Wang wrote in a summary of the research team's findings.

Finding ways to control, access and manipulate the special characteristics of the quantum world and connect them to real-world problems is a major scientific push these days. The National Science Foundation has included the "Quantum Leap" in its "10 big ideas" for future research and development.

"By exploiting interactions of these quantum systems, next-generation technologies for sensing, computing, modeling and communicating will be more accurate and efficient," says a summary of the science foundation's support of quantum studies. "To reach these capabilities, researchers need understanding of quantum mechanics to observe, manipulate and control the behavior of particles and energy at dimensions at least a million times smaller than the width of a human hair."

Wang and his collaborators - Xu Yang, Chirag Vaswani and Liang Luo from Iowa State, responsible for terahertz instrumentation and experiments; Chris Sundahl, Jong-Hoon Kang and Chang-Beom Eom from the University of Wisconsin-Madison, responsible for high-quality superconducting materials and their characterizations; Martin Mootz and Ilias E. Perakis from the University of Alabama at Birmingham, responsible for model building and theoretical simulations - are advancing the quantum frontier by finding new macroscopic supercurrent flowing states and developing quantum controls for switching and modulating them.

A summary of the research team's study says experimental data obtained from a terahertz spectroscopy instrument indicates terahertz light-wave tuning of supercurrents is a universal tool "and is key for pushing quantum functionalities to reach their ultimate limits in many cross-cutting disciplines" such as those mentioned by the science foundation.

And so, the researchers wrote, "We believe that it is fair to say that the present study opens a new arena of light-wave superconducting electronics via terahertz quantum control for many years to come."

Credit: 
Iowa State University

Combing nanowire noodles

image: U-shaped nanowires can record electrical chatter inside a brain or heart cell without causing any damage. The devices are 100 times smaller than their biggest competitors, which kill a cell after recording.

Image: 
Lieber Group, Harvard University

Machines are getting cozy with our cells. Embeddable sensors record how and when neurons fire; electrodes spark heart cells to beat or brain cells to fire; neuron-like devices could even encourage faster regrowth after implantation in the brain.

Soon, so-called brain-machine interfaces could do even more: monitor and treat symptoms of neurological disorders like Parkinson's disease, provide a blueprint to design artificial intelligence, or even enable brain-to-brain communication.

To achieve the reachable and the quixotic, devices need a way to literally dive deeper into our cells to perform reconnaissance. The more we know about how neurons work, the more we can emulate, replicate, and treat them with our machines.

Now, in a paper published in Nature Nanotechnology, Charles M. Lieber, the Joshua and Beth Friedman University Professor, presents an update to his original nanoscale devices for intracellular recording, the first nanotechnology developed to record electrical chatter inside a live cell. Nine years later, Lieber and his team have designed a way to make thousands of these devices at once, creating a nanoscale army that could speed efforts to find out what's happening inside our cells.

Prior to Lieber's work, similar devices faced a Goldilocks conundrum: Too big, and they would record internal signals but kill the cell. Too small, and they failed to cross the cell's membrane--recordings ended up noisy and imprecise.

Lieber's new nanowires were just right. Designed and reported in 2010, the originals had a nanoscale "V" shaped tip with a transistor at the bottom of the "V." This design could pierce cell membranes and send accurate data back to the team without destroying the cell.

But there was a problem. The silicon nanowires are far longer than they are wide, making them wobbly and hard to wrangle. "They're as flexible as cooked noodles," says Anqi Zhang, a graduate student in the Lieber Lab and one of the authors on the team's latest work.

To create the original devices, lab members had to ensnare one nanowire noodle at a time, find each arm of the "V," and then weave the wires into the recording device. A couple of devices took two to three weeks to make. "It was very tedious work," says Zhang.

But nanowires are not made one at a time; they're made en masse like the very things they resemble: cooked spaghetti. Using the nanocluster catalyzed vapor-liquid-solid method, which Lieber used to create the first nanowires, the team builds an environment where the wires can germinate on their own. They can pre-determine each wire's diameter and length but not how the wires are positioned once ready. Even though they grow thousands or even millions of nanowires at a time, the end result is a mess of invisible spaghetti.

To untangle the mess, Lieber and his team designed a trap for their loose cooked noodles: They make U-shaped trenches on a silicon wafer and then comb the nanowires across the surface. This "combing" process untangles the mess and deposits each nanowire into a neat U-shaped hole. Then, each "U" curve gets a tiny transistor, similar to the bottom of their "V" shaped devices.

With the "combing" method, Lieber and his team complete hundreds of nanowire devices in the same amount of time they used to make just a couple. "Because they're very well-aligned, they're very easy to control," Zhang says.

So far, Zhang and her colleagues have used the "U" shaped nanoscale devices to record intracellular signals in both neural and cardiac cells in cultures. Coated with a substance that mimics the feel of a cell membrane, the nanowires can cross this barrier with minimal effort or damage to the cell. And, they can record intracellular chatter with the same level of precision as their biggest competitor: patch clamp electrodes.

Patch clamp electrodes are about 100 times bigger than nanowires. As the name suggests, the tool clamps down on a cell's membrane, causing irreversible damage. The patch clamp electrode can capture stable recording of the electrical signals inside the cells. But, Zhang says, "after recording, the cell dies."

The Lieber team's "U" shaped nanoscale devices are friendlier to their cell hosts. "They can be inserted into multiple cells in parallel without causing damage," Zhang says.

Right now, the devices are so gentle that the cell membrane nudges them out after about 10 minutes of recording. To extend this window with their next design, the team might add a bit of biochemical glue to the tip or roughen the edges so the wire catches against the membrane.

The nanoscale devices have another advantage over the patch clamp: They can record more cells in parallel. With the clamps, researchers can collect just a few cell recordings at a time. For this study, Zhang recorded up to ten cells at once. "Potentially, that can be much greater," she says. The more cells they can record at a time, the more they can see how networks of cells interact with each other as they do in living creatures.

In the process of scaling their nanowire design, the team also happened to confirm a long-standing theory, called the curvature hypothesis. After Lieber invented the first nanowires, researchers speculated that the width of a nanowire's tip (the bottom of the "V" or "U") can affect a cell's response to the wire. For this study, the team experimented with multiple "U" curves and transistor sizes. The results confirmed the original hypothesis: Cells like a narrow tip and a small transistor.

"The beauty of science to many, ourselves included, is having such challenges to drive hypotheses and future work," Lieber says. With the scalability challenge behind them, the team hopes to capture even more precise recordings, perhaps within subcellular structures, and record cells in living creatures.

But for Lieber, one brain-machine challenge is more enticing than all others: "bringing cyborgs to reality."

Credit: 
Harvard University

Current pledges to phase out coal power are critically insufficient to slow climate change

image: 'To keep global warming below 1.5°C, as aimed for in the Paris climate agreement, we need to phase-out unabated coal -- that is, coal without capturing the carbon emissions -- by the middle of this century. The Powering Past Coal Alliance is a good start but so far, only wealthy countries which don't use much coal, and some countries which don't use any coal power, have joined,' says Jessica Jewell, Assistant Professor at the Department of Space, Earth and the Environment at Chalmers University of Technology

Image: 
Udo Schlög

The Powering Past Coal Alliance, or PPCA, is a coalition of 30 countries and 22 cities and states that aims to phase out unabated coal power. But an analysis led by Chalmers University of Technology, Sweden, published in Nature Climate Change, shows that members mainly pledge to close older plants near the end of their lifetimes, resulting in limited emissions reductions. The research also shows that expanding the PPCA to major coal-consuming countries would face economic and political difficulties.

By analysing a worldwide database of coal power plants, the researchers have shown that pledges from PPCA members will result in a reduction of about 1.6 gigatonnes of CO2 between now and 2050. This represents only around 1/150th of the projected CO2 emissions over the same period from all coal power plants already operating globally.
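As a rough sanity check on how small that share is, the short Python sketch below works backwards from the stated ratio to the implied cumulative emissions. It uses only the rounded figures quoted above (1.6 gigatonnes and 1/150th), not the study's exact inputs, so the result is an illustrative back-of-envelope number.

    # Back-of-envelope check using the rounded figures quoted above
    # (illustrative only; the study's exact numbers may differ).
    ppca_reduction_gt = 1.6      # CO2 avoided by current PPCA pledges to 2050, in gigatonnes
    stated_fraction = 1 / 150    # share of projected coal-plant emissions this represents

    implied_total_gt = ppca_reduction_gt / stated_fraction
    print(f"Implied emissions from operating coal plants to 2050: ~{implied_total_gt:.0f} Gt CO2")
    # prints roughly 240 Gt, i.e. current pledges cover well under 1% of the total

In other words, the pledges made so far address only a sliver of the emissions already locked in by the existing coal fleet.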

"To keep global warming below 1.5°C, as aimed for in the Paris climate agreement, we need to phase-out unabated coal power - that is, when the carbon emissions are not captured - by the middle of this century. The Powering Past Coal Alliance is a good start but so far, only wealthy countries which don't use much coal, and some countries which don't use any coal power, have joined," says Jessica Jewell, Assistant Professor at the Department of Space, Earth and the Environment at Chalmers University of Technology, and lead researcher on the article.

To investigate the likelihood of expanding the PPCA, Jessica Jewell and her colleagues compared its current members with countries that are not part of the Alliance. They found that PPCA members are wealthy nations with small electricity demand growth, older power plants and low coal extraction and use. Most strikingly, these countries invariably rank higher in terms of government openness and transparency, with democratically elected politicians, independence from private interests and strong safeguards against corruption.

These characteristics are dramatically different from major coal users such as China, where electricity demand is rapidly growing, coal power plants are young and responsible for a large share of electricity production, and which ranks lower on government transparency and independence.

The researchers therefore predict that while countries like Spain, Japan, Germany, and several other smaller European countries may sign up in the near future, countries like China - which alone accounts for about half of all coal power use worldwide - and India, both with expanding electricity sectors and domestic coal mining, are unlikely to join the PPCA any time soon.

And recent developments support these predictions. Germany recently announced plans to phase out coal power, which could lead to a further reduction of 1.6 gigatonnes of CO2 - a doubling of the PPCA's reductions. On the other hand, the USA and Australia illustrate the difficulties of managing the coal sector in countries with persistent and powerful mining interests. The recent election in Australia resulted in the victory of a pro-coal candidate who supports expanding coal mining and upgrading coal power plants.

More generally, the research suggests that coal phase-out is feasible when it does not incur large-scale losses, such as closing down newly constructed power plants or coal mines. Moreover, countries need the economic and political capacity to withstand these losses. Germany, for instance, has earmarked 40 billion Euros for compensating affected regions.

"Not all countries have the resources to make such commitments. It is important to evaluate the costs of and capacities for climate action, to understand the political feasibility of climate targets," explains Jessica Jewell.

Credit: 
Chalmers University of Technology

Scientists alarmed by bark beetle boom

image: The European spruce bark beetle is a formidable pest in German timberlands; the species is capable of killing large spruce populations in a short period of time.

Image: 
Rainer Simonis / Nationalpark Bayerischer Wald

"Bark beetles lay waste to forests" - "Climate change sends beetles into overdrive" - "Bark beetles: can the spruce be saved?": These newspaper headlines of the past weeks covered the explosive growth of bark beetle populations and its devastating impact on timberlands. The problem is not limited to Germany. A comparable situation is encountered in many forests across Central Europe and North America. The consequences of this major infestation are massive: In 2018, the beetles were responsible for ruining around 40 million cubic metres of wood just in Central Europe.

Mass outbreaks of bark beetles usually last from a few months to several years and are followed by sudden declines in the beetle populations. Little is known about this natural phenomenon. In the current issue of the scientific journal Trends in Ecology and Evolution, researchers are therefore calling for stepped-up research into the life cycle of the harmful insects. "We have taken a number of elaborate measures to protect our forests against bark beetles. But we still know very little about what triggers the variations in bark beetle populations," says Peter Biedermann, the lead author of the recently published study.

Biedermann is a researcher at the Department of Animal Ecology and Tropical Biology of the University of Würzburg. Together with colleagues from the Max Planck Institute for Chemical Ecology in Jena and the Bavarian Forest National Park who contributed to the study, he urges: "It is urgent that we create the scientific foundation now to enable forestry officials and politicians to respond more efficiently to bark beetle outbreaks in the future." The results of such studies could also serve as blueprints for fighting other insect pests plaguing forests. According to Biedermann, the most important question is whether simply doing nothing in the event of a population boom of insect pests might be a practicable approach in natural forests or commercial timberlands. Scientists in the Bavarian Forest National Park have observed that bark beetle populations collapsed after a few years when no counter-measures were taken.

Climate change exacerbates the problem

The scientists believe that more needs to be known about the life cycle of the spruce bark beetle in particular, especially in view of the climate crisis. "The expected increases in the frequency and intensity of extreme weather events will additionally weaken German timberlands. Therefore, we will have to be prepared to tackle growing problems with the spruce bark beetle," says Jörg Müller, a professor at the Department of Animal Ecology and Tropical Biology at the University of Würzburg and Deputy Manager of the Bavarian Forest National Park.

Higher temperatures and intensifying summer droughts put trees, and especially spruces, under great pressure. Spruces originally come from mountain regions; it was only when they were widely planted for economic reasons that the conifer species also came to populate lower elevations. Spruces are not very resistant to heat and drought, and long-term water shortage weakens a tree's defences against the bark beetle: the production of chemical substances that harm the beetles and the release of resin, which clogs up the beetles' tunnels.

Countless factors influence the population size of insects such as the bark beetle: natural enemies, pathogens, interspecific and intraspecific competition, landscape structure, tree population, the resilience of the preferred host, temperature and precipitation. According to Jörg Müller, it is largely unknown what role each factor plays in the population dynamics of forest insects.

To remedy this lack of knowledge, the scientists suggest pooling data from around the world, identifying knowledge gaps in the population dynamics of spruce bark beetles and other forest insects, and using this as the basis for new data surveys that address key questions about how the various factors interact. In a second step, the insights gleaned from these results would be tested in experimental field studies to derive recommendations for action.

Support is crucial

The scientists believe that support from forestry officials and public bodies, as well as funding, will be essential to achieve the ambitious goal of shedding light on the population development of bark beetles and other forest insects. This approach could help establish efficient pest control management in forests.

Credit: 
University of Würzburg