Tech

Breaking bread with rivals leads to more fish on coral reefs

Cooperation is key to most successful endeavours. And, scientists find, when fishers cooperate with their competitors, they can boost fish stocks on coral reefs.

Dr Michele Barnes, a senior research fellow from the ARC Centre of Excellence for Coral Reef Studies (Coral CoE) at James Cook University (JCU), is the lead author of a study published today that looks at the relationships between competing fishers, the fish species they hunt, and their local reefs.

"Relationships between people have important consequences for the long-term availability of the natural resources we depend on," Dr Barnes says.

"Our results suggest that when fishers--specifically those in competition with one another--communicate and cooperate over local environmental problems, they can improve the quality and quantity of fish on coral reefs."

Co-author Prof Nick Graham, from Lancaster University (previously at JCU), adds: "Coral reefs across the world are severely degraded by climate change, the pervasive impacts of poor water quality, and heavy fishing pressure. Our findings provide important insights on how fish communities can be improved, even on the reefs where they are sought."

Dr Barnes and her team interviewed 648 fishers and gathered underwater visual data of reef conditions across five coral reef fishing communities in Kenya.

They found that in the places where fishers communicated with their competitors about the fishing gear they use, hunting locations, and fishing rules, there were more fish in the sea--and of higher quality.

Co-author Dr Jack Kittinger, Senior Director at Conservation International's Center for Oceans, says this is likely because such cooperative relationships among those who compete for a shared resource--such as fish--create opportunities to engage in mutually beneficial activities. These relationships also help build trust, which enables people to develop a shared commitment to managing resources sustainably.

"This is why communication is so critical," says Dr Kittinger. "Developing sustained commitments, such as agreements on rules, and setting up conflict resolution mechanisms, are key to the local management of reefs."

"The study demonstrates that the positive effect of communication does not necessarily appear when just anyone in a fishing community communicates - this only applies to fishers competing over the same fish species," adds co-author Dr Örjan Bodin, from the Stockholm Resilience Centre at Stockholm University.

The study advances a framework that can be applied to other complex environmental problems where environmental conditions depend on the relationships between people and nature.

Co-author Dr Orou Gaoue, from the University of Tennessee Knoxville, emphasises this broad appeal.

"Although this study is on coral reefs, the results are also relevant for terrestrial ecosystems where, in the absence of cooperation, competition for non-timber forest products can quickly lead to depletion even when locals have detailed ecological knowledge of their environment."

"Environmental problems are messy," explains Dr Barnes. "They often involve multiple, interconnected resources and a lot of different people--each with their own unique relationship to nature."

"Understanding who should cooperate with whom in different contexts and to address different types of environmental problems is thus becoming increasingly important," she concludes.

Credit: 
ARC Centre of Excellence for Coral Reef Studies

Making the invisible visible: New method opens unexplored realms for liquid biopsies

image: This is Dr. Muneesh Tewari, M.D., Ph.D., in his lab.

Image: 
Joseph Xu, U-M College of Engineering

ANN ARBOR, Michigan -- Advancing technology is allowing scientists increasingly to search for tiny signs of cancer and other health issues in samples of patients' blood and urine. These "liquid biopsies" are less invasive than a traditional biopsy, and can provide information about what's happening throughout the body instead of just at a single site.

Now researchers at the University of Michigan Rogel Cancer Center have developed a new method for lifting the genetic fingerprints of tiny fragments of RNA found in blood plasma that are invisible to traditional methods of RNA sequencing.

These messenger RNAs and long non-coding RNAs can provide important clues about the activity of genes throughout the body -- including genes that are active in particular organs or that are associated with certain diseases, like cancer -- and thus could serve as potential biomarkers for a host of conditions.

"We believe that there are a wide variety of potential clinical applications," says Muneesh Tewari, M.D., Ph.D., professor of internal medicine at the U-M Medical School, and of biomedical engineering, a joint department of the Medical School and College of Engineering. "For example, in cancer, we're excited about applying this approach to try to detect the earliest signs of autoimmune side-effects from immunotherapies. There's also the potential for early detection of cancer because there are long non-coding RNAs that are fairly specific to certain cancer types."

Tewari is the senior author of a study published May 3 in the EMBO Journal that describes the new method and demonstrates its effectiveness in a study of bone marrow transplant patients.

From idea to proof-of-concept

The research -- which was led by postdoctoral fellows Maria Giraldez, M.D., Ph.D., now a faculty member at the Institute of Biomedicine of Seville in Spain, and Ryan Spengler, Ph.D. -- is the culmination of more than a decade of work.

In 2008, Tewari, who was then at the Fred Hutchinson Cancer Research Center in Seattle, Washington, and his colleagues published a paper describing a breakthrough for detecting microRNAs from tumors in blood plasma.

The method's shortcoming, however, was that it wasn't able to detect more prevalent and organ-specific types of RNA, which are often found in fragmentary form.

"The real innovation in this new study was recognizing that these other types of RNA were being missed because they had simple but critical differences that prevented them from showing up in the blood plasma sequencing results," Tewari explains. "We used an enzyme to tailor the ends of these fragments so they would show up in the sequencing. And that relatively simple step revealed that, yes, there are thousands of these additional gene transcripts in the bloodstream."

The second critical piece of the puzzle was developing a method for reliably sorting through the flood of sequencing data to filter out false positives and ensure an accurate result -- what the scientists call a "high-stringency bioinformatic analysis pipeline."

"We had to figure out how to separate signal from noise -- how to remove bits of irrelevant genetic material from bacterial and viral RNAs as well as from our own genome, which add noise to the data," says Spengler, who led the data analysis. "When the sequences are really short, they can match to multiple places in the human genome by chance and it's difficult to say which gene they're really coming from."

The new method, called phospho-RNA-seq because of the way the fragment ends are tailored, was first validated in experiments using a curated pool of RNA -- so the scientists knew ahead of time what accurate results should look like. Then, to demonstrate that it could work in a real-world setting, the method was tested on plasma samples collected weekly from two patients who underwent bone marrow transplants at U-M.

"We could track the markers of the reconstitution of their bone marrow after the transplant, as well as changes in the blood plasma RNA that indicated injury to the liver -- which lined up with what we knew was happening from their medical records," Tewari says.

A new tool for clinicians and researchers

Phospho-RNA-seq has potential applications for discovery sciences as well as more direct applications, Tewari notes.

"On the basic science side, now that we know there are thousands of these RNAs floating around in the bloodstream, it raises questions about why they're there and what function they may have," he says. "But the more immediate application is that we are now better able to read the human transcriptome -- the activity of genes throughout the body -- in plasma samples, which can give us new information about states of health and disease.

"I look at this as proof-of-concept research, but I expect that as we continue to refine the technology and make it even more accessible for other researchers, it's likely to be applied to many different disease areas and body systems," Tewari adds.

Tewari also stresses the collaborative and cross-disciplinary nature of the work, which required laboratory, computational and clinical expertise.

"This is exactly why I came to Michigan," he says, "to be able to do bench research that reaches into the clinic through exciting collaborations across medical specialties and scientific disciplines."

Credit: 
Michigan Medicine - University of Michigan

Sculpting super-fast light pulses

image: Schematic shows a novel technique to reshape the properties of an ultrafast light pulse. An incoming pulse of light (left) is dispersed into its various constituent frequencies, or colors, and directed into a metasurface composed of millions of tiny silicon pillars and an integrated polarizer. The nanopillars are specifically designed to simultaneously and independently shape such properties of each frequency component as its amplitude, phase or polarization. The transmitted beam is then recombined to achieve a new shape-modified pulse (right).

Image: 
S. Kelley/NIST

Imagine being able to shape a pulse of light in any conceivable manner--compressing it, stretching it, splitting it in two, changing its intensity or altering the direction of its electric field.

Controlling the properties of ultrafast light pulses is essential for sending information through high-speed optical circuits and in probing atoms and molecules that vibrate thousands of trillions of times a second. But the standard method of pulse shaping--using devices known as spatial light modulators--is costly, bulky and lacks the fine control scientists increasingly need. In addition, these devices are typically based on liquid crystals that can be damaged by the very same pulses of high intensity laser light they were designed to shape.

Now researchers at the National Institute of Standards and Technology (NIST) and the University of Maryland's NanoCenter in College Park have developed a novel and compact method of sculpting light. They first deposited a layer of ultrathin silicon on glass, just a few hundred nanometers (billionths of a meter) thick, and then covered an array of millions of tiny squares of the silicon with a protective material. By etching away the silicon surrounding each square, the team created millions of tiny pillars, which played a key role in the light sculpting technique.

The flat, ultrathin device is an example of a metasurface, which is used to change the properties of a light wave traveling through it. By carefully designing the shape, size, density and distribution of the nanopillars, multiple properties of each light pulse can now be tailored simultaneously and independently with nanoscale precision. These properties include the amplitude, phase and polarization of the wave.

A light wave, a set of oscillating electric and magnetic fields oriented at right angles to each other, has peaks and troughs similar to an ocean wave. If you're standing in the ocean, the frequency of the wave is how often the peaks or troughs travel past you, the amplitude is the height of the waves (trough to peak), and the phase is where you are relative to the peaks and troughs.

"We figured out how to independently and simultaneously manipulate the phase and amplitude of each frequency component of an ultrafast laser pulse," said Amit Agrawal, of NIST and the NanoCenter. "To achieve this, we used carefully designed sets of silicon nanopillars, one for each constituent color in the pulse, and an integrated polarizer fabricated on the back of the device."

When a light wave travels through a set of the silicon nanopillars, the wave slows down compared with its speed in air and its phase is delayed--the moment when the wave reaches its next peak is slightly later than the time at which the wave would have reached its next peak in air. The size of the nanopillars determines the amount by which the phase changes, whereas the orientation of the nanopillars changes the light wave's polarization. When a device known as a polarizer is attached to the back of the silicon, the change in polarization can be translated to a corresponding change in amplitude.
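
In a simplified effective-index picture (an assumption used here for illustration, not the NIST team's design model), the extra phase a pillar imparts grows with its height and effective refractive index relative to air. The short sketch below shows the scaling; the specific numbers are hypothetical.

```python
import math

def phase_delay(pillar_height_nm, wavelength_nm, n_eff):
    """Extra phase (in radians) picked up by light crossing a pillar with a
    given effective refractive index, relative to the same path in air."""
    return 2 * math.pi * (n_eff - 1.0) * pillar_height_nm / wavelength_nm

# Illustrative only: a 400-nm-tall pillar with an effective index of 2.0
# delays 800-nm light by about pi radians, i.e. half an optical cycle.
print(phase_delay(400, 800, 2.0))  # ~3.14
```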

Altering the phase, amplitude or polarization of a light wave in a highly controlled manner can be used to encode information. The rapid, finely tuned changes can also be used to study and change the outcome of chemical or biological processes. For instance, alterations in an incoming light pulse could increase or decrease the product of a chemical reaction. In these ways, the nanopillar method promises to open new vistas in the study of ultrafast phenomena and high-speed communication.

Agrawal, along with Henri Lezec of NIST and their collaborators, describe the findings online today in the journal Science.

"We wanted to extend the impact of metasurfaces beyond their typical application--changing the shape of an optical wavefront spatially--and use them instead to change how the light pulse varies in time," said Lezec.

A typical ultrafast laser light pulse lasts for only a few femtoseconds, or one thousandth of a trillionth of a second, too short for any device to shape the light at one particular instant. Instead, Agrawal, Lezec and their colleagues devised a strategy to shape the individual frequency components or colors that make up the pulse by first separating the light into those components with an optical device called a diffraction grating.

Each color has a different intensity or amplitude--similar to the way a musical overtone is composed of many individual notes that have different volumes. When directed into the nanopillar-etched silicon surface, different frequency components struck different sets of nanopillars. Each set of nanopillars was tailored to alter the phase, intensity or electric field orientation (polarization) of components in a particular way. A second diffraction grating then recombined all the components to create the newly shaped pulse.

The researchers designed their nanopillar system to work with ultrafast light pulses (10 femtoseconds or less, equivalent to one hundredth of a trillionth of a second) composed of a broad range of frequency components that span wavelengths from 700 nanometers (visible red light) to 900 nanometers (near-infrared). By simultaneously and independently altering the amplitude and phase of these frequency components, the scientists demonstrated that their method could compress, split and distort pulses in a controllable manner.
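
The overall scheme is a form of Fourier-domain pulse shaping: disperse the pulse, apply an amplitude/phase mask to each frequency component, then recombine. The sketch below illustrates that principle numerically with a generic spectral-phase mask standing in for the nanopillar sets; it is a textbook illustration, not a model of the NIST device.

```python
# Conceptual sketch of Fourier-domain pulse shaping: transform a pulse to the
# frequency domain, apply a per-frequency mask (the role played here by the
# nanopillar sets), and transform back. Numbers are illustrative only.

import numpy as np

t = np.linspace(-100e-15, 100e-15, 4096)      # time axis, seconds
pulse = np.exp(-(t / 10e-15) ** 2)             # Gaussian envelope, ~10 fs scale

spectrum = np.fft.fft(pulse)
freqs = np.fft.fftfreq(t.size, d=t[1] - t[0])

# Example mask: a quadratic spectral phase, which chirps (stretches) the pulse
# and lowers its peak; an amplitude mask could be applied the same way.
mask = np.exp(1j * 1e-28 * (2 * np.pi * freqs) ** 2)
shaped = np.fft.ifft(spectrum * mask)

print("peak before:", abs(pulse).max(), "peak after:", abs(shaped).max())
```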

Further refinements in the device will give scientists additional control over the time evolution of light pulses and may enable researchers to shape in exquisite detail individual lines in a frequency comb, a precise tool for measuring the frequencies of light used in such devices as atomic clocks and for identifying planets around distant stars.

Credit: 
National Institute of Standards and Technology (NIST)

What happens when schools go solar?

image: For the 10 states that stand to avoid the most damages by switching schools to on-site solar power, blue bars show avoided damages from CO2, SO2, NOX and direct PM2.5 emissions. Red and orange bars show the rebates and cross-subsidy paid by the public when excess generation is valued at the retail rate. All values are reported in millions of dollars.

Image: 
Nichole Hanus, Gabrielle Wong-Parodi, Parth Vaishnav and Inês L Azevedo

Sunshine splashing onto school rooftops and campuses across the country is an undertapped resource that could help shrink electricity bills, new research suggests.

The study, published in the April issue of the peer-reviewed journal Environmental Research Letters, shows taking advantage of all viable space for solar panels could allow schools to meet up to 75 percent of their electricity needs and reduce the education sector's carbon footprint by as much as 28 percent.

At the same time, solar panels could help schools unplug from grids fed by natural gas and coal power plants that produce particulate matter, sulfur dioxide and nitrogen oxides - air pollutants that can contribute to smog and acid rain as well as serious health consequences including heart attacks and reduced lung function. "This is an action we can take that benefits the environment and human health in a real, meaningful way," said Stanford behavioral scientist Gabrielle Wong-Parodi, an author of the study.

New solar projects may easily slip down the list of priorities in a time of widespread protests by teachers calling for increased school funding, smaller class sizes and higher wages. But the U.S. Department of Energy estimates K-12 schools spend more than $6 billion per year on energy, and energy costs in many districts are second only to salaries. In the higher education sector, yearly energy costs add up to more than $14 billion.

The current paper suggests investments in the right solar projects - with the right incentives from states - could free up much-needed money in schools' budgets. "Schools are paying for electricity anyway," said Wong-Parodi, an assistant professor of Earth system science at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "This is a way, in some cases, that they can reduce their costs. If there's a rebate or a subsidy, it can happen more quickly."

Overlooked benefits

Educational institutions account for approximately 11 percent of energy consumption by U.S. buildings and 4 percent of the nation's carbon emissions. But while the potential for solar panels on homes and businesses has been widely studied, previous research has largely skipped over school buildings.

The new estimates are based on data for 132,592 schools, including more than 99,700 public and 25,700 private K-12 schools, as well as nearly 7,100 colleges and universities. The researchers began by estimating the rooftop area available for solar panels at each institution, the hourly electricity output given the amount of sunshine at the site and the hourly electricity demand of each institution.
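
Conceptually, that calculation amounts to comparing hour-by-hour solar generation (roof area times sunshine times panel efficiency) against hour-by-hour demand. The sketch below shows the general idea with made-up numbers; the study's actual model and data sources are far more detailed.

```python
# Illustrative sketch of the kind of accounting described above.
# All numbers below are hypothetical placeholders, not study data.

def solar_fraction(roof_area_m2, hourly_irradiance_kw_m2,
                   hourly_demand_kwh, panel_efficiency=0.18):
    """Fraction of a school's electricity demand that rooftop panels could
    cover, hour by hour, with no storage assumed."""
    met = 0.0
    total = 0.0
    for irr, demand in zip(hourly_irradiance_kw_m2, hourly_demand_kwh):
        generated = roof_area_m2 * irr * panel_efficiency  # kWh this hour
        met += min(generated, demand)   # surplus beyond demand is not counted
        total += demand
    return met / total if total else 0.0

# One sunny day for a hypothetical school with 2,000 m2 of usable roof:
irradiance = [0, 0, 0, 0, 0, 0.1, 0.3, 0.5, 0.7, 0.8, 0.9, 0.9,
              0.9, 0.8, 0.7, 0.5, 0.3, 0.1, 0, 0, 0, 0, 0, 0]
demand = [20] * 24  # kWh per hour
print(round(solar_fraction(2000, irradiance, demand), 2))
```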

Not surprisingly, the study finds three large, sunny states - Texas, California and Florida - have the greatest potential for generating electricity from solar panels on school rooftops, with nearly 90 percent of institutions having at least some roof space suitable for installations. Meanwhile, residents in midwestern states including Wisconsin and Ohio stand to see the biggest reductions in key air pollutants - and costs associated with addressing related health effects - if schools switch from the grid to solar power.

Beyond measurable effects on air pollution and electricity bills, solar installations can also provide new learning opportunities for students. Some schools are already using data from their on-site solar energy systems to help students grapple with fractions, for example, or see firsthand how shifting panel angles can affect power production. "It takes this abstract idea of renewables as something that can reduce greenhouse gas emissions and brings it home," Wong-Parodi said.

Big savings

According to the study, it's not economically viable for educational institutions to purchase rooftop solar systems outright in any state. Rather, the projects can make financial sense for schools if they contract a company to install, own and operate the system and sell electricity to the school at a set rate.

Nationwide, the researchers project benefits stemming from an all-out push for solar installations on school buildings could be worth as much as $4 billion per year, if each ton of carbon released to the air is assumed to cost society $40 and the value of a statistical human life - in the way that regulators and economists calculate it - is pegged at $10 million. The estimated benefits capture the cost of premature deaths and other health impacts linked to air pollution from power plants.
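
The valuation itself is straightforward arithmetic once those societal prices are fixed, as the illustrative sketch below shows; the input quantities in the example are hypothetical, not figures from the study.

```python
# Back-of-the-envelope version of the valuation described above, using the
# same societal prices quoted in the article ($40 per ton of carbon,
# $10 million per statistical life). The quantities are hypothetical.

SOCIAL_COST_PER_TON = 40                 # dollars per ton of carbon
VALUE_OF_STATISTICAL_LIFE = 10_000_000   # dollars

def annual_benefit(tons_carbon_avoided, statistical_lives_saved,
                   other_health_costs_avoided=0.0):
    """Dollar value of one year of avoided damages from shifting school
    electricity from the grid to on-site solar."""
    return (tons_carbon_avoided * SOCIAL_COST_PER_TON
            + statistical_lives_saved * VALUE_OF_STATISTICAL_LIFE
            + other_health_costs_avoided)

# e.g. 1 million tons of carbon avoided and 50 statistical lives saved:
print(annual_benefit(1_000_000, 50))  # 540,000,000 dollars
```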

The group's estimates do not account for environmental and health impacts tied to international mining and transport of raw materials, or to manufacturing and disposal of solar panels. Such a holistic view, they write, "may yield quite different results."

Zeroing in on likely impacts within the United States, the researchers conclude that nearly all states could reap value from school solar projects far greater than the amount they're spending on subsidies and rebates. The study shows that's true even when factoring in typical costs for installation, maintenance, operation and routine hardware replacements.

"There is an argument for increasing the level of incentives to increase adoption of solar panels by the educational sector," said study author Inês Azevedo, who co-directs Carnegie Mellon University's Center for Climate and Energy Decision Making and will be joining Stanford Earth's faculty in July 2019.

California and New York, however, are exceptions. In those two states, the researchers concluded that currently available rebates exceed the financial, health, environmental and climate change benefits provided to society by rooftop solar systems on schools - at least at today's prices for offsetting carbon emissions through other means.

"California and New York are doing a fantastic job of incentivizing solar, but we still don't see 100 percent penetration," Wong-Parodi said. "A good use of their time and resources may be to evaluate all the schools that don't have it yet, and try to understand why."

Credit: 
Stanford's School of Earth, Energy & Environmental Sciences

Hearing loss weakens skills that young cancer survivors need to master reading

Researchers have identified factors that explain why severe hearing loss sets up pediatric brain tumor survivors for reading difficulties with far-reaching consequences. The findings lay the foundation for developing interventions to help survivors become better readers.

St. Jude Children's Research Hospital investigators led the international study, which appears today in the Journal of Clinical Oncology.

Researchers analyzed how 260 children and adolescent brain tumor survivors, including 64 with severe hearing loss, performed on skills that are the building blocks of reading. The list included information processing speed, working memory, letter-word identification and phonological skills, which include the ability to use units of sound (phonemes) to decode words.

Compared with other survivors, those with severe hearing loss experienced significant declines during treatment on all eight measures included in this analysis. After accounting for the risk factors of age at diagnosis and treatment intensity, the analysis suggested that survivors with severe hearing loss struggled the most with slowed processing speed and phonological skills.

"Reading is a skill that takes a long time to learn and that we depend on for learning our entire life," said senior and corresponding author Heather Conklin, Ph.D., a member of the St. Jude Department of Psychology. "There had been hints in the scientific literature that reading was declining in pediatric brain tumor survivors and that hearing loss may be a contributor. But this is the first study to identify the key cognitive components that lead to reading problems."

The findings suggest that interventions should focus on improving neurocognitive and language-based skills like processing speed and phonemics before tackling more complex tasks like reading comprehension, said first author Traci Olivier, Psy.D., formerly a St. Jude postdoctoral fellow and now at Our Lady of the Lake Medical Center, Baton Rouge, Louisiana.

"Younger children, those less than 7 years old, were particularly vulnerable to declines in skills that are fundamental for reading mastery," she said. "These children may benefit most from interventions."

Brain tumors and hearing loss

Brain and spinal cord tumors are the second most common childhood cancers. These tumors account for about 1 in 4 newly diagnosed pediatric cancers annually.

A recent St. Jude study found that 32 percent of brain tumor patients developed severe hearing loss within several years of treatment, despite receiving amifostine, a drug designed to protect the hair cells in the inner ear that are essential for hearing.

The analysis involved 3- to 21-year-olds with medulloblastoma and other embryonal brain tumors. All patients were enrolled in a multi-site St. Jude clinical trial; treatment included surgery plus risk-adapted radiation and chemotherapy. All had neurocognitive and hearing testing at least twice--early and later in treatment.

Next steps

The analysis proposed multiple factors, including damage to the hearing nerve caused by the tumor itself, that complicate reading mastery for pediatric brain tumor survivors with severe hearing loss. "That suggests we have an opportunity to significantly improve the quality of life for survivors by developing more effective interventions," Conklin said.

Research is needed to determine how and when to intervene to bolster reading skills in young cancer patients. That includes tracking how cochlear implants or hearing aids affect reading and neurocognitive skills in young cancer survivors. Data on hearing aid use in this study was incomplete.

"Compared to vision loss, hearing difficulties often go undetected for longer periods. This study demonstrates the need for close audiological monitoring early in treatment so we can recognize and intervene early," Olivier said. "Parents might not realize the impact of decreased hearing on educational outcomes."

Credit: 
St. Jude Children's Research Hospital

ESA tipsheet for May 6, 2019

Thursday, 2 May 2019
For Immediate Release

Contact: Zoe Gentes, 202-833-8773 ext. 211, ZGentes@esa.org

 

Get a sneak peek at these new scientific papers, publishing on May 6, 2019, in the Ecological Society of America's journal Frontiers in Ecology and the Environment.

Nature reserves and wilderness areas plagued by understaffing and budget shortfalls
Extremely old trees endure in China
Fuel breaks stop wildfire in its tracks - but may create new problems
Connecting species to possible future safe havens
Wildfires as an ecosystem service

 

Nature reserves and wilderness areas plagued by understaffing and budget shortfalls

Back in 2010, park rangers may have expected to spend the following ten years carrying out anti-poaching patrols in new nature reserves and responding to invasive species threats over larger ranges. A multilateral treaty - the Convention on Biological Diversity (CBD) - had just resolved to increase the worldwide coverage of protected areas (PAs), which are meant to safeguard habitats and species from human impacts. But a new study suggests that understaffing and major funding gaps (to the tune of US$70 billion in annual shortfalls, during some periods) are making the growing number and size of PAs less effective than they should be. An analysis of nature reserves, wilderness areas, and national parks shows that nearly half of PAs report inadequate staffing and budget resources, and that even adequately resourced PAs protect sufficient habitat for fewer than 10% of their resident amphibian, bird, and mammal species. When measuring progress toward its biodiversity protection goals, the CBD risks sending a false message about conservation success in 2020 and beyond.

Author Contact: Lauren Coad (lauren.coad@me.com)

Coad L, Watson JEM, Geldmann J, et al. 2019. Widespread shortfalls in protected area resourcing undermine efforts to conserve biodiversity. Frontiers in Ecology and the Environment DOI: 10.1002/fee.2042

 

Extremely old trees endure in China

Scattered across China's vast expanse of mountains and stony soils, a few ancient woody sentinels somehow manage to keep growing, year after year, as they have for more than a thousand years. But if certain areas are especially likely to harbor very old trees, it is not clear where those hot spots are. Researchers from Zhejiang University, The Chinese Academy of Sciences, and Australian National University mapped the locations of 187 trees across China that were at least 1000 years old and found a highly significant relationship between tree age and elevation. China's oldest known living tree, for example - a 2230-year-old Qilian juniper - grows on a mountain slope 4000 meters above sea level. These remote, high-elevation areas tend to be less likely to face human disturbance, but they do face continued threats from drought and climate change. Can China's oldest trees continue to "lie low" on the high mountain slopes?  

Author Contact: David Lindenmayer (david.lindenmayer@anu.edu.au)

Liu J, Yang B, and Lindenmayer DB. 2019. The oldest trees in China and where to find them. Frontiers in Ecology and the Environment 17. DOI: 10.1002/fee.2046.

 

Fuel breaks stop wildfire in its tracks - but may create new problems

Scrubby stands of sagebrush support all kinds of species - grouse, pygmy rabbits, badgers, coyotes - but in many sagebrush landscapes, wildfires have paved the way for invasive grass species to move in. In shrub-steppe landscapes of the western US, land managers have been mowing down vegetation along many thousands of kilometers of roads and fences, creating relatively barren strips of land that are supposed to make it harder for wildfires to spread. However, the effectiveness of this strategy in this particular ecosystem has not been widely studied, and researchers from USGS, USDA, and Colorado State University hope to highlight some of the uncertainties surrounding it. Open strips of land might make it easier for predators like coyotes to move around and hunt - but they can also make it harder for grouse to migrate to different areas to reproduce or forage. The fuel breaks might prevent grass invasions that are aided by wildfire, but mowed areas can also be more easily invaded. Whether the fuel breaks will ameliorate wildfire-related issues or create a new suite of problems remains to be seen. 

Author Contact: Douglas Shinneman (dshinneman@usgs.gov)

Shinneman DJ, Germino MJ, Pilliod DS, et al. 2019. The ecological uncertainty of wildfire fuel breaks: examples from the sagebrush steppe. Frontiers in Ecology and the Environment DOI: 10.1002/fee.2045

 

Connecting species to possible future safe havens

In the future, many species will find the climate conditions in their current habitat inhospitable - but if a population is surrounded by cities or degraded landscapes, where else can it go? Climate adaptation efforts have started to focus on ways to build movement routes for species that are "blocked in." But a group of researchers from the University of Washington have noticed a problem with many of these routes - they connect "environmental migrant" species to areas that currently experience the types of conditions they will need going forward, instead of identifying areas that are predicted to look that way in the future. Building pathways to destinations that don't yet exist is complicated, but several approaches for mapping new routes will allow researchers to make more accurate predictions about future safe havens.

Author Contact: Caitlin Littlefield (clittlef@uw.edu)

Littlefield CE, Krosby M, Michalak JL, and Lawler JJ. 2019. Connectivity for species on the move: supporting climate-driven range shifts. Frontiers in Ecology and the Environment DOI: 10.1002/fee.2043.

 

Wildfires as an ecosystem service

Ancestral hominids were exposed to fire on open savanna landscapes, and humans eventually learned to control fire. Since then, fire has been key in the evolution of our social behavior - it has been used to create farmland and open spaces for grazing and hunting, to control pests, and to enhance pollination activity, as well as other benefits. In a new paper, ecologists from the USGS and the University of Valencia's Desertification Research Center outline the importance of fires, both natural and prescribed, for the functioning of ecosystems and the support of biodiversity.

Author Contact: Juli Pausas (juli.g.pausas@uv.es)

Pausas JG and Keeley JE. 2019. Wildfires as an ecosystem service. Frontiers in Ecology and the Environment 17. DOI: 10.1002/fee.2044.

Credit: 
Ecological Society of America

Obstacles to overcome before operating fleets of drones becomes reality

image: Borzoo Bonakdarpour is working on ways to improve efficiency and maintain security when operating a fleet of drones.

Image: 
Iowa State University College of Liberal Arts and Sciences

AMES, Iowa - Search and rescue crews are already using drones to locate missing hikers. Farmers are flying them over fields to survey crops. And delivery companies will soon use drones to drop packages at your doorstep.

With so many applications for the technology, an Iowa State University researcher says the next step is to expand capacity by deploying fleets of drones. But making that happen is not as simple as launching multiple aircraft at once. Borzoo Bonakdarpour, an assistant professor of computer science, says that unlike piloting a single drone by remote control, operating a fleet requires an automated system that coordinates the task while still allowing individual drones to respond independently to weather, a crash or other unexpected events.

"The operating system must be reliable and secure. The drones need to talk to one another without a central command telling each unit where to go and what to do when conditions change," Bonakdarpour said. "We also want to optimize the time and energy to complete the task, because drone batteries only last around 15 or 20 minutes."

To tackle this problem, Bonakdarpour and his colleagues developed a mathematical model to calculate the cost - time and energy - to complete a task based on the number of drones and recharging stations available. The model considers the energy required for each drone to complete its portion of the task and fly to a charging station as needed.
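
As a loose illustration of what such a cost model trades off (not the researchers' actual formulation), the sketch below estimates how mission time shrinks as drones are added while recharging overhead accumulates; all parameters are made-up assumptions.

```python
# A highly simplified sketch of the kind of cost model described above.
# Parameters and numbers are illustrative assumptions, not values from
# the study, and the real model is far more sophisticated.

import math

def mission_cost(task_minutes, n_drones, battery_minutes=15,
                 recharge_minutes=30, detour_minutes=2):
    """Estimate wall-clock time to finish a task requiring `task_minutes`
    of total flight time, split evenly across `n_drones`, when each drone
    must detour to a charging station whenever its battery runs out."""
    per_drone_flight = task_minutes / n_drones
    # Recharge stops per drone (battery is assumed full at launch).
    stops = max(0, math.ceil(per_drone_flight / battery_minutes) - 1)
    overhead = stops * (detour_minutes + recharge_minutes)
    return per_drone_flight + overhead

# The paper's model weighs time against energy; this toy version only
# tracks wall-clock time to show how recharging overhead enters the cost.
for n in (1, 2, 4):
    print(n, round(mission_cost(60, n), 1))
```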

On paper the solution is relatively simple for a team of computer scientists, but Bonakdarpour says moving from theory to implementation is not as easy. "As we work on one problem, we actually find new problems we must solve. It's challenging, but that's also what makes it exciting," he said.

For example, if a battery lasts 15 minutes in the lab, it may drop to 10 minutes on a hot or cold day outside. Locating charging stations is another issue. The optimal placement may be in the middle of a lake and inaccessible in reality.

Managing tradeoff between energy and security

Based on their model, Bonakdarpour, Anh-Duy Vu with McMaster University, Canada; and Ramy Medhat with Google in Waterloo, Canada, developed four operating methods - three offline optimization techniques and one online algorithm. While an offline technique is limited because the preprogrammed flight paths do not allow drones to respond to unexpected events or changing conditions, Bonakdarpour says it provides the foundation for the online algorithm to operate.

The researchers conducted a series of simulations (see video) using four drones to test for efficiency and security. They found the online algorithm successfully managed the security-energy tradeoff within the energy limits of the drones. The fleet completed all assigned tasks and more than half of the authentication checks. The researchers recently presented the findings at the International Conference on Cyber-Physical Systems in Canada.

Defending against hackers, rogue drones

Operating an automated fleet of drones poses security risks that are less of a concern when piloting a single drone by remote control. Bonakdarpour says automated drones need to receive GPS signals frequently to keep track of their position. If the signal drops or the drones fly into a GPS-denied area, it can quickly become a problem.

"If you're driving your car and lose GPS, your driving skills don't depend on that signal. You may miss an exit, but loss of signal for a minute is usually not a big deal. However, with drones just a few seconds is not tolerable," Bonakdarpour said.

Software bugs or errors may cause a drone to fly off course and fail to follow directions to complete the mission. Bonakdarpour says hackers can also send false signals or fly a rogue drone that impersonates part of the fleet. While finding solutions will take time, Bonakdarpour says the technology exists to make it happen. However, it will also take industry support to build infrastructure and charging stations, as well as regulatory changes to allow fleets of drones to operate.

Credit: 
Iowa State University

Teaching happiness to dementia caregivers reduces their depression, anxiety

CHICAGO --- Caring for family members with dementia -- which is on the rise in the U.S. -- causes significant emotional and physical stress that increases caregivers' risk of depression, anxiety and death.

A new method of coping with that stress - teaching caregivers how to focus on positive emotions - reduced their anxiety and depression after six weeks, a new national Northwestern Medicine study reports. It also resulted in better self-reported physical health and positive attitudes toward caregiving.

"The caregivers who learned the skills had less depression, better self-reported physical health, more feelings of happiness and other positive emotions than the control group," said lead study author Judith Moskowitz, professor of medical social sciences at Northwestern University Feinberg School of Medicine. She also is director of research at the Osher Center for Integrative Medicine at Feinberg.

The positive-emotion intervention does not require a licensed therapist and can be widely implemented, making it accessible and affordable for busy caregivers.

The paper will be published May 2 in Health Psychology.

"Nationally we are having a huge increase in informal caregivers," Moskowitz said. "People are living longer with dementias like Alzheimer's disease, and their long-term care is falling to family members and friends. This intervention is one way we can help reduce the stress and burden and enable them to provide better care."

Most current approaches to helping caregivers focus on education about dementia or problem solving around challenging behaviors but haven't specifically addressed reducing the emotional burden of providing care.

The intervention designed by Moskowitz and colleagues included eight skills that evidence shows increase positive emotions. They include noticing and capitalizing on positive events, gratitude, mindfulness, positive reappraisal, personal strengths, attainable goals and acts of kindness. (More details below.)

Moskowitz wasn't sure how many caregivers would be able to complete the program because "they are such a stressed, burdened group. But they were engaged and committed, which speaks to how much they need programs like this," she said.

Currently there are 5.5 million people in the United States diagnosed with Alzheimer's disease, a number that could increase to 16 million by 2050. The average life expectancy post diagnosis is eight to 10 years, although some people live as long as 20 years.

In the trial, 170 dementia caregivers were randomly assigned to either the intervention group in which they learned positive emotion skills such as recognizing a daily positive event and keeping a gratitude journal, or to a control group in which they filled out a daily questionnaire about their emotions. The positive emotion skill sessions, called LEAF (Life Enhancing Activities for Family caregivers), were presented by a facilitator via web conference, reaching caregivers across the U.S. The web delivery is especially important for caregivers who live in rural areas without local caregiver support services, Moskowitz said.

In addition to Northwestern, the study was conducted out of the University of California San Francisco (UCSF). Dr. Glenna Dowling of the UCSF School of Nursing was the co-principal investigator on the study.

In six weekly sessions, caregivers reviewed positive emotion skills and then had daily homework to practice the skills, including audio recordings. If the topic was acts of kindness, for example, their homework was to go out and practice an act of kindness.

All participants filled out a questionnaire about their depression, anxiety, physical health and caregiver burden at the start and completion of the study.

LEAF participants had a 7 percent greater drop in depression and a 9 percent greater drop in anxiety compared to the control group. Participants in the intervention group decreased from showing moderate symptoms of depression relative to the population norm, to falling within the normal range of depressive symptoms by the post-intervention assessment. In contrast, participants in the control condition showed a smaller decrease in depression scores and remained within the mild to moderate range.

One participant wrote, "The LEAF study and the techniques I learned by participating in it have brought about a serenity and calmness to my life and to that of my husband. We have both benefitted from my changed attitude."

Another commented, "Doing this study helped me look at my life, not as a big neon sign that says, 'DEMENTIA' in front of me, but little bitty things like, 'We're having a meal with L's sister, and we'll have a great visit.' I'm seeing the trees are green, the wind is blowing. Yeah, dementia is out there, but I've kind of unplugged the neon sign and scaled down the size of the letters."

Skills taught to participants in the study:

1. Recognizing a positive event each day

2. Savoring that positive event and logging it in a journal or telling someone about it

3. Starting a daily gratitude journal

4. Listing a personal strength each day and noting how you used this strength recently

5. Setting an attainable goal each day and noting your progress

6. Reporting a relatively minor stressor each day, then listing ways in which the event can be positively reappraised or reframed

7. Understanding small acts of kindness can have a big impact on positive emotion and practicing a small act of kindness each day

8. Practicing mindfulness through paying attention to daily experiences and with a daily 10-minute breathing exercise, concentrating on the breath

Moskowitz will launch a new study funded by the National Institute on Aging in which she will compare the facilitated version of the intervention (the one shown to be effective in this study) to a self-guided online version of the intervention (without a facilitator). If the self-guided version is as effective as the facilitated one, the LEAF program can be implemented widely at relatively low cost to help the growing number of dementia caregivers in the U.S., she said.

Credit: 
Northwestern University

Do additives help the soil?

image: UBC Okanagan soil scientist Miranda Hart says there may be potential environmental consequences from adding bio-fertilizers into soil.

Image: 
UBC Okanagan

A UBC researcher is using her latest study to question whether soil additives are worth their salt.

Miranda Hart, who teaches biology at UBC's Okanagan campus, says that despite decades of use, adding bio-fertilizers to soil could have environmental consequences. It's common practice for farmers to use bio-fertilizers to improve crop production; the added microorganisms are meant to live in the soil, creating a natural and healthy growing environment.

However, after a multi-year study on four different crop fields, Hart says the inoculants may not be doing much for the soil. The study, which involved researchers from Agriculture and Agri-Food Canada, was published recently in Science of The Total Environment.

"There are so many companies producing microbes and they are lobbying farmers to be part of a green revolution," says Hart. "These products are considered more environmentally friendly than fertilizers and pesticides, but there is no evidence they are working or that they are even able to establish, or grow, in the soil."

Arbuscular mycorrhizal (AM) fungi live in and around plant roots, helping the plants take up nutrients. Hart explains that many farmers will use commercially produced AM fungi to improve soil quality and increase yields. However, after the study, she says there is still little evidence that the inoculants work.

"It's very hard to determine if the microbes established in the soil," she says. "What we showed is that they often didn't establish. And even when they did, there was no difference in crop performance."

Hart's research team studied four fields during the course of two growing seasons in Saskatchewan and Alberta. For their study, a common commercial AM fungal inoculant was introduced into the fields.

The results showed extreme variation, she says. There were areas where the inoculant failed to establish in some fields, while it grew prolifically in others. In one site, it became invasive and took over the resident fungal community in less than a year.

"Bio-fertilizers have been sold for decades and it's an industry worth millions of dollars," says Hart. "An important take away from this study is that there seemed to be no effect on the crops. If the farmer invested thousands on the inoculate, it may have been a waste of money."

Hart's second takeaway is the general lack of knowledge about what these inoculants are actually doing to the land.

"I'm particularly concerned because there is no evidence that these inoculates are helping the environment," she adds. "What we're doing is releasing invasive species into the environment and we don't know the long-term effect of what's happening to the soil."

Credit: 
University of British Columbia Okanagan campus

Environmentalists call for global PFAS ban, including in firefighting foam

(Gothenburg, Sweden): Industry fire-safety experts from the oil and gas and aviation sectors are joining with firefighter trade unions to urge governments to protect human health and the environment with a global ban on the toxic chemical PFOA, and to reject loopholes for its use in firefighting foams. PFOA and other per- and polyfluoroalkyl substances (PFAS) are used widely across many industrial and domestic applications, including textiles, food packaging, stain- and oil-resistant treatments, and industrial processes. Fluorinated firefighting foam is a leading cause of water contamination with toxic chemicals that are associated with cancer, endocrine disruption, and harm to fetal development.

The upcoming 9th Conference of the Parties to the Stockholm Convention on Persistent Organic Pollutants is scheduled to address a global ban on PFOA as the UN meeting commences next week (April 29-May 10). A key issue will be whether an exemption should be granted for continued PFOA use in firefighting foams. Industry fire-safety experts assert that no exemption is needed because cost-effective fluorine-free alternatives work as well or better than PFOA- and other PFAS-containing foams. Unlike PFAS-containing foams, fluorine-free alternatives do not cause long-term harm to human health and the environment or incur the extremely high cleanup costs of PFAS-containing foams.

The Stockholm Convention's scientific expert body recommended global elimination of PFOA due to its toxicity, persistence, bioaccumulation in the food chain, and ability to travel long distances. They also recommended strengthening the listing of PFOS in the treaty by closing a large number of loopholes. Since PFOA and PFOS have been used in firefighting foams, the expert body addressed alternatives to them, warning against using the entire class of PFAS substances in firefighting foams, "due to their persistence and mobility, as well as their potential negative environmental, human health and socioeconomic impacts." (POPRC-14/2)

In their new report, the fire safety experts demonstrate that PFAS alternatives to PFOA and PFOS are similarly toxic and even harder to control, leading to increased pollution, exposure, and presence in the food chain. In contrast, world-class airports and major companies have thrown their weight behind fluorine-free firefighting foams.

All 27 major Australian airports have transitioned to fluorine-free firefighting foams, as have major hub airports including Dubai, Dortmund, Stuttgart, London Heathrow, Gatwick, Edinburgh, Manchester, London City, Leeds-Bradford, Copenhagen and Auckland, along with other European airports such as Billund, Guernsey, Bristol, Blackpool and Köln-Bonn.

Kim T. Olsen, Head of Copenhagen Airport Rescue and Firefighting Academy, noted that Copenhagen Airport is still working on the clean-up and remediation of PFAS contamination from fluorinated (AFFF) firefighting foams and stated that, "Fluorine-free foam is the future. I see no reason to keep on polluting the environment with AFFF types of foam when the fluorine-free foam is just as efficient."

Major players in the oil and gas and transportation sectors have also shifted to fluorine-free foams, including Equinor, BP, ExxonMobil, Total, Caltex, Gazprom, Bayern Oil, JO Tankers, and Odfjell. Some military users, including the Danish and Norwegian armed forces, have also moved to fluorine-free foams.

Lars Ystanes, Environmental Specialist at Equinor (formerly Statoil), noted that "We can remove a polluting chemical from use without compromising safety and at reasonable cost... [At Equinor] We have investigated and verified all aspects of the fluorine-free foam used, RF1-AG, with respect to operational firefighting efficiency, health, and safety, freeze protection, aging, etc. We regard the new fluorine-free foam as a fully acceptable and even better replacement for AFFF."

Nigel Holmes, a government regulator with the Department of Science and the Environment in Queensland, Australia, called out the fluorine chemical industry, stating, "A significant failure by the fluorochemicals industry and those using their chemicals in products has been to neglect to meet their international obligations under the Precautionary Principle - one of the tenets at the heart of the Stockholm Convention and a major test of the need for concern and action on persistent organic pollutants."

A recent PFAS study of a large cohort of Australian firefighters found significant elevations of PFAS blood levels, far in excess of the general population in Australia. The study underscores the urgent need for action to protect human health. Commander Mick Tisbury, President of the United Firefighters Union of Australia, and Commander of the Melbourne Metropolitan Fire Brigade (MFB) commented, "Melbourne Metropolitan Fire Brigade transitioned to non-persistent, fluorine-free firefighting foam in 2014, after extensive testing on live fire scenarios. Since then, every Class B fire that MFB has responded to has been extinguished with fluorine-free foam. Recently there has been misleading information circulated by various people with vested interests regarding the effectiveness of fluorine-free foam (F3) on flammable liquid fires. Based on MFB's experience, Solberg RF3x6 foam concentrate has performed just as well as our previous fluorinated AFFF concentrate."

"Governments should listen carefully to industry fire safety professionals and firefighters who actually put out fires, not the polluting fluorine chemical industry who is lobbying for loopholes to continue selling their toxic products," said Pamela Miller, a convener of the expert group and Co-Chair of IPEN. "Water is a precious resource and clean water a fundamental human right; now is the time to fulfill the Stockholm Convention's protective objective and stop polluting it."

Credit: 
IPEN

Patients with diabetes are 40% more likely to be readmitted to the hospital

WASHINGTON-- Patients with diabetes and low blood glucose have higher rates of death following hospital discharge, according to a study published in the Endocrine Society's Journal of Clinical Endocrinology & Metabolism.

The cost for hospital readmissions within 30 days of discharge is estimated to be close to $25 billion per year in the U.S. Patients with diabetes are frequently admitted to the hospital. Unfortunately, many of them face a high risk of re-hospitalization or even death after discharge because of factors like hypoglycemia, or low blood glucose.

"In our novel nationwide study, we examined data of almost 1 million hospitalizations at the VA health care system," said the study's first author, Elias Spanakis, M.D., of the Baltimore VA Medical Center and the University of Maryland School of Medicine in Baltimore, Md. "We found that patients with diabetes who are discharged with low or even near normal glucose values during the last day of the hospital stay are at a higher risk of dying or being readmitted to the hospital."

In the nationwide cohort study, researchers examined 843,978 admissions of patients with diabetes at Veterans Affairs hospitals over a 14-year period to determine readmission and mortality rates. They found that patients with diabetes had higher 30-day readmission rates; higher 30-, 90- and 180-day post-discharge mortality; and a higher combined 30-day readmission/mortality rate when their blood glucose was below 100 mg/dl.

"Although future studies are needed, physicians should avoid discharging patients with diabetes from the hospital until glucose values above 100 mg/dl are achieved during the last day of the hospitalization," Dr. Spanakis said.

Other authors of the study include: Guillermo E. Umpierrez of the Emory University School of Medicine in Atlanta, Ga.; Tariq Siddiqui, Min Zhan, Soren Snitker, and Jeffrey C. Fink of the University of Maryland School of Medicine; and John D. Sorkin of the Baltimore Veterans Affairs Medical Center GRECC (Geriatric Research, Education, and Clinical Center) in Baltimore, Md.

The study received funding support from the U.S. Department of Veterans Affairs Clinical Sciences Research and Development Service, the Baltimore VA Patient Safety Center of Inquiry, the U.S. Public Health Service, the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute on Aging, and the Baltimore VA Geriatric Research, Education, and Clinical Center.

The study, "Association of Glucose Concentrations at Hospital Discharge with Readmissions and Mortality: A Nationwide Cohort Study," will be published online, ahead of print.

The Society has embarked on a multi-year quality improvement project, the Hypoglycemia Prevention Initiative, to design and test clinical interventions in primary care settings that aim to decrease the number of patients at high risk of hypoglycemia and the frequency and severity of their episodes.

Credit: 
The Endocrine Society

Keto diet has potential in military, researchers say

COLUMBUS, Ohio - A new study has researchers hopeful that a ketogenic diet could prove useful in the military, where obesity is an ongoing challenge, both in terms of recruiting soldiers and keeping them fit for service.

The Ohio State University study included 29 people, most of whom were members of the campus ROTC. For three months, 15 of the participants followed a ketogenic diet and a comparison group of 14 peers ate their normal diet.

Ketogenic diets are low in carbohydrates and emphasize moderate consumption of protein, with fat consumed to satiety. They aim to create a state of nutritional ketosis - which occurs when the body burns fat, rather than carbohydrates, for energy. The ketogenic diet is often used to control seizures in epilepsy and also is being studied and applied in a variety of other areas, including endurance sports and diabetes management.

In the study, which appears in the journal Military Medicine, participants on the keto diet lost an average of almost 17 pounds and were able, with support of counselors, to maintain ketosis for 12 weeks. As a group, they lost more than 5 percent of their body fat, almost 44 percent of their belly, or visceral, fat and had a 48 percent improvement in insulin sensitivity - a marker that predicts risk of diabetes.

The comparison group of participants, who consumed diets that were at least 40 percent carbohydrates (based on food diaries they kept), experienced none of those changes.

Although a relatively small research project, this is the largest published study of a well-formulated ketogenic diet in military personnel, said study senior author Jeff Volek, a professor of human sciences.

The ketogenic diets in the study included no caloric restrictions, just guidance about what to eat and what to avoid. Carbs were restricted to about 30 to 50 grams daily, with an emphasis on nuts and non-starchy vegetables. Food was also provided, either as groceries the keto dieters could use to prepare meals themselves or as pre-prepared frozen meals.

Keto diet participants had near-daily check-ins during which they reported blood ketone measurements from a self-administered finger-prick test and received feedback, usually through text messages, from the research team. Ketosis was defined as a blood concentration of ketones, chemicals made in the liver, between 0.5 and 5.0 mM (millimolar).
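
As a rough illustration of how a check-in could apply that definition, here is a minimal Python sketch; the function name and the coaching messages are invented for the example and are not part of the study protocol.

    # Illustrative only: applies the quoted ketosis range (0.5-5.0 mM) to a
    # self-reported finger-prick reading and returns a coaching-style message.
    def ketone_feedback(ketones_mM):
        if ketones_mM < 0.5:
            return "Below nutritional ketosis - consider trimming carbs toward 30-50 g/day."
        if ketones_mM <= 5.0:
            return "In nutritional ketosis - keep up current food and drink choices."
        return "Above 5.0 mM - unusually high; re-test and check in with the study team."

    print(ketone_feedback(1.2))  # prints the "In nutritional ketosis" message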

"Depending on their readings, we would talk about their food and drink choices and suggest they adjust their diet to maintain ketosis," said lead author Richard LaFountain, a postdoctoral researcher at Ohio State.

Both groups, whose schedules included regular resistance training, showed comparable physical performance levels at the end of the study. This was important because it's difficult to lose weight without losing some lean muscle mass and physical function, Volek said.

"We showed that a group of people with military affiliation could accept a ketogenic diet and successfully lose weight, including visceral adipose tissue, a type of fat strongly associated with chronic disease. This could be the first step toward a bigger study looking at the potential benefits of ketogenic eating in the armed forces," said Volek, who has authored books on the benefits of low-carb diets and is a founder of a company seeking to help people with type 2 diabetes through ketogenic diets and a virtual health care model.

The study results come with caveats. The group that followed the ketogenic diet chose to be in the test group, something that scientists call self-selection. Studies in which participants are randomized are preferred, but the research team said they wanted to do this pilot study in a group eager to adhere to the diet. The keto group also had a higher average body mass index at the start of the study - 27.9 versus 24.9 in the comparison group - meaning they had more fat to lose.

About seven in 10 people who are otherwise eligible to enter military service in the United States are considered unfit because of their weight, LaFountain said.

Officers or trainees on military bases likely could maintain a ketogenic diet based on the various foods that are already offered at typical meals, but more options could be added to support this weight-loss strategy, he said.

Added Volek, "The military has called obesity a national security crisis. One of the potential benefits of this diet in the military is that you can lose weight without having to count calories, which could be difficult in training or while on active duty. In this study, they ate as much as they wanted - they just ate differently."

Credit: 
Ohio State University

A novel method for assessing combined risk of multiple tap water pollutants

WASHINGTON - The array of toxic pollutants in California drinking water could cause more than 15,000 cases of cancer, according to a peer-reviewed EWG study that is the first ever to assess the cumulative risk from all contaminants in the state's public water systems.

In a paper published today in the journal Environmental Health, EWG scientists used a novel analytical method that calculated the combined health impacts of carcinogens and other toxic contaminants in 2,737 community water systems in California.

"This cumulative approach is common in assessing the health impacts of exposure to air pollutants but has never before been applied to drinking water contaminants," said Tasha Stoiber, Ph.D., a senior scientist at EWG and the lead author. "Right now, policymakers set health limits one chemical at a time. This doesn't match reality. Multiple contaminants are often detected in drinking water across the U.S."

This lifetime cumulative cancer risk estimate for California should be considered conservative because mixtures of contaminants may be even more toxic than the sum of individual chemicals.
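
In outline, the cumulative estimate adds up the lifetime cancer risk attributed to each detected contaminant in a water system and then scales by the population served. The sketch below shows that arithmetic in Python with invented numbers; the contaminant names and risk values are placeholders, not figures from the EWG paper.

    # Illustrative additive risk calculation; all values are hypothetical.
    per_contaminant_risk = {
        "arsenic": 2.0e-4,                 # lifetime excess cancer risk
        "disinfection_byproducts": 5.0e-5,
        "hexavalent_chromium": 1.0e-5,
    }

    # Cumulative estimate: sum the individual risks. As noted above, this is
    # conservative if mixtures are more toxic than the sum of their parts.
    cumulative_risk = sum(per_contaminant_risk.values())

    population = 1000  # e.g. a small community water system
    print("Cumulative lifetime risk: %.1e" % cumulative_risk)
    print("Expected cases per %d people: %.2f" % (population, cumulative_risk * population))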

"This could and should be a big deal," said Olga Naidenko, Ph.D., EWG's senior science advisor. "We need to prioritize the treatment of our tap water. This novel approach to risk assessment offers a significant improvement over the current model and, if adopted, will be a huge step toward improving public health. It will help communities and policymakers evaluate the best options to treat drinking water."

Water systems in California with the highest risk serve smaller communities of fewer than 1,000 people. In these communities, exposure to arsenic is the biggest contributor to the increased cancer risk, and they need improved infrastructure and resources to provide safe drinking water to their residents.

This assessment is based on water quality reports published by the California State Water Resources Control Board and on data collected under the EPA's Unregulated Contaminant Monitoring Rule.

Credit: 
Environmental Working Group

Making glass more clear

image: Multiscale modeling of a polymer glass to predict its temperature-dependent properties.

Image: 
Wenjie Xia/NIST

EVANSTON, Ill. -- Not everything about glass is clear. How its atoms are arranged and behave, in particular, is startlingly opaque.

The problem is that glass is an amorphous solid, a class of materials that lies in the mysterious realm between solid and liquid. Glassy materials also include polymers, or commonly used plastics. While it might appear to be stable and static, glass' atoms are constantly shuffling in a frustratingly futile search for equilibrium. This shifty behavior has made the physics of glass nearly impossible for researchers to pin down.

Now a multi-institutional team including Northwestern University, North Dakota State University and the National Institute of Standards and Technology (NIST) has designed an algorithm with the goal of giving polymeric glasses a little more clarity. The algorithm lets researchers build coarse-grained models that can be used to design materials and to predict their continually changing, dynamic behavior. Called the "energy renormalization algorithm," it is the first to accurately predict glass' mechanical behavior at different temperatures and could speed the discovery of new materials designed with optimal properties.

"The current process of materials discovery can take decades," said Northwestern's Sinan Keten, who co-led the research. "Our approach scales molecular simulations up by roughly a thousand times, so we can design materials faster and examine their behavior."

"Although glassy materials are all around us, scientists still struggle to understand their properties, such as their fluidity and diffusion as temperature or composition vary," said Jack F. Douglas, a NIST research fellow, who co-led the work with Keten. "This lack of understanding is a serious limitation in the rational design of new materials."

The study was published recently in the journal Science Advances. Wenjie Xia, an assistant professor of civil and environmental engineering at North Dakota State University, was the paper's first author.

Glass' strange behavior stems from the way it is made. It starts as a hot pool of molten material that is then rapidly cooled. Although the final material wants to reach equilibrium in a cooled state, it is highly susceptible to changing temperatures. If the material is heated, its mechanical properties can change dramatically. This makes it difficult for researchers to efficiently predict the mechanical properties by using existing molecular simulation techniques.

"As simple as glass looks, it's a very strange material," said Keten, an associate professor of mechanical engineering and civil and environmental engineering in Northwestern's McCormick School of Engineering. "It is amorphous and doesn't have an equilibrium structure, so it's constantly evolving by slow movements of its molecules. And then there is a lot of variation in how it evolves depending on temperature and molecular features of each glassy material. These processes take a very long time to compute in molecular simulations. Speeding up computations is only possible if we can map the positions of the molecules to simpler structural models."

Glass' structure is in stark contrast to a crystalline solid, in which atoms are arranged in an ordered, predictable and symmetrical manner. "It's easy to map atoms in crystalline materials because they have a repeating structure," Keten explained. "Whereas in an amorphous material, it is difficult to map the structure due to the lack of long-range order."

"Because of the amorphous and disordered nature of glass, its properties could vary with temperature substantially, making the prediction of its physical behavior extremely difficult," Xia added. "Now, we have found a new way to solve this problem."

To address this challenge, Keten, Douglas, Xia and their collaborators designed their algorithm to factor in the many ways glass molecules would move or not move depending on varying temperatures over time. To calculate the position of each atom within glass would be painstakingly slow and tedious -- even for a high-powered algorithm -- to compute. So Keten and his collaborators used "coarse-grained modeling," a simplified approach that looks at clusters of atoms rather than single atoms. Their new methodology efficiently creates parameters for the interactions among these coarser particles so that the model can capture the dramatic slow-down in molecular motion as the glassy material cools down.
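
One way to picture the approach is an interaction strength between coarse-grained beads that is dialed up as the temperature drops, so the simplified model slows down the way the fully atomistic one does. The Python sketch below is schematic only: the sigmoidal form and all parameter values are illustrative assumptions, not the published parameterization.

    import numpy as np

    # Schematic temperature-dependent renormalization of the coarse-grained
    # interaction strength (reduced units; invented parameter values).
    def renormalized_epsilon(T, eps_hot=0.8, eps_cold=1.4, T_ref=400.0, k=0.02):
        # Smoothly interpolates from eps_hot at high temperature to eps_cold
        # at low temperature, strengthening bead-bead cohesion on cooling.
        return eps_cold + (eps_hot - eps_cold) / (1.0 + np.exp(-k * (T - T_ref)))

    for T in np.linspace(200, 600, 5):  # temperatures in kelvin
        print("T = %5.1f K -> epsilon = %.3f" % (T, renormalized_epsilon(T)))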

"We cannot do an atom-by-atom simulation for even glass films of nanoscale thickness because even that would be too large," Keten said. "That's still millions of molecules. The coarse-grained models allow us to study larger systems comparable to experiments done in the lab."

So far, Keten and his team have checked their algorithm against three already well-characterized and very different types of polymeric glass-forming liquids. In each case, the algorithm accurately predicts the known dynamic properties across a large range of temperatures.

"Explaining the physics of glasses has famously been one of the biggest problems that scientists haven't been able to solve," Keten said. "We're getting closer to understanding their behavior and solving the mystery."

Credit: 
Northwestern University

Inorganic perovskite absorbers for use in thin-film solar cells

image: By co-evaporation of cesium iodide and lead iodide, thin layers of CsPbI3 can be produced even at moderate temperatures. An excess of cesium leads to stable perovskite phases.

Image: 
J. Marquez-Prieto/HZB

Teams all over the world are working intensively on the development of perovskite solar cells. The focus is on what are known as metal-organic hybrid perovskites, whose crystal structure combines inorganic elements such as lead and iodine with an organic molecule.

Completely inorganic perovskite semiconductors such as CsPbI3 have the same crystalline structure as hybrid perovskites, but contain an alkali metal such as caesium instead of an organic molecule. This makes them much more stable than hybrid perovskites, but usually requires an extra production step at very high temperature - several hundred degrees Celsius. For this reason, inorganic perovskite semiconductors have thus far been difficult to integrate into thin-film solar cells that cannot withstand high temperatures. A team headed by Dr. Thomas Unold has now succeeded in producing inorganic perovskite semiconductors at moderate temperatures so that they might also be used in thin-film cells in the future.

The physicists designed an innovative experiment in which they synthesised and analysed many combinations of material within a single sample. Using co-evaporation of caesium iodide and lead iodide, they produced thin layers of CsPbI3, systematically varying the amounts of these elements while keeping the substrate temperature below 60 degrees Celsius.
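
In effect, such a gradient sample turns composition screening into a table lookup: each position corresponds to a caesium-to-lead ratio and a set of measured properties, and the best-performing composition can be read off directly. The small Python sketch below illustrates that idea with invented numbers, not data from the HZB experiment.

    # Illustrative only: pick the Cs/Pb ratio with the longest measured
    # carrier lifetime from a composition-gradient sample (invented data).
    measurements = [
        (0.8, 12.0),   # (Cs/Pb ratio, carrier lifetime in ns)
        (1.0, 35.0),
        (1.2, 61.0),   # caesium excess
        (1.4, 58.0),
    ]

    best_ratio, best_lifetime = max(measurements, key=lambda m: m[1])
    print("Best Cs/Pb ratio: %.1f (lifetime %.0f ns)" % (best_ratio, best_lifetime))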

"A combinatorial research approach like this allows us to find optimal production parameters for new material systems much faster than with the conventional approach that typically requires 100 samples to be produced for 100 different compositions", explains Unold. Through careful analysis during synthesis and the subsequent measurements of the optoelectronic properties, they were able to determine how the composition of the thin film affects the material properties.

Their measurements show that the structural properties as well as important optoelectronic properties of the material are sensitive to the ratio of caesium to lead. In particular, an excess of caesium promotes a stable perovskite phase with good charge-carrier mobilities and lifetimes.

In cooperation with the HZB Young Investigator Group of Prof. Steve Albrecht, these optimized CsPbI3 layers were used to demonstrate perovskite solar cells with an initial efficiency of more than 12% and stable performance close to 11% for over 1200 hours. "We have shown that inorganic perovskite absorbers might also be suitable for use in thin-film solar cells if they can be manufactured adequately. We believe that there is great room for further improvements", says Unold.

Credit: 
Helmholtz-Zentrum Berlin für Materialien und Energie