Tech

Adding a carbon atom transforms 2D semiconducting material

image: Schematic of plasma assisted carbon-hydrogen species doping in the WS2 lattice.

Image: 
Fu Zhang/Penn State

A technique that introduces carbon-hydrogen molecules into a single atomic layer of the semiconducting material tungsten disulfide dramatically changes the electronic properties of the material, according to Penn State researchers, who say the doped material could be used to create new types of components for energy-efficient photoelectric devices and electronic circuits.

"We have successfully introduced the carbon species into the monolayer of the semiconducting material," said Fu Zhang, doctoral student in materials science and engineering lead author of a paper published online today (May26) in Science Advances.

Prior to doping (adding carbon), the semiconductor, a transition metal dichalcogenide (TMD), was n-type, or electron conducting. After carbon atoms were substituted for sulfur atoms, the one-atom-thick material developed a bipolar effect: a p-type (hole-conducting) branch alongside the n-type branch. The result was an ambipolar semiconductor.

"The fact that you can change the properties dramatically by adding as little as two atomic percent was something unexpected," Mauricio Terrones, senior author and distinguished professor of physics, chemistry and materials science and engineering.

According to Zhang, once the material is highly doped with carbon, the researchers can produce a degenerate p-type with a very high carrier mobility. "We can build n+/p/n+ and p+/n/p+ junctions with properties that have not been seen with this type of semiconductor," he said.

In terms of applications, semiconductors are used in a wide range of industrial devices; in this case, most of those devices will be transistors of different sorts. A laptop contains around 100 trillion transistors.

"This type of material might also be good for electrochemical catalysis," Terrones said. "You could improve conductivity of the semiconductor and have catalytic activity at the same time."

There are few papers in the field of doping 2D materials, because it requires multiple processes to take place simultaneously under specific conditions. The team's technique uses a plasma to lower the temperature at which methane can be cracked - split apart - to 752 degrees Fahrenheit (400 degrees Celsius). At the same time, the plasma has to be strong enough to knock a sulfur atom out of the atomic layer and substitute a carbon-hydrogen unit in its place.

"It's not easy to dope monolayers, and then to measure carrier transport is not trivial," Terrones says. "There is a sweet spot where we are working. Many other things are required."

Susan Sinnott, professor and head of the Department of Materials Science and Engineering, provided theoretical calculations that guided the experimental work. When Terrones and Zhang observed that doping the 2D material was changing its optical and electronic properties - something they had never seen before - Sinnott's team identified the best atom to dope with and predicted the resulting properties, which corresponded with the experiment.

Saptarshi Das, assistant professor of engineering science and mechanics, and his group, then measured the carrier transport in various transistors with increasing amounts of carbon substitution. They watched the conductance change radically until they had completely changed the conduction type from negative to positive.

"It was very much a multidisciplinary work," Terrones says.

Credit: 
Penn State

New approaches to study the genetics of autism spectrum disorder may lead to new therapies

Canadian neuroscientists are using novel experimental approaches to understand autism spectrum disorder, from studying multiple variations in a single gene to investigating networks of interacting genes, with the aim of finding new treatments for the disorder.

Autism spectrum disorder (ASD) affects more than 1% of children, yet most cases are of unknown or poorly defined genetic origin. It is a highly variable disorder, both in its presentation and in its genetics - hundreds of risk genes have been identified. One key to understanding and ultimately treating ASD is to identify common molecular mechanisms underlying this genetically heterogeneous disorder. Four Canadian researchers presented the results of unique approaches to understanding ASD at the 14th Canadian Neuroscience Meeting in Toronto, on May 24, 2019.

One common feature seen in animal models of ASD is a shift in the ratio of excitation (or activation) to inhibition (or inactivation) of neurons. Mutations that cause too much excitation of neurons result in autistic-like behaviour and, paradoxically, so do mutations that cause too much inhibition. Precise control of the excitation-to-inhibition ratio is therefore viewed as key to regulating social behaviour. Dr. Melanie Woodin, at the University of Toronto, investigated a protein that is critically important for neuronal inhibition, called KCC2. When KCC2 fails to work, inhibitory neurotransmission (through a neurotransmitter called GABA) switches to being excitatory. Breakdown of GABA inhibition is a hallmark of abnormal brain activity in conditions such as epilepsy, pain and some forms of autism.

Regulation of KCC2 therefore appears to be a valid target for treatment of ASD. Dr. Woodin's team has identified the first comprehensive list of proteins that interact with and modify the action of KCC2. Their work has shown that one protein, called Pacsin1, interacts with KCC2 and can regulate its abundance and localisation. These results suggest that manipulating KCC2-interacting proteins could be an efficient way to regulate KCC2 in a neuron-specific manner.

More than a thousand mutations and other forms of genetic variation affecting several hundred genes have been linked to ASD. Given this large number, analyzing each gene on its own is not a feasible approach. To make sense of these data, one approach is to determine whether multiple risk genes function in common signaling pathways, which act as "hubs" where risk genes converge. To identify such hubs or networks, Dr. Karun Singh, from McMaster University, studies proteins in mouse models of ASD, but also in cells taken from patients and induced to grow in petri dishes, called induced pluripotent stem cells or iPSCs. By looking at how the proteins from cells carrying ASD-associated mutations interact, his team has been able to identify specific signaling pathways affected in ASD. Targeting these networks may lead to new therapies for ASD.

Dr. Catharine Rankin, from the University of British Columbia, presented data obtained by analyzing ASD-associated genes in a much simpler species, the nematode worm C. elegans. Her team tested 87 different strains of worms, each carrying a mutation in a worm gene similar to a human ASD-associated gene. Automated analysis of morphology, locomotion, sensitivity and habituation (the simplest form of learning) in these worms revealed certain genes that caused strikingly similar effects in the worms. Further analysis revealed that these similarities resulted from previously undescribed interactions between the affected genes.

A great advantage of studying ASD genes in the nematode worm is the possibility of easily editing genes and studying the effects of these modifications with automated systems. This provides a means to analyse a large range of genes, thereby revealing unique and/or shared functions. Candidate drugs can also be tested for their ability to rescue the deficits associated with different gene modifications. Furthermore, Dr. Rankin demonstrated the feasibility of using CRISPR-Cas9 gene editing to insert or remove ASD-related genes at specific times, to study their role in development.

The final speaker in this session was Dr. Kurt Haas, from the University of British Columbia, who discussed the role of a gene called PTEN. Mutations in PTEN have been strongly linked to both cancer and ASD, yet the mechanisms through which this occurs were unclear. Dr. Haas reported on results obtained by seven laboratories at that institution, which collaborated to test 105 variants of PTEN in yeast, fly, worm, rat and human cell lines to understand the impact of different mutations in this gene in a wide diversity of cellular environments. This analysis allowed the researchers to determine, with high confidence, the specific impact of ASD-associated mutations on various protein functions.

By using a range of different approaches, Drs. Woodin, Singh, Rankin and Haas have increased our understanding of the genetic underpinnings of autism spectrum disorder. These studies pave the way to the identification of new potential therapeutic targets to treat this disorder.

Credit: 
Canadian Association for Neuroscience

Paper stickers to monitor pathogens are more effective than swabs

Washington, DC - May 24, 2019 - Using paper stickers to collect pathogens on surfaces where antisepsis is required, such as in food processing plants, is easier and less expensive than swabbing, yet similarly sensitive. The research is published in Applied and Environmental Microbiology, a journal of the American Society for Microbiology.

"The porous structure of paper seems able to collect and accumulate [bacterial] contamination," said first author Martin Bobal, technical assistant, Christian Doppler Laboratory for Monitoring of Microbial Contaminants, Department for Farm Animal and Public Health in Veterinary Medicine, The University of Veterinary Medicine, Vienna, Austria. "This requires mechanical contact, for example by hand, or by splashed liquids."

In the study, the investigators, who specialize in monitoring cheese production, chose to target the organism Listeria monocytogenes, a pathogen that commonly contaminates raw milk and other raw dairy products, including soft cheeses such as Brie, Camembert, and Feta. They used qPCR, a method of quantifying DNA, to determine the numbers of these bacteria as well as of Escherichia coli.

Surfaces in food processing plants must be cleaned regularly. Unlike swabs, artificially contaminated stickers provided a record of contamination that took place over at least two weeks, despite washing, flushing with water, or wiping with Mikrozid, an alcohol-based disinfectant, to simulate cleansing practices. "Recovery [of DNA] from the stickers was rather variable, at around 30%, but did not distinctly decrease after 14 days of storage," the report stated. "This suggests the possibility of sampling over two weeks as well."

In a proof-of-concept experiment, the researchers placed stickers at multiple locations that frequently undergo hand contact -- such as on light switches and door handles -- for one to seven days. Both bacterial species were detected repeatedly on these stickers.

Unlike stickers, swabbing is impractical on complex surfaces, such as door handles, light switches and other fomites (objects likely to be contaminated with, and to spread, infectious organisms), and does a poor job of picking up bacteria from dry surfaces, according to the report.

"In the food production facility, conventional swabbing as a standard method can only expose a momentary snapshot," the investigators wrote. "For example, it is not possible to reconstruct information about yesterday's status after cleansing has been performed. In addition, when moistened swabs or contact-plate sampling methods are used, they bring with them growth medium into a supposedly clean environment, making subsequent disinfection necessary."

The investigators showed that plain paper stickers could trap not only bacterial pathogens and their DNA, but also dead and viable-but-non-culturable pathogens, which can also pose a threat to public health.

"A major advantage of stickers is in handling: they are easy to distribute and to collect," the authors concluded. "We put the stickers directly into the DNA-extraction kit's first protocol step. We did not encounter any inhibition or loss of information during DNA-extraction, nor during qPCR," said Mr. Bobal.

Credit: 
American Society for Microbiology

Deletion in mouse neutrophils offers clues to pathogenesis in multiple sclerosis

image: Etty "Tika" Benveniste.

Image: 
UAB

BIRMINGHAM, Ala. - Multiple sclerosis is an autoimmune disease that damages the insulating sheaths of nerve cells of the central nervous system. People with the disease can lose vision, suffer weak limbs, show degenerative symptoms and exhibit impaired cognition.

While multiple sclerosis has 17 approved disease-modifying therapies, none of them halts disease progression. Thus, researchers use a mouse model called experimental autoimmune encephalomyelitis, or EAE, to discover disease mechanisms that may translate into treatments for patients with multiple sclerosis. Researchers at the University of Alabama at Birmingham now report in the journal JCI Insight how dysregulated neutrophils cause damage in a severe form of the mouse model called atypical EAE, which attacks cerebellar brain tissue.

"These findings contribute to our understanding of the pathobiology of brain-targeted EAE and document the detrimental role of neutrophils in autoimmune neuroinflammation," said Etty "Tika" Benveniste, Ph.D., and Hongwei Qin, Ph.D., senior authors of the study. Benveniste and Qin are professor and associate professor in the UAB Department of Cell, Developmental and Integrative Biology.

Much evidence points to a detrimental role for neutrophils in multiple sclerosis. Neutrophils are the most common white blood cells in the body, but their exact function in multiple sclerosis is unclear. Their normal, healthy function is protective: neutrophils speed to sites of infection or inflammation, aided by their ability to crawl out of the bloodstream and into affected tissues. In everyday life, people encounter them as the most prevalent cells found in pus as an infection clears.

Several strands of evidence from previous studies at UAB and elsewhere formed the groundwork for this current study. These include 1) UAB researchers and others have shown that brain-targeted, atypical EAE is predominantly a neutrophil-driven disease; 2) dysregulation of a cell-signaling pathway called JAK/STAT is associated with multiple sclerosis and EAE; and 3) a cytokine called granulocyte colony-stimulating factor is known to have a detrimental role in multiple sclerosis, as it correlates with neurological disability and lesion burden in patients.

In their experiments, the UAB researchers artificially dysregulated the JAK/STAT signaling system by using mice with a deleted Socs3 gene. Socs3 is a negative regulator of the JAK/STAT pathway; in the absence of Socs3, the JAK/STAT pathway is overly active and promotes inflammation. As a result, mice with Socs3 deletion in their myeloid cells have a severe, brain-targeted, atypical form of EAE that is associated with cerebellar neutrophil infiltration and over-activation of STAT3, one of the seven STAT proteins that function in the JAK/STAT cell signaling pathway.

Using this model, the researchers found that neutrophils from the cerebellum of mice lacking Socs3 showed a hyper-activated phenotype and produced excessive amounts of reactive oxygen species, chemically active compounds that can damage cell structures. However, if mice were given treatments to neutralize the reactive oxygen species, the onset of atypical EAE was delayed and disease severity was reduced.

The mechanisms causing these changes were an enhanced STAT3 activation in Socs3-deficient neutrophils, a hyper-activated phenotype in response to granulocyte colony-stimulating factor, and an increased production of reactive oxygen species after neutrophil priming by granulocyte colony-stimulating factor. Furthermore, when compounds were given to mice to neutralize granulocyte colony-stimulating factor, the incidence and severity of atypical EAE was significantly reduced.

The researchers also sequenced messenger RNA in the Socs3-deficient neutrophils after stimulation by granulocyte colony-stimulating factor to identify the cell-signaling pathways and proteins that were most differentially affected.

"Overall, our work elucidates that hypersensitivity of granulocyte colony-stimulating factor/STAT3 signaling in Socs3-deficient mice leads to atypical EAE by enhanced neutrophil activation and increased oxidative stress, which may explain the detrimental role of granulocyte colony-stimulating factor in multiple sclerosis patients. Furthermore, the work suggests that both granulocyte colony-stimulating factor and neutrophils may be therapeutic targets in MS," Qin and Benveniste said.

Credit: 
University of Alabama at Birmingham

Did Leonardo da Vinci have ADHD?

image: ADHD is linked to a lack of dopamine, which causes impaired executive functions. In people with ADHD, the altered connections that are important for executive functions can be visualized with diffusion tractography (yellow tint), an MRI technique pioneered by Professor Marco Catani at King's College London.

Image: 
Professor Marco Catani, King's College London

Leonardo da Vinci produced some of the world's most iconic art, but historical accounts show that he struggled to complete his works. Five hundred years after his death, King's College London researcher Professor Marco Catani suggests the best explanation for Leonardo's inability to finish projects is that the great artist may have had Attention Deficit and Hyperactivity Disorder (ADHD).

In an article in the journal BRAIN, Professor Catani lays out the evidence supporting his hypothesis, drawing on historical accounts of Leonardo's work practices and behaviour. As well as explaining his chronic procrastination, ADHD could have been a factor in Leonardo's extraordinary creativity and achievements across the arts and sciences.

Professor Catani, from the Institute of Psychiatry, Psychology & Neuroscience at King's, says: 'While impossible to make a post-mortem diagnosis for someone who lived 500 years ago, I am confident that ADHD is the most convincing and scientifically plausible hypothesis to explain Leonardo's difficulty in finishing his works. Historical records show Leonardo spent excessive time planning projects but lacked perseverance. ADHD could explain aspects of Leonardo's temperament and his strange mercurial genius.'

ADHD is a behavioural disorder characterised by continuous procrastination, the inability to complete tasks, mind-wandering and a restlessness of the body and mind. While most commonly recognised in childhood, ADHD is increasingly being diagnosed among adults including university students and people with successful careers.

Leonardo's difficulties with sticking to tasks were pervasive from childhood. Accounts from biographers and contemporaries show Leonardo was constantly on the go, often jumping from task to task. Like many of those suffering with ADHD, he slept very little and worked continuously night and day by alternating rapid cycles of short naps and time awake.

Alongside reports of erratic behaviour and incomplete projects from fellow artists and patrons, including Pope Leone X, there is indirect evidence to suggest that Leonardo's brain was organised differently compared to average. He was left-handed and likely to be both dyslexic and have a dominance for language in the right-hand side of his brain, all of which are common among people with ADHD.

Perhaps the most distinctive and yet disruptive side of Leonardo's mind was his voracious curiosity, which both propelled his creativity and also distracted him. Professor Catani suggests ADHD can have positive effects, for example mind-wandering can fuel creativity and originality. However, while beneficial in the initial stages of the creative process, the same traits can be a hindrance when interest shifts to something else.

Professor Catani, who specialises in treating neurodevelopmental conditions like autism and ADHD, says: 'There is a prevailing misconception that ADHD is typical of misbehaving children with low intelligence, destined for a troubled life. On the contrary, most of the adults I see in my clinic report having been bright, intuitive children but develop symptoms of anxiety and depression later in life for having failed to achieve their potential.'

'It is incredible that Leonardo considered himself as someone who had failed in life. I hope that the case of Leonardo shows that ADHD is not linked to low IQ or lack of creativity but rather the difficulty of capitalising on natural talents. I hope that Leonardo's legacy can help us to change some of the stigma around ADHD.'

Credit: 
King's College London

The extraordinary powers of bacteria visualized in real time

The global spread of antibiotic resistance is a major public health issue and a priority for international microbiology research. In his paper to be published in the journal Science, Christian Lesterlin, Inserm researcher at Lyon's "Molecular Microbiology and Structural Biochemistry" laboratory (CNRS/Université Claude Bernard Lyon 1), and his team were able to film the process of antibiotic resistance acquisition in real time, discovering a key but unexpected player in its maintenance and spread within bacterial populations.

This spread of antibiotic resistance is for the most part due to the capacity of bacteria to exchange genetic material through a process known as bacterial conjugation. The systematic sequencing of pathogenic or environmental strains has identified a wide variety of genetic elements that can be transmitted by conjugation and that carry resistance to most - if not all - classes of antibiotics currently used in the clinical setting. However, the process of transferring genetic material from one bacterium to another in vivo, the time needed to acquire this resistance once the new genetic material is received and the effect of antibiotic molecules on this resistance remained unelucidated.

Real-time visualization

The researchers chose to study the acquisition of Escherichia coli resistance to tetracycline, a commonly used antibiotic, by placing a bacterium that is sensitive to tetracycline in the presence of one that is resistant. Previous studies have shown that such resistance involves the bacterium's ability to use "efflux pumps" in its membrane to expel the antibiotic before it can exert its destructive effect. These specific efflux pumps eject the antimicrobial molecules from the bacteria, thereby conferring on them a certain level of resistance.

In this experiment, the transfer of the DNA encoding one specific efflux pump - the TetA pump - from a resistant bacterium to a sensitive bacterium was observed using fluorescent labelling. Thanks to live-cell microscopy, the researchers simply had to track the progression of the fluorescence to see how the gene for the pump migrated from one bacterium to the other and how it was expressed in the recipient bacterium.

The researchers revealed that in just one to two hours, the single-stranded DNA fragment carrying the efflux pump gene was converted into double-stranded DNA and then expressed as a functional protein, conferring tetracycline resistance on the recipient bacterium.

https://www.youtube.com/watch?v=sIyHkkO6pxE&feature=youtu.be

The transfer of DNA from the donor bacteria (green) to the recipient bacteria (red) is revealed by the appearance of red localization foci.

The rapid expression of the newly acquired genes is revealed by the production of green fluorescence in the recipient bacteria.

How is resistance organized in the presence of an antibiotic?

Tetracycline's mode of action is well-known to scientists: it kills bacteria by binding to their translational machinery, thereby blocking any possibility of producing proteins. Following this line of reasoning, it would be expected that by adding the antibiotic to the previous culture medium, the TetA efflux pump would not be produced and the bacteria would die. However, the researchers observed that, paradoxically, the bacteria were able to survive and efficiently develop resistance, suggesting the implication of another factor essential to the process of acquiring resistance.

The scientists discovered that this phenomenon can be explained by the existence of another efflux pump that is present in virtually all bacteria: AcrAB-TolC. While this generalist pump is less efficient than TetA, it is still able to expel a small amount of antibiotic from the cell, meaning that the bacterium can maintain minimal protein synthesis activity. Therefore, if the bacterium is lucky enough to have received a resistance gene through conjugation, the TetA pump is produced and the bacterium becomes durably resistant.

This study opens up new avenues in the search for similar mechanisms in bacteria other than E. coli, and for different antibiotics. "We could even consider a therapy combining an antibiotic and a molecule able to inhibit this generalist pump. While it is still too soon to envisage the therapeutic application of such an inhibitor, numerous studies are currently being performed in this area given the possibility of reducing antibiotic resistance and preventing its spread to the various bacterial species," concludes Lesterlin.

Credit: 
INSERM (Institut national de la santé et de la recherche médicale)

Carnegie Mellon researchers create soft, flexible materials with enhanced properties

image: Left: A single liquid metal nanodroplet grafted with polymer chains.
Right: Schematic of polymer brushes grafted from the oxide layer of a liquid metal droplet.

Image: 
Carnegie Mellon University

A team of polymer chemists and engineers from Carnegie Mellon University has developed a new methodology that can be used to create a class of stretchable polymer composites with enhanced electrical and thermal properties. These materials are promising candidates for use in soft robotics, self-healing electronics and medical devices. The results are published in the May 20 issue of Nature Nanotechnology.

In the study, the researchers combined their expertise in foundational science and engineering to devise a method that uniformly incorporates eutectic gallium indium (EGaIn), a metal alloy that is liquid at ambient temperatures, into an elastomer. This created a new material -- a highly stretchable, soft, multi-functional composite that has a high level of thermal stability and electrical conductivity.

Carmel Majidi, a professor of Mechanical Engineering at Carnegie Mellon and director of the Soft Machines Lab, has conducted extensive research into developing new, soft materials that can be used for biomedical and other applications. As part of this research, he developed rubber composites seeded with nanoscopic droplets of liquid metal. The materials seemed to be promising, but the mechanical mixing technique he used to combine the components yielded materials with inconsistent compositions, and as a result, inconsistent properties.

To surmount this problem, Majidi turned to Carnegie Mellon polymer chemist and J.C. Warner University Professor of Natural Sciences Krzysztof Matyjaszewski, who developed atom transfer radical polymerization (ATRP) in 1994. ATRP, the first and most robust method of controlled polymerization, allows scientists to string together monomers in a piece-by-piece fashion, resulting in highly-tailored polymers with specific properties.

"New materials are only effective if they are reliable. You need to know that your material will work the same way every time before you can make it into a commercial product," said Matyjaszewski. "ATRP has proven to be a powerful tool for creating new materials that have consistent, reliable structures and unique properties."

Majidi, Matyjaszewski and Materials Science and Engineering Professor Michael R. Bockstaller used ATRP to graft polymer brushes onto the surface of EGaIn nanodroplets. The brushes were able to link together, forming strong bonds to the droplets. As a result, the liquid metal dispersed uniformly throughout the elastomer, resulting in a material with high elasticity and high thermal conductivity.

Matyjaszewski also noted that after polymer grafting, the crystallization temperature of EGaIn was suppressed from 15 °C to -80 °C, extending the droplets' liquid phase -- and thus their liquid properties -- down to very low temperatures.

"We can now suspend liquid metal in virtually any polymer or copolymer in order to tailor their material properties and enhance their performance," said Majidi. "This has not been done before. It opens the door to future materials discovery."

The researchers envision that this process could be used to combine different polymers with liquid metal, and by controlling the concentration of liquid metal, they can control the properties of the materials they are creating. The number of possible combinations is vast, but the researchers believe that with the help of artificial intelligence, their approach could be used to design "made-to-order" elastomer composites that have tailored properties. The result will be a new class of materials that can be used in a variety of applications, including soft robotics, artificial skin and bio-compatible medical devices.

Credit: 
Carnegie Mellon University

Trace metal exposure among pregnant women living near fracking wells in Canada

The Journal of Exposure Science and Environmental Epidemiology last week revealed the findings of a 2016 pilot study that measured pregnant women's exposure to environmental contaminants in northeastern British Columbia, an area of intensive natural-gas production through hydraulic fracturing (fracking). The study, directed by Marc-André Verner, a professor at the School of Public Health (ESPUM) of Université de Montréal (UdeM), showed that the women had higher concentrations of some metals, especially barium, aluminium, strontium and manganese, in their hair and urine compared to the general population.

"These results are of concern because a previous study showed that relatively high concentrations of barium, aluminium, strontium and manganese are found in rock samples from B.C.'s Montney Formation, where natural gas is extracted via fracking," said Élyse Caron-Beaudoin, a post-doctoral researcher at EPSUM and the study's lead author. "In addition, recent studies analyzing wastewater from fracking generally have shown higher concentrations of the same metals."

"It's impossible to say with certainty whether fracking caused the women's exposure to these metals," she added, "but our study does provide further evidence that this could be the case."

Community-initiated studies

Initially requested by people living near the natural-gas production areas, the study was jointly launched by UdeM researchers and the region's First Nations and public-health authorities. These communities wanted clear answers about how living near natural-gas developments was affecting their health.

"We used data from the Canadian Health Measures Survey (CHMS) to compare trace metal concentrations in the urine and hair of the 29 pregnant women we studied versus the general population," said Caron-Beaudoin. "However, for some metals we had to use exposure data collected in France, because similar data has never been collected in sufficient quantity in Canada."

The researchers found that concentrations of manganese in the women's urine were 10 times higher than in the reference populations. As well, the women's hair had greater concentrations of aluminium (16 times higher), barium (three times higher) and strontium (six times higher) than in the reference populations in France. Furthermore, barium and strontium concentrations were higher in hair samples from indigenous participants than in those from non-indigenous participants.

Is there a health risk?

At this stage of their investigations, the researchers cannot comment on the presence or absence of a risk to human health. Much of the data essential for this type of toxicological evaluation is still lacking, including epidemiological studies assessing the association between pregnant women's exposure to these trace metals and adverse effects on children's health. "We are aware that people would like to have answers right away, but we are only at the beginning of a long process of scientific inquiry," said Caron-Beaudoin. "Other studies are already underway or being planned to address this legitimate concern."

Pending questions

Data on water quality in the study area's Peace River Valley remains scarce, and the data that has been collected to date is highly variable. In addition, there is no systematic water-monitoring program in the region.

A previous study on exposure to volatile organic compounds such as benzene in the same group of pregnant women was published in 2018 in Environment International. Its findings suggested benzene exposure is also potentially higher among study participants, especially indigenous women, than in the general Canadian population.

To learn more, Caron-Beaudoin has returned to the Peace River Valley to recruit a second group of pregnant women so the researchers can measure their exposure to different contaminants. She and her team will also measure concentrations of these contaminants in water and indoor air. In addition, as part of an epidemiological study, they are assessing the overall health of babies born in the region over the last 10 years.

About this study

"Urinary and hair concentrations of trace metals in pregnant women from Northeastern British Columbia, Canada: a pilot study," by Élyse Caron-Beaudoin et al., was published in the online version of the Journal of Exposure Science & Environmental Epidemiology in May 2019. The study was funded by a grant from the Université de Montréal Public Health Research Institute, by the West Moberly First Nations, and by research subsidies from the Fonds de Recherche Santé - Québec (FRQS) and the Canadian Institutes of Health Research.

Credit: 
University of Montreal

Bringing human-like reasoning to driverless car navigation

With aims of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments.

Human drivers are exceptionally good at navigating roads they haven't driven on before, using observation and simple tools. We simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Driverless cars, however, struggle with this basic reasoning. In every new area, the cars must first map and analyze all the new roads, which is very time consuming. The systems also rely on complex maps -- usually generated by 3-D scans -- which are computationally intensive to generate and process on the fly.

In a paper being presented at this week's International Conference on Robotics and Automation, MIT researchers describe an autonomous control system that "learns" the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple GPS-like map. Then, the trained system can control a driverless car along a planned route in a brand-new area, by imitating the human driver.

Similarly to human drivers, the system also detects any mismatches between its map and features of the road. This helps the system determine if its position, sensors, or mapping are incorrect, in order to correct the car's course.

To train the system initially, a human operator controlled a driverless Toyota Prius -- equipped with several cameras and a basic GPS navigation system -- collecting data from local suburban streets including various road structures and obstacles. When deployed autonomously, the system successfully navigated the car along a preplanned path in a different forested area, designated for autonomous vehicle tests.

"With our system, you don't need to train on every road beforehand," says first author Alexander Amini, an MIT graduate student. "You can download a new map for the car to navigate through roads it has never seen before."

"Our objective is to achieve autonomous navigation that is robust for driving in new environments," adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. "For example, if we train an autonomous vehicle to drive in an urban setting such as the streets of Cambridge, the system should also be able to drive smoothly in the woods, even if that is an environment it has never seen before."

Joining Rus and Amini on the paper are Guy Rosman, a researcher at the Toyota Research Institute, and Sertac Karaman, an associate professor of aeronautics and astronautics at MIT.

Point-to-point navigation

Traditional navigation systems process data from sensors through multiple modules customized for tasks such as localization, mapping, object detection, motion planning, and steering control. For years, Rus's group has been developing "end-to-end" navigation systems, which process inputted sensory data and output steering commands, without a need for any specialized modules.

Until now, however, these models were strictly designed to safely follow the road, without any real destination in mind. In the new paper, the researchers advanced their end-to-end system to drive from goal to destination, in a previously unseen environment. To do so, the researchers trained their system to predict a full probability distribution over all possible steering commands at any given instant while driving.

The system uses a machine learning model called a convolutional neural network (CNN), commonly used for image recognition. During training, the system watches and learns how to steer from a human driver. The CNN correlates steering wheel rotations to road curvatures it observes through cameras and an inputted map. Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries.

"Initially, at a T-shaped intersection, there are many different directions the car could turn," Rus says. "The model starts by thinking about all those directions, but as it sees more and more data about what people do, it will see that some people turn left and some turn right, but nobody goes straight. Straight ahead is ruled out as a possible direction, and the model learns that, at T-shaped intersections, it can only move left or right."

What does the map say?

In testing, the researchers fed the system a map with a randomly chosen route. When driving, the system extracts visual features from the camera, which enables it to predict road structures. For instance, it identifies a distant stop sign or line breaks on the side of the road as signs of an upcoming intersection. At each moment, it uses its predicted probability distribution of steering commands to choose the most likely one to follow its route.

Importantly, the researchers say, the system uses maps that are easy to store and process. Autonomous control systems typically use LIDAR scans to create massive, complex maps that take roughly 4,000 gigabytes (4 terabytes) of data to store just the city of San Francisco. For every new destination, the car must create new maps, which amounts to tons of data processing. The map used by the researchers' system, however, captures the entire world using just 40 gigabytes of data.

During autonomous driving, the system also continuously matches its visual data to the map data and notes any mismatches. Doing so helps the autonomous vehicle better determine where it is located on the road. And it ensures the car stays on the safest path if it's being fed contradictory input information: If, say, the car is cruising on a straight road with no turns, and the GPS indicates the car must turn right, the car will know to keep driving straight or to stop.

"In the real world, sensors do fail," Amini says. "We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road."

Credit: 
Massachusetts Institute of Technology

Engineered bacteria could be missing link in energy storage

ITHACA, N.Y. - One of the big issues with sustainable energy systems is how to store electricity that's generated from wind, solar and waves. At present, no existing technology provides large-scale storage and energy retrieval for sustainable energy at a low financial and environmental cost.

Engineered electroactive microbes could be part of the solution; these microbes are capable of borrowing an electron from solar or wind electricity and using the energy to break apart carbon dioxide molecules from the air. The microbes can then take the carbon atoms to make biofuels, such as isobutanol or propanol, that could be burned in a generator or added to gasoline, for example.

"We think biology plays a significant role in creating a sustainable energy infrastructure," said Buz Barstow, assistant professor of biological and environmental engineering at Cornell University. "Some roles will be supporting roles and some will be major roles, and we're trying to find all of those places where biology can work."

Barstow is the senior author of "Electrical Energy Storage With Engineered Biological Systems," published in the Journal of Biological Engineering.

Adding engineered electrochemical (synthetic or non-biological) elements could make this approach even more productive and efficient than microbes alone. At the same time, having many options also creates too many engineering choices. The study supplies information to determine the best design based on needs.

"We are suggesting a new approach where we stitch together biological and non-biological electrochemical engineering to create a new method to store energy," said Farshid Salimijazi, a graduate student in Barstow's lab and the paper's first author.

Natural photosynthesis already offers an example for storing solar energy at a huge scale, and turning it into biofuels in a closed carbon loop. It captures about six times as much solar energy in a year as all civilization uses over the same time. But, photosynthesis is really inefficient at harvesting sunlight, absorbing less than one percent of the energy that hits photosynthesizing cells.

Electroactive microbes let us replace biological light harvesting with photovoltaics. These microbes can absorb electricity into their metabolism and use this energy to convert CO2 to biofuels. The approach shows a lot of promise for making biofuels at higher efficiencies.

Electroactive microbes also allow for the use of other types of renewable electricity, not just solar electricity, to power these conversions. Also, some species of engineered microbes may create bioplastics that could be buried, thereby removing carbon dioxide (a greenhouse gas) from the air and sequestering it in the ground. Bacteria could be engineered to reverse the process, by converting a bioplastic or biofuel back to electricity. These interactions can all occur at room temperature and pressure, which is important for efficiency.

The authors point out that non-biological methods for using electricity for carbon fixation (assimilating carbon from CO2 into organic compounds, such as biofuels) are starting to match and even exceed microbes' abilities. However, electrochemical technologies are not good at creating the kinds of complex molecules necessary for biofuels and polymers. Engineered electroactive microbes could be designed to convert these simple molecules into much more complicated ones.

Combinations of engineered microbes and electrochemical systems could greatly exceed the efficiency of photosynthesis. For these reasons, a design that marries the two systems offers the most promising solution for energy storage, according to the authors.

"From the calculations that we have done, we think it's definitely possible," Salimijazi said.

The paper includes performance data on biological and electrochemical designs for carbon fixation. The current study is "the first time that anybody has gathered in one place all of the data that you need to make an apples-to-apples comparison of the efficiency of all these different modes of carbon fixation," Barstow said.

In the future, the researchers plan to use the data they have assembled to test out all possible combinations of electrochemical and biological components, and find the best combinations out of so many choices.

Credit: 
Cornell University

A light matter: Understanding the Raman dance of solids

image: Members of Professor Nakamura's laboratory at Tokyo Tech work with the equipment used for the ultrafast dual pump-probe experiments.

Image: 
Tokyo Institute of Technology

Scientists at Tokyo Institute of Technology and Keio University investigated the excitation and detection of photogenerated coherent phonons in polar semiconductor GaAs through an ultrafast dual pump-probe laser for quantum interferometry.

Imagine a world where computers can store, move, and process information at exponential speeds using what we currently term as waste vibrations--heat and noise. While this may remind us of a sci-fi movie, with the coming of the nano-age, this will very soon be reality. At the forefront of this is research in a branch of the quantum realm: quantum photonics.

Laws of physics help us understand the efficient ways of nature. However, their application to our imperfect lives often involves the most efficient ways of utilizing the laws of physics. Because most of our lives revolve around exchange of information, coming up with faster ways of communicating has always been a priority. Most of this information is encoded in the waves and vibrations that utilize electromagnetic fields that propagate in space or solids and randomly interact with the particles in solid devices, creating wasteful byproducts: heat and noise. This interaction propagates via two channels, absorption of light or scattering by light, both leading to random excitation of atoms that make up the solid. By converting this random excitation of particles into coherent, well-controlled vibrations of the solid, we can turn the tables--instead of using light, we can use sound (noise!) to transport information. The energy of this lattice vibration is packaged in well-defined bundles called phonons.

However, the scope of this relies on understanding two fundamental points: how coherent phonons are generated, and the lifetime over which they retain their "information-transporting ability." This was the question that researchers from Nakamura's laboratory at Tokyo Institute of Technology (Tokyo Tech) sought to answer, in collaboration with Prof. Shikano of the Quantum Computing Center, Keio University.

Optical phonons describe a mode of vibration in which neighboring atoms of the lattice move in opposite directions. "Because impulsive absorption (IA) and impulsive stimulated Raman scattering (ISRS) cause zapping of such vibrations in the solid lattice leading to phonon creation," claims Nakamura, "our aim was to shed light on narrowing down this dichotomy." The researchers used dual pump-probe spectroscopy, in which an ultrafast laser pulse is split into a stronger "pump" that excites the GaAs sample and a weaker "probe" beam irradiated on the "shaken" sample. The pump pulse is itself split into two collinear pulses with a slight shift in their wave pattern, producing a pair of mutually phase-locked pulses. The phonon amplitude is enhanced or suppressed in fringes, depending on constructive or destructive interference (Figs. 1 and 2).
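As a rough, textbook-style illustration (not an equation taken from the paper) of why the phonon amplitude forms such fringes: if each pump pulse impulsively launches a lattice oscillation at the optical-phonon frequency Omega, then two pulses separated by a delay tau drive displacements that simply add,

% illustrative superposition of two impulsively driven lattice oscillations
\[
  Q(t) \;\propto\; \sin(\Omega t) + \sin\!\bigl(\Omega (t-\tau)\bigr)
       \;=\; 2\cos\!\left(\frac{\Omega\tau}{2}\right)
             \sin\!\left(\Omega\Bigl(t-\frac{\tau}{2}\Bigr)\right),
\]

so the net phonon amplitude is modulated as |cos(Omega*tau/2)|: maximal when tau is a whole number of phonon periods and suppressed at half-integer multiples. The electronic polarization, by contrast, interferes at the much higher optical carrier frequency, which is why stepping the pump-pump delay within a single light cycle reveals the finer electronic fringes described below.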

The probe beam reads the interference fringe pattern by reading off changes in optical properties (reflectivity) of the sample that arise due to the fringe pattern-dependent vibrations in the lattice. This method of reading off the changes in wave pulses to determine the sample characteristics is called quantum interferometry.

Nakamura and the team state, "Thus, by varying the time delay between the pump pulses in steps shorter than the light cycle and pump-probe pulse, we could detect the interference between electronic states as well as that of optical phonons, which shows temporal characteristics of the generation of coherent phonons via light-electron-phonon interactions during the photo excitation." From the quantum mechanical superposition, the researchers could extract the key piece of information: generation of the phonons was dominantly linked to scattering (ISRS) rather than to absorption.

Advances in the generation of ultrashort optical pulses have continually pushed the ability to probe and manipulate the structural composition of materials. With the foundations laid by such studies in understanding the vibrations in solids, the next step will involve using them as building blocks for transistors and other electronic devices, and, who knows, soon our future!

The paper has been selected as an Editors' Suggestion in Physical Review B.

Credit: 
Tokyo Institute of Technology

A simple, yet versatile, new design for chaotic oscillating circuitry inspired by prime numbers

image: The simple idea underlying the design of the circuit is linking together some ring oscillators having lengths equal to the smallest odd prime numbers, such as 3, 5 and 7 (top). Even a simple sum between sine waves having such periods yields a complicated-looking signal (bottom), but the interactions between real oscillators lead to a much richer scenario.

Image: 
Ludovico Minati

Researchers at Tokyo Institute of Technology have found a simple, yet highly versatile, way to generate "chaotic signals" with various features. The technique consists of interconnecting three "ring oscillators," effectively making them compete against each other, while controlling their respective strengths and their linkages. The resulting device is rather small and efficient, thus suitable for emerging applications such as realizing wireless networks of sensors.

Our ability to recreate the signals found in natural systems, such as those in brains, swarms, and the weather, is useful for our understanding of the underlying principles. These signals can be very complex, to the extreme case of the so-called "chaotic signals." Chaos does not mean randomness; it represents a very complicated type of order. Minute changes in the parameters of a chaotic system can result in greatly different behaviors. Chaotic signals are difficult to predict, but they are present in lots of different scenarios.

Unfortunately, the generation of chaotic signals with desired features is a difficult task. Creating them digitally is in some cases too power consuming, and approaches based on analog circuits are necessary. Now, researchers in Japan, Italy, and Poland propose a new approach for creating integrated circuits that can generate chaotic signals. This research was the result of a collaboration between scientists from Tokyo Institute of Technology (Tokyo Tech), in part funded by the World Research Hub Initiative, the Universities of Catania and Trento, Italy, and the Polish Academy of Sciences in Krakow, Poland.

The research team started from the idea that cycles whose periods are set by different prime numbers cannot develop a fixed phase relationship. Surprisingly, this principle seems to have emerged in the evolution of several species of cicadas, whose life cycles follow prime numbers of years, to avoid synchronizing with each other and with predators. For example, if one tries to "tie together" oscillators with periods set to the three smallest odd primes (3, 5 and 7), the resulting signals are very complicated and chaos can readily be generated (Fig. 1).
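As a quick numerical illustration of this point (a hypothetical sketch, not the authors' circuit model), even a plain sum of three sine waves whose periods are the three smallest odd primes does not repeat for lcm(3, 5, 7) = 105 time units; the real circuit is far richer because the coupled rings interact nonlinearly.

# Illustrative sketch: sum of sine waves with periods 3, 5 and 7, as in Fig. 1.
import numpy as np

periods = np.array([3.0, 5.0, 7.0])          # smallest odd primes, as in the design
t = np.linspace(0.0, 210.0, 4201)            # two full repeat periods of the sum
signal = np.sum(np.sin(2.0 * np.pi * t[:, None] / periods), axis=1)

print("repeat period of the sum:", np.lcm.reduce(periods.astype(int)))  # 105
print("first few samples:", np.round(signal[:5], 3))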

The design started from the most traditional oscillator found in integrated circuits, called the "ring oscillator," which is small and does not require reactive components (capacitors and inductors). Such a circuit was modified so that the strengths of ring oscillators having three, five and seven stages could be controlled independently, along with the tightness of their linkages. The device could generate chaotic signals over a wide frequency spectrum, from audible frequencies to the radio band (1 kHz to 10 MHz). "Moreover, it could do so at a rather low power consumption, below one-millionth of a watt," explains Dr. Hiroyuki Ito, head of the laboratory where the prototype was designed.

Even more remarkable was the discovery that totally different types of signals could be generated depending on the slightly different characteristics of the individual prototypes (Fig. 2). For example, the researchers recorded trains of spikes quite similar to those found in biological neurons. They also found situations in which the rings "fought each other" to the point of almost completely suppressing their activity, a phenomenon called "oscillation death."

"This circuit draws its beauty from a really essential shape and principle, and simplicity is key to realizing large systems operating collectively in a harmonious manner, especially when it enriched by small differences and imperfections, such as those found in the realized circuits," says Dr. Ludovico Minati, lead author of the study. The team believes in its future ability to be a building block for many different applications. They will work on integrating this circuit with sensors to, for example, measure chemical properties in the soil. Additionally, they will create networks of these oscillators on single computer chips interconnected in manners that resemble biological neural circuits. They hope to realize certain operations while consuming many times less power than a traditional computer.

Credit: 
Tokyo Institute of Technology

Stroke deaths in England halved in the first decade of the 21st century

Deaths from stroke in England halved during the first decade of the 21st century, mainly as a result of improved survival due to better care, finds a study published by The BMJ today.

The number of strokes, and the number of people who died from stroke, decreased by 20% and 40%, respectively, over this period.

But despite this overall reduction, the researchers warn that stroke rates actually increased in people aged under 55, suggesting that stroke prevention efforts need to be strengthened in younger adults.

Deaths from stroke have been falling worldwide for several decades, but it is unclear to what extent this is due to a decrease in the number of strokes occurring (event rate), the number of people dying from stroke (case fatality), or a combination of the two.

To explore this further, researchers at the Nuffield Department of Population Health, University of Oxford used hospital and mortality records to analyse data for all residents of England aged 20 and older who were admitted to hospital with stroke or died from stroke between 2001 and 2010.

Their results are based on 947,497 stroke events, including 337,085 stroke deaths, in 795,869 people. The average age at onset of stroke was 72 years for men and 76 years for women, and the average age of those who died from stroke was 79 years for men and 83 years for women.

After adjusting for age and other potentially influential factors, stroke deaths decreased by 55% during the study period.

For example in men, death rates decreased from 140 per 100,000 in 2001 to 74 per 100,000 in 2010, and in women from 128 per 100,000 in 2001 to 72 per 100,000 in 2010. And while a reduction was seen in all age groups, the largest annual reduction was in those aged 65 to 74 (-8.1% in men and -8.3% in women).

Most of this decline - 78% in men and 66% in women - was due to a reduction in case fatality, which decreased by 40% overall and in all age groups.

The remaining 22% and 34%, respectively, was due to a reduction in event rates, which decreased by 20% overall. However, in people younger than 55 years, event rates increased by 2% each year, which contrasts with the downward trend seen in other age groups.
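A back-of-the-envelope check using only the rounded figures quoted above (an illustrative calculation, not part of the study's methodology): since the mortality rate is roughly the product of the event rate and case fatality, the two contributions can be compared on a log scale.

# Illustrative arithmetic with the article's rounded figures.
import math

mortality_ratio = 1 - 0.55   # mortality in 2010 relative to 2001 (55% decline)
event_ratio     = 1 - 0.20   # stroke event rate ratio (20% decline)
fatality_ratio  = 1 - 0.40   # case-fatality ratio (40% decline)

share_fatality = math.log(fatality_ratio) / (math.log(event_ratio) + math.log(fatality_ratio))
print(f"case fatality accounts for roughly {share_fatality:.0%} of the decline")   # ~70%,
# broadly consistent with the reported 78% (men) and 66% (women)

print(f"event x fatality ratio = {event_ratio * fatality_ratio:.2f}, "
      f"reported mortality ratio = {mortality_ratio:.2f}")
# the small gap (0.48 vs 0.45) reflects rounding and the age/sex adjustments
# used in the actual analysis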

This is an observational study so can't establish cause, and the researchers point to some limitations, such as the possibility that "minor" strokes treated outside hospital may have been missed.

Nevertheless, they say the large study size over a continuous period of 10 years suggests that the findings are reliable and applicable to the whole of England.

"Our findings show that most of the reduction in stroke mortality is a result of improved survival of patients with stroke," write the authors.

"However, acute and long term management of such patients is expensive, and the NHS is already spending about 5% of its budget on stroke care. By focusing on prevention and reducing the occurrence of stroke, major resources can be conserved," they conclude.

This view is supported by researchers at the University of Toronto in a linked editorial, who say that while more people are surviving after acute stroke, prevention remains a key priority.

Targeted prevention is particularly needed in populations in which the incidence of stroke is increasing, such as in younger adults, certain racial/ethnic groups, and people in low and middle income countries, they add.

These findings expose another societal challenge: the growing share of overall stroke burden borne by survivors, they write. "Tackling this major public health problem requires a concerted global effort to improve stroke prevention, care, and surveillance," they conclude.

Credit: 
BMJ Group

Strain enables new applications of 2D materials

image: A liquid-phase graphene film deposited on a PET substrate.

Image: 
Graphene Laboratory, University of Belgrade

WASHINGTON, D.C., May 21, 2019 -- Superconductors' never-ending flow of electrical current could provide new options for energy storage and superefficient electrical transmission and generation, to name just a few benefits. But the signature zero electrical resistance of superconductors is reached only below a certain critical temperature, hundreds of degrees Celsius below freezing, and is very expensive to achieve.

Physicists from the University of Belgrade in Serbia believe they've found a way to manipulate superthin, waferlike monolayers of superconductors, such as graphene, a monolayer of carbon, thus changing the material's properties to create new artificial materials for future devices. The findings from the group's theoretical calculations and experimental approaches are published in the Journal of Applied Physics, from AIP Publishing.

"The application of tensile biaxial strain leads to an increase of the critical temperature, implying that achieving high temperature superconductivity becomes easier under strain," said the study's first author from the University of Belgrade's LEX Laboratory, Vladan Celebonovic.

The team examined how conductivity within low-dimensional materials, such as lithium-doped graphene, changed when different types of forces applied a "strain" on the material. Strain engineering has been used to fine-tune the properties of bulkier materials, but the advantage of applying strain to low-dimensional materials, only one atom thick, is that they can sustain large strains without breaking.

Conductivity depends on the movement of electrons, and although it took seven months of hard work to accurately derive the math to describe this movement in the Hubbard model, the team was finally able to theoretically examine electron vibration and transport. These models, alongside computational methods, revealed how strain introduces critical changes to doped-graphene and magnesium-diboride monolayers.
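As a rough illustration of the kind of parameter renormalization involved (not the paper's actual Hubbard-model derivation), a tight-binding picture often models the hopping energy as decaying exponentially with bond stretch; the values of t0 and the decay constant below are assumed placeholders, not numbers from the study.

```python
# Minimal sketch: how biaxial tensile strain rescales a tight-binding hopping
# parameter, one of the "material parameters" strain engineering tunes.
# t0 and beta are illustrative assumptions, not values from the paper.
import math

t0 = 2.7     # unstrained nearest-neighbour hopping, eV (typical graphene-scale value)
beta = 3.0   # assumed exponential decay constant of hopping with bond stretch

def hopping(strain):
    """Hopping energy under biaxial strain: t = t0 * exp(-beta * strain)."""
    return t0 * math.exp(-beta * strain)

for eps in (0.00, 0.02, 0.05, 0.10):
    t = hopping(eps)
    print(f"strain {eps:.0%}: t = {t:.2f} eV ({t / t0:.0%} of unstrained)")
```

Because band-structure quantities in a Hubbard-type description depend on such hopping energies, even a few percent of strain shifts the effective material parameters appreciably.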

"Putting a low-dimensional material under strain changes the values of all the material parameters; this means there's the possibility of designing materials according to our needs for all kind of applications," said Celebonovic, who explained that combining the manipulation of strain with the chemical adaptability of graphene gives the potential for a large range of potential new materials. Given the high elasticity, strength and optical transparency of graphene, the applicability could be far reaching -- think flexible electronics and optoelectric devices.

Going a step further, Celebonovic and colleagues tested how two different approaches to strain engineering thin monolayers of graphene affected the 2D material's lattice structure and conductivity. For liquid-phase "exfoliated" graphene sheets, the team found that stretching strains pulled apart individual flakes and so increased the resistance, a property that could be used to make sensors, such as touch screens and e-skin, a thin electronic material that mimics the functionalities of human skin.
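The flake-network behavior described above is what a strain sensor exploits: resistance rises with stretch. The sketch below is a generic piezoresistive estimate with assumed numbers; the gauge factor and baseline resistance are placeholders, not measurements from the paper.

```python
# Generic strain-sensor arithmetic (assumed placeholder numbers):
# the fractional resistance change is summarized by a gauge factor GF,
# with dR/R ~= GF * strain in the linear regime.
GF = 10.0      # assumed gauge factor for a percolating flake network
R0 = 1000.0    # assumed unstrained sheet resistance, ohms

def resistance(strain, gf=GF, r0=R0):
    """Linearized resistance of the film under tensile strain."""
    return r0 * (1.0 + gf * strain)

for eps in (0.001, 0.005, 0.01):
    print(f"strain {eps:.1%}: R = {resistance(eps):.0f} ohm "
          f"(+{GF * eps:.1%} relative to unstrained)")
```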

"In the atomic force microscopy study on micromechanically exfoliated graphene samples, we showed that the produced trenches in graphene could be an excellent platform in order to study local changes in graphene conductivity due to strain. And those results could be related to our theoretical prediction on effects of strain on conductivity in one-dimensional-like systems," said Jelena Pesic, another author on the paper, from the University of Belgrade's Graphene Laboratory.

Although the team foresees many challenges to realizing the theoretical calculations from this paper experimentally, they are excited that their work could soon "revolutionize the field of nanotechnology."

Credit: 
American Institute of Physics

Eastern forests shaped more by Native Americans' burning than climate change

image: Pollen and tree survey map.

Image: 
Marc Abrams, Penn State

Native Americans' use of fire to manage vegetation in what is now the Eastern United States was more profound than previously believed, according to a Penn State researcher who determined that forest composition change in the region was caused more by land use than climate change.

"I believe Native Americans were excellent vegetation managers and we can learn a lot from them about how to best manage forests of the U.S.," said Marc Abrams, professor of forest ecology and physiology in the College of Agricultural Sciences. "Native Americans knew that to regenerate plant species that they wanted for food, and to feed game animals they relied on, they needed to burn the forest understory regularly."

Over the last 2,000 years at least, according to Abrams -- who for three decades has been studying past and present qualities of eastern U.S. forests -- frequent and widespread human-caused fire resulted in the predominance of fire-adapted tree species. And in the time since burning has been curtailed, forests are changing, with species such as oak, hickory and pine losing ground.

"The debate about whether forest composition has been largely determined by land use or climate continues, but a new study strongly suggests anthropogenic fire has been the major driver of forest change in the East," said Abrams. "That is important to know because climate change is taking on an ever larger proportion of scientific endeavor."

But this phenomenon does not apply to other regions, Abrams noted. In the western U.S., for example, climate change has been much more pronounced than in the East. That region has received much more warming and much more drought, he explained.

"Here in the East, we have had a slight increase in precipitation that has ameliorated the warming," said Abrams.

To identify the drivers of forest change, the researchers used a novel approach, analyzing both pollen and charcoal fossil records along with tree-census studies to compare historic and modern tree composition in the forests of eastern North America. They looked at seven forest types in the north and central regions of the eastern United States. Those forest types encompass two distinct floristic zones -- conifer-northern hardwood and sub-boreal to the north, and oak-pine to the south.

The researchers found that in the northernmost forests, present-day pollen and tree-survey data revealed significant declines in beech, pine, hemlock and larches, and increases in maple, poplar, ash, oak and fir. In forests to the south, both witness tree and pollen records pointed to historic oak and pine domination, with declines in oak and chestnut and increases in maple and birch, based on present-day data.
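As a rough illustration of the historic-versus-modern comparison behind these findings, the sketch below contrasts genus shares from a witness-tree-style baseline with a modern survey; the numbers are hypothetical placeholders, not the study's data.

```python
# Hypothetical genus shares, for illustration only -- not the study's data.
historic = {"oak": 0.45, "pine": 0.20, "chestnut": 0.10, "maple": 0.08, "birch": 0.05}
modern   = {"oak": 0.30, "pine": 0.12, "chestnut": 0.00, "maple": 0.25, "birch": 0.12}

for genus in historic:
    change = modern[genus] - historic[genus]
    trend = "increase" if change > 0 else "decline"
    print(f"{genus:>9}: {historic[genus]:.0%} -> {modern[genus]:.0%} "
          f"({trend} of {abs(change):.0%})")
```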

"Modern forests are dominated by tree species that are increasingly cool-adapted, shade-tolerant, drought-intolerant pyrophobes -- trees that are reduced when exposed to repeated forest burning," Abrams said. "Species such as oak are largely promoted by low-to moderate-level forest fires. Furthermore, this change in forest composition is making eastern forests more vulnerable to future fire and drought."

Researchers also included human population data for the region, going back 2,000 years, to bolster their findings, which recently were published in the Annals of Forest Science. After hundreds of years of fairly stable levels of fire caused by relatively low numbers of Native Americans in the region, they report, the most significant escalation in burning followed the dramatic increase in human population associated with European settlement prior to the early 20th century. Moreover, it appears that low numbers of Native Americans were capable of burning large areas of the eastern U.S. and did so repeatedly.

After 1940, they found, fire suppression was an ecologically transformative event in all forests.

"Our analysis identifies multiple instances in which fire and vegetation changes were likely driven by shifts in human population and land use beyond those expected from climate alone," Abrams said. "After Smokey Bear came on the scene, fire was mostly shut down throughout the U.S. and we have been paying a big price for that in terms of forest change. We went from a moderate amount of fire to too much fire to near zero fire -- and we need to get back to that middle ground in terms of our vegetation management."

Credit: 
Penn State