Tech

Convincing evidence that type 2 diabetes is associated with increased risk of Parkinson's

Research from Queen Mary University of London has concluded that there is convincing evidence that type 2 diabetes is associated with an increased risk of Parkinson's disease. The same study found that there was also evidence that type 2 diabetes may contribute to faster disease progression in patients who already have Parkinson's.

Treating people with drugs already available for type 2 diabetes may reduce the risk and slow the progression of Parkinson's. Screening for and early treatment of type 2 diabetes in patients with Parkinson's may be advisable.

Previous systematic reviews and meta-analyses have produced conflicting results around the link between diabetes and the risk of Parkinson's disease. This new study, published in the journal Movement Disorders, used a meta-analysis of observational data and a meta-analysis of genetic data to evaluate the effect of type 2 diabetes on the risk and progression of Parkinson's disease.

Corresponding author Dr Alastair Noyce from Queen Mary University of London said: "This research brings together the results from many other studies to provide convincing evidence that type 2 diabetes likely affects not only Parkinson's risk, but also Parkinson's progression. There are many treatment strategies for type 2 diabetes, including prevention strategies, which may be re-purposed for the treatment of Parkinson's."

Credit: 
Queen Mary University of London

Finding key to low-cost, fast production of solid-state batteries for EVs

image: A new Georgia Tech manufacturing process could enable battery makers to produce lighter, safer, and more energy-dense batteries.

Image: 
Allison Carter, Georgia Tech

A new fabrication technique could allow solid-state automotive lithium-ion batteries to adopt nonflammable ceramic electrolytes using the same production processes as in batteries made with conventional liquid electrolytes.

The melt-infiltration technology developed by materials science researchers at the Georgia Institute of Technology uses electrolyte materials that can be infiltrated into porous yet densely packed, thermally stable electrodes.

The one-step process produces high-density composites based on pressure-less, capillary-driven infiltration of a molten solid electrolyte into porous bodies, including multilayered electrode-separator stacks.

"While the melting point of traditional solid state electrolytes can range from 700 degrees Celsius to over 1,000 degrees Celsius, we operate at a much lower temperature range, depending on the electrolyte composition, roughly from 200 to 300 degrees Celsius," explained Gleb Yushin, a professor in the School of Materials Science and Engineering at Georgia Tech. "At these lower temperatures, fabrication is much faster and easier. Materials at low temperatures don't react. The standard electrode assemblies, including the polymer binder or glue, can be stable in these conditions."

The new technique, to be reported March 8 in the journal Nature Materials, could allow large automotive Li-ion batteries to be made safer with 100% solid-state nonflammable ceramic rather than liquid electrolytes, using the same manufacturing processes as conventional liquid-electrolyte battery production. The patent-pending manufacturing technology mimics the low-cost fabrication of commercial Li-ion cells with liquid electrolytes, but instead uses solid-state electrolytes with low melting points that are melted and infiltrated into dense electrodes. As a result, high-quality multilayered cells of any size or shape could be rapidly manufactured at scale using proven tools and processes developed and optimized over the last 30 years for Li-ion production.

"Melt-infiltration technology is the key advance. The cycle life and stability of Li-ion batteries depend strongly on the operating conditions, particularly temperature," Georgia Tech graduate student Yiran Xiao explained. "If batteries are overheated for a prolonged period, they commonly begin to degrade prematurely, and overheated batteries may catch on fire. That has prompted nearly all electric vehicles (EV) to include sophisticated and rather expensive cooling systems." In contrast, solid-state batteries may only require heaters, which are significantly less expensive than cooling systems.

Yushin and Xiao are encouraged by the potential of this manufacturing process to enable battery makers to produce lighter, safer, and more energy-dense batteries.

"The developed melt-infiltration technology is compatible with a broad range of material chemistries, including so-called conversion-type electrodes. Such materials have been demonstrated to increase automotive cell energy density by over 20% now and by more than 100% in the future," said co-author and Georgia Tech research scientist Kostiantyn Turcheniuk, noting that higher density cells support longer driving ranges. The cells need high-capacity electrodes for that performance leap.

Georgia Tech's technique is not yet commercially ready, but Yushin predicts that if a significant portion of the future EV market embraces solid-state batteries, "This would probably be the only way to go," since it will allow manufacturers to use their existing production facilities and infrastructure.

"That's why we focused on this project - it was one of the most commercially viable areas of innovation for our lab to pursue," he said.

Battery cell prices hit $100 per kilowatt hour for the first time in 2020. According to Yushin, they will need to drop below $70 per kilowatt hour before the consumer EV market can fully open. Battery innovation is critical to that occurring.

The Materials Science lab team currently is focused on developing other electrolytes that will have lower melting points and higher conductivities using the same technique proven in the lab.

Yushin envisions this research team's manufacturing advance opening the floodgates to more innovation in this area.

"So many incredibly smart scientists are focused on solving very challenging scientific problems, while completely ignoring economic and technical practicality. They are studying and optimizing very high-temperature electrolytes that are not only dramatically more expensive to use in cells but are also up to five times heavier compared with liquid electrolytes," he explained. "My goal is to push the research community to look outside that chemical box."

Credit: 
Georgia Institute of Technology

New teamwork model could improve patient health care

HOUSTON - (March 8, 2021) - Health care teams must prepare for anything, including the unconventional work environments brought about by a global pandemic and social unrest.

Open communication and trust are essential for successful teamwork in challenging health care situations, as detailed in "Building effective healthcare team development interventions in uncertain times: Tips for success." The paper was authored by researchers at Rice University, the University of Texas MD Anderson Cancer Center and The Group for Organizational Effectiveness.

The study outlines a new model, developed at MD Anderson under the guidance of the researchers, with recommendations for health care team effectiveness. It can be implemented in different settings to improve a team's communication, coordination and attitudes toward one another, improving clinical care and patient outcomes.

Team effectiveness is based on an organization's culture, leadership support and a trained and capable workforce, the researchers write. In addition, all team members should have clearly defined roles and purposes, with a clear team direction. But leaders must also consider the way a team feels, how it functions in the workplace and the way its members think. This is known in team science as the ABCs: attitudes, behaviors and cognitions.

By considering all these factors, the researchers suggest teams can achieve what they call "ideal team states," which include psychological safety (the freedom to speak openly without fear of negative job consequences), trust, adaptability and resilience.

"Only after embracing these states will teams be able to reach their highest potential," said Stephanie Zajac, a leadership practitioner at MD Anderson and the study's lead author.

The researchers also offer tips for successfully implementing team development interventions (TDIs), exercises used to advance team effectiveness. They recommend team leaders actively engage and invest time in TDIs, and that the exercises be integrated with existing organizational efforts to support teams, such as individual leader and team coaching. Team leaders should also show care and concern for team members as they navigate these exercises together.

Finally, leaders are encouraged to communicate regularly with their employees, and reflect on and celebrate the progress they make.

As teams grow more diverse and face unexpected circumstances, including the health and social justice hurdles of the past year, these tips can make all the difference when running successful health care institutions, conducting innovative research and delivering high-quality and safe patient care, the researchers wrote.

Credit: 
Rice University

Sophisticated skin

Squids have long been a source of fascination for humans, providing the stuff of legend, superstition and myth. And it's no wonder -- their odd appearance, strange intelligence and mastery of the open ocean can inspire awe in those who see them.

Legends aside, squids continue to intrigue people today -- people like UC Santa Barbara professor Daniel Morse -- for much the same, albeit more scientific, reasons. Having evolved for hundreds of millions of years to hunt, communicate, evade predators and mate in the vast, often featureless expanses of open water, squids have developed some of the most sophisticated skin in the animal kingdom.

"For centuries, people have been amazed at the ability of squids to change the color and patterns of their skin -- which they do beautifully -- for camoflage and underwater communication, signaling to one another and to other species to keep away, or as attraction for mating and other kinds of signaling," said Morse, a Distinguished Professor Emeritus of Biochemistry and Molecular Genetics.

Like their cephalopod cousins the octopus and cuttlefish, squids have specialized pigment-filled cells called chromatophores that expand to expose the pigments to light, resulting in various shades of pigmentary color. Of particular interest to Morse, however, is the squids' ability to shimmer and flicker, reflecting different colors and breaking light over their skin. It's an effect that is thought to mimic the dappled light of the upper ocean -- the only feature in an otherwise stark seascape. By understanding how squids manage to fade themselves into even the plainest of backgrounds -- or stand out -- it may be possible to produce materials with the same light-tuning properties for a variety of applications.

Morse has been working to unlock the secret of squid skin for the last decade, and with support from the Army Research Office and research published in the journal Applied Physics Letters, he and co-author Esther Taxon come even closer to unraveling the complex mechanisms that underlie squid skin.

An Elegant Mechanism

"What we've discovered is that not only is the squid able to tune the color of the light that's reflected, but also its brightness," Morse said. Research had thus far has established that certain proteins called reflectins were responsible for iridescence, but the squid's ability to tune the brightness of the reflected light was still something of a mystery, he said.

Previous research by Morse had uncovered structures and mechanisms by which iridocytes -- light-reflecting cells -- in the skin of the opalescent inshore squid (Doryteuthis opalescens) can take on virtually every color of the rainbow. It happens at the cell membrane, which folds into nanoscale accordion-like structures called lamellae, forming tiny, subwavelength-wide exterior grooves.

"Those tiny groove structures are like the ones we see on the engraved side of a compact disc," Morse said. The color reflected depends on the width of the groove, which corresponds to certain light wavelengths (colors). In the squid's iridocytes, these lamellae have the added feature of being able to shapeshift, widening and narrowing those grooves through the actions of a remarkably finely tuned "osmotic motor" driven by reflectin proteins condensing or spreading apart inside the lamellae.

While materials systems containing reflectin proteins were able to approximate the iridescent color changes squid were capable of, attempts to replicate the ability to intensify brightness of these reflections always came up short, according to the researchers, who reasoned that something had to be coupled to the reflectins in squid skin, amplifying their effect.

That something turned out to be the very membrane enclosing the reflectins -- the lamellae, the same structures responsible for the grooves that split light into its constituent colors.

"Evolution has so exquisitely optimized not only the color tuning, but the tuning of the brightness using the same material, the same protein and the same mechanism," Morse said.

Light at the Speed of Thought

It all starts with a signal, a neuronal pulse from the squid's brain.

"Reflectins are normally very strongly positively charged," Morse said of the iridescent proteins, which, when not activated, look like a string of beads. Their same charge means they repel each other.

But that can change when a neural signal causes the reflectins to bind negatively charged phosphate groups that neutralize the positive charge. Without the repulsion keeping the proteins in their disordered state they fold and attract each other, accumulating into fewer, larger aggregations in the lamellae.

These aggregations exert osmotic pressure on the lamellae, a semipermeable membrane built to withstand only so much pressure created by the clumping reflectins before releasing water outside the cell.

"Water gets squished out of the accordion-like structure, and that collapses the accordion so the thickness in spacing between the folds gets reduced, and that's like bringing the grooves of a compact disc closer together," Morse explained. "So the light that's reflected can shift progressively from red to green to blue."

At the same time, the membrane's collapse concentrates the reflectins, causing an increase in their refractive index, amplifying brightness. Osmotic pressure, the motor that drives these tunings of optical properties, couples the lamellae tightly to the reflectins in a highly calibrated relationship that optimizes the output (color and brightness) to the input (neural signal). Wipe away the neural signal and the physics reverses, Morse said.
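
A simple way to see the brightness effect (again an idealized single-interface relation, not the paper's model) is through the Fresnel reflectance at normal incidence:

\[
R = \left( \frac{n_1 - n_2}{n_1 + n_2} \right)^2
\]

Concentrating the reflectins raises the refractive index n_2 of the lamellae relative to the index n_1 of the surrounding medium, so the reflectance R grows at every interface in the stack, and with it the overall brightness.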

"It's a very clever, indirect way of changing color and brightness by controlling the physical behavior of what's called a colligative property -- the osmotic pressure, something that's not immediately obvious, but it reveals the intricacy of the evolutionary process, the millennia of mutation and natural selections that have honed and optimized these processes together."

Tunable-Brightness Thin-Films

The presence of a membrane may be the vital link for the development of bioinspired thin films with the optical tuning capacity of the opalescent inshore squid.

"This discovery of the key role the membrane plays in tuning the brightness of reflectance has intriguing implications for the design of future buihybrid materials and coatings with tunable optical properties that could protect soldiers and their equipment," said Stephanie McElhinny, a program manager at the the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory.

According to the researchers, "This evolutionarily honed, efficient coupling of reflectin to its osmotic amplifier is closely analogous to the impedance-matched coupling of activator-transducer-amplifier networks in well-engineered electronic, magnetic, mechanical and acoustic systems." In this case the activator would be the neuronal signal, while the reflectins act as transducers and the osmotically controlled membranes serve as the amplifiers.

"Without that membrane surrounding the reflectins, there's no change in the brightness for these artificial thin-films," said Morse, who is collaborating with engineering colleagues to investigate the potential for a more squid skin-like thin-film. "If we want to capture the power of the biological, we have to include some kind of membrane-like enclosure to allow reversible tuning of the brightness."

Credit: 
University of California - Santa Barbara

Why odors trigger powerful memories

Other senses re-routed during evolution, but not sense of smell

Loss of smell linked to depression and poor quality of life

Smell research can help treatments for loss in COVID-19

CHICAGO -- Odors evoke powerful memories, an experience enshrined in literature by Marcel Proust and his beloved madeleine.

A new Northwestern Medicine paper is the first to identify a neural basis for how the brain enables odors to so powerfully elicit those memories. The paper shows unique connectivity between the hippocampus--the seat of memory in the brain--and olfactory areas in humans.

This new research suggests a neurobiological basis for privileged access by olfaction to memory areas in the brain. The study compares connections between primary sensory areas--including visual, auditory, touch and smell--and the hippocampus. It found olfaction has the strongest connectivity. It's like a superhighway from smell to the hippocampus.
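
In studies of this kind, "connectivity" is typically quantified as the correlation between the activity time series of two brain regions. The minimal sketch below illustrates that idea with synthetic stand-ins; the region names, signals and the Pearson-correlation measure are assumptions for illustration, not the paper's actual pipeline.

import numpy as np

# Hypothetical ROI time series; a real analysis would extract these from fMRI
# data for each primary sensory cortex and for the hippocampus.
rng = np.random.default_rng(1)
n_timepoints = 300
sensory = {name: rng.normal(size=n_timepoints)
           for name in ["olfactory", "visual", "auditory", "somatosensory"]}
hippocampus = rng.normal(size=n_timepoints)

def connectivity(a, b):
    """Functional connectivity as the Pearson correlation of two time series."""
    return np.corrcoef(a, b)[0, 1]

# Compare each sensory system's coupling to the hippocampus; the study reports
# olfaction showing the strongest such connectivity.
for name, ts in sensory.items():
    print(f"{name:>14} -> hippocampus: r = {connectivity(ts, hippocampus):+.3f}")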

"During evolution, humans experienced a profound expansion of the neocortex that re-organized access to memory networks," said lead investigator Christina Zelano, assistant professor of neurology at Northwestern University Feinberg School of Medicine. "Vision, hearing and touch all re-routed in the brain as the neocortex expanded, connecting with the hippocampus through an intermediary--association cortex--rather than directly. Our data suggests olfaction did not undergo this re-routing, and instead retained direct access to the hippocampus."

The paper, "Human hippocampal connectivity is stronger in olfaction than other sensory systems" was published March 4 in the journal Progress in Neurobiology.

Epidemic loss of smell in COVID-19 makes research more urgent
In COVID-19, smell loss has become epidemic, and understanding the way odors affect our brains--memories, cognition and more--is more important than ever, Zelano noted.

"There is an urgent need to better understand the olfactory system in order to better understand the reason for COVID-related smell loss, diagnose the severity of the loss and to develop treatments," said first author Guangyu Zhou, research assistant professor of neurology at Northwestern. "Our study is an example of the basic research science that our understanding of smell, smell loss and future treatments is built on."

Below is a Q & A with Zelano about the importance of the sense of smell, olfactory research and the link to COVID-19.

Why do smells evoke such vivid memories?

"This has been an enduring mystery of human experience. Nearly everyone has been transported by a whiff of an odor to another time and place, an experience that sights or sounds rarely evoke. Yet, we haven't known why. The study found the offactory parts of the brain connect more strongly to the memory parts than other senses. This is a major piece of the puzzle, a striking finding in humans. We believe our results will help future research solve this mystery.'

How does smell research relate to COVID-19?

"The COVID-19 epidemic has brought a renewed focus and urgency to olfactory research. While our study doesn't address COVID smell loss directly, it does speak to an important aspect of why olfaction is important to our lives: smells are a profound part of memory, and odors connect us to especially important memories in our lives, often connected to loved ones. The smell of fresh chopped parsley may evoke a grandmother's cooking, or a whiff of a cigar may evoke a grandfather's presence. Odors connect us to important memories that transport us back to the presence of those people."

Loss of smell linked to depression and poor quality of life

"Loss of the sense of smell is underestimated in its impact. It has profound negative effects of quality of life, and many people underestimate that until they experience it. Smell loss is highly correlated with depression and poor quality of life.

"Most people who lose their smell to COVID regain it, but the time frame varies widely, and some have had what appears to be permanent loss. Understanding smell loss, in turn, requires research into the basic neural operations of this under-studied sensory system.

"Research like ours moves understanding of the olfactory parts of the brain forward, with the goal of providing the foundation for translational work on, ultimately, interventions."

Credit: 
Northwestern University

Lights on for silicon photonics

image: Scanning transmission electron microscopy (STEM) images of one of the Ge/SiGe heterostructures at different magnifications. The SiGe layers appear darker.

Image: 
Università Roma Tre, De Seta Group

When it comes to microelectronics, there is one chemical element like no other: silicon, the workhorse of the transistor technology that drives our information society. The countless electronic devices we use in everyday life are a testament to how very high volumes of silicon-based components can today be produced at very low cost. It seems natural, then, to use silicon also in other areas where the properties of semiconductors -- of which silicon is one -- are exploited technologically, and to explore ways to integrate different functionalities. Of particular interest in this context are diode lasers, such as those employed in barcode scanners or laser pointers, which are typically based on gallium arsenide (GaAs). Unfortunately though, the physical processes that create light in GaAs do not work so well in silicon. It therefore remains an outstanding, and long-standing, goal to find an alternative route to realizing a 'laser on silicon'.

Writing today in Applied Physics Letters, an international team led by Professors Giacomo Scalari and Jérôme Faist from the Institute for Quantum Electronics present an important step towards such a device. They report electroluminescence -- electrical light generation -- from a semiconductor structure based on silicon-germanium (SiGe), a material that is compatible with standard fabrication processes used for silicon devices. Moreover, the emission they observed is in the terahertz frequency band, which sits between those of microwave electronics and infrared optics, and is of high current interest with a view to a variety of applications.

Make silicon shine

The main reason why silicon cannot be used directly for building a laser following the GaAs template has to do with the different nature of their band gaps, which is direct in the latter but indirect in the former. In a nutshell, in GaAs electrons recombine with holes across the bandgap, producing light; in silicon, they produce heat. Laser action in silicon therefore requires another path. And exploring a fresh approach is what ETH doctoral researcher David Stark and his colleagues are doing. They work towards a silicon-based quantum cascade laser (QCL). QCLs achieve light emission not by electron-hole recombination across the bandgap, but by letting electrons tunnel through repeated stacks of precisely engineered semiconductor structures, during which process photons are emitted.

The QCL paradigm has been demonstrated in a number of materials -- for the first time in 1994 by a team including Jérôme Faist, then working at Bell Laboratories in the US -- but never in silicon-based ones, despite promising predictions. Turning these predictions into reality is the focus of an interdisciplinary project funded by the European Commission, bringing together a team of leading experts in growing highest-quality semiconductor materials (at the Università Roma Tre), characterising them (at the Leibniz-Institut für innovative Mikroelektronik in Frankfurt an der Oder) and fabricating them into devices (at the University of Glasgow). The ETH group of Scalari and Faist is responsible for performing the measurements on the devices, but also for the design of the laser, with numerical and theoretical support from partners in the company nextnano in Munich and at the Universities of Pisa and Rome.

From electroluminescence to lasing

With this bundled knowledge and expertise, the team designed and built devices with a unit structure made of SiGe and pure germanium (Ge), less than 100 nanometres in height, which repeats 51 times. From these heterostructures, fabricated with essentially atomic precision, Stark and co-workers detected electroluminescence, as predicted, with the spectral features of the emerging light agreeing well with calculations. Further confidence that the devices work as intended came from a comparison with a GaAs-based structure that was fabricated with identical device geometry. Whereas the emission from the Ge/SiGe structure is still significantly lower than that of its GaAs-based counterpart, these results clearly signal that the team is on the right track. The next step now will be to assemble similar Ge/SiGe structures according to a laser design the team has developed. The ultimate goal is to reach room-temperature operation of a silicon-based QCL.

Such an achievement would be significant in several respects. Not only would it, at long last, realize a laser on a silicon substrate, giving a boost to silicon photonics; the emission of the structure created by Stark et al. also lies in the terahertz region, for which compact light sources are currently widely missing. Silicon-based QCLs, with their potential versatility and reduced fabrication cost, could be a boon for the large-scale use of terahertz radiation in existing and new fields of application, from medical imaging to wireless communication.

Credit: 
ETH Zurich Department of Physics

Northern Hemisphere summers may last nearly half the year by 2100

image: Changes in average start dates and lengths of the four seasons in the Northern Hemisphere mid-latitudes for 1952, 2011 and 2100.

Image: 
Wang et al 2020/Geophysical Research Letters/AGU.

WASHINGTON--Without efforts to mitigate climate change, summers spanning nearly six months may become the new normal by 2100 in the Northern Hemisphere, according to a new study. The change would likely have far-reaching impacts on agriculture, human health and the environment, according to the study authors.

In the 1950s in the Northern Hemisphere, the four seasons arrived in a predictable and fairly even pattern. But climate change is now driving dramatic and irregular changes to the length and start dates of the seasons, which may become more extreme in the future under a business-as-usual climate scenario.

"Summers are getting longer and hotter while winters shorter and warmer due to global warming," said Yuping Guan, a physical oceanographer at the State Key Laboratory of Tropical Oceanography, South China Sea Institute of Oceanology, Chinese Academy of Sciences, and lead author of the new study in Geophysical Research Letters, AGU's journal for high-impact, short-format reports with immediate implications spanning all Earth and space sciences.

Guan was inspired to investigate changes to the seasonal cycle while mentoring an undergraduate student, co-author Jiamin Wang. "More often, I read some unseasonable weather reports, for example, false spring, or May snow, and the like," Guan said.

The researchers used historical daily climate data from 1952 to 2011 to measure changes in the four seasons' length and onset in the Northern Hemisphere. They defined the start of summer as the onset of temperatures in the hottest 25% during that time period, while winter began with temperatures in the coldest 25%. Next, the team used established climate change models to predict how seasons will shift in the future.
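
A minimal sketch of this percentile-based season definition, using synthetic temperatures in place of the study's historical climate data (the sinusoidal climatology, single-station setup and trend fit are illustrative assumptions, not the authors' exact method):

import numpy as np

# Hypothetical daily-mean temperatures for one Northern Hemisphere location:
# 60 years (1952-2011) x 365 days, built from an annual cycle plus noise.
rng = np.random.default_rng(0)
days = np.arange(365)
annual_cycle = 10 + 12 * np.sin(2 * np.pi * (days - 100) / 365)
temps = annual_cycle + rng.normal(0, 3, size=(60, 365))

# Summer = days in the hottest 25% of the full 1952-2011 distribution,
# winter = days in the coldest 25%, mirroring the study's definition.
summer_threshold = np.percentile(temps, 75)
winter_threshold = np.percentile(temps, 25)

summer_lengths = (temps >= summer_threshold).sum(axis=1)  # days per year
winter_lengths = (temps <= winter_threshold).sum(axis=1)

# Linear trend in summer length across the record, in days per decade.
trend = np.polyfit(np.arange(60), summer_lengths, 1)[0] * 10
print(f"summer length trend: {trend:+.1f} days per decade")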

The new study found that, on average, summer grew from 78 to 95 days between 1952 and 2011, while winter shrank from 76 to 73 days. Spring and autumn also contracted, from 124 to 115 days and from 87 to 82 days, respectively. Accordingly, spring and summer began earlier, while autumn and winter started later. The Mediterranean region and the Tibetan Plateau experienced the greatest changes to their seasonal cycles.

If these trends continue without any effort to mitigate climate change, the researchers predict that by 2100, winter will last less than two months, and the transitional spring and autumn seasons will shrink further as well.

"Numerous studies have already shown that the changing seasons cause significant environmental and health risks," Guan said. For example, birds are shifting their migration patterns and plants are emerging and flowering at different times. These phenological changes can create mismatches between animals and their food sources, disrupting ecological communities.

Seasonal changes can also wreak havoc on agriculture, especially when false springs or late snowstorms damage budding plants. And with longer growing seasons, humans will breathe in more allergy-causing pollen, and disease-carrying mosquitoes can expand their range northward.

Going to extremes

This shift in the seasons may result in more severe weather events, said Congwen Zhu, a monsoon researcher at the State Key Laboratory of Severe Weather and Institute of Climate System, Chinese Academy of Meteorological Sciences, Beijing, who was not involved in the new study.

"A hotter and longer summer will suffer more frequent and intensified high-temperature events - heatwaves and wildfires," Zhu said. Additionally, warmer, shorter winters may cause instability that leads to cold surges and winter storms, much like the recent snowstorms in Texas and Israel, he said.

"This is a good overarching starting point for understanding the implications of seasonal change," said Scott Sheridan, a climate scientist at Kent State University who was not part of the new study.

It is difficult to conceptualize a 2- or 5-degree average temperature increase, he said, but "I think realizing that these changes will force potentially dramatic shifts in seasons probably has a much greater impact on how you perceive what climate change is doing."

Credit: 
American Geophysical Union

Someone to watch over AI and keep it honest - and it's not the public!

The public doesn't need to know how artificial intelligence works to trust it. They just need to know that someone with the necessary skillset is examining AI and has the authority to mete out sanctions if it causes or is likely to cause harm.

Dr Bran Knowles, a senior lecturer in data science at Lancaster University, says: "I'm certain that the public are incapable of determining the trustworthiness of individual AIs... but we don't need them to do this. It's not their responsibility to keep AI honest."

Dr Knowles presents the research paper 'The Sanction of Authority: Promoting Public Trust in AI' on March 8 at the ACM Conference on Fairness, Accountability and Transparency (ACM FAccT).

The paper is co-authored by John T. Richards, of IBM's T.J. Watson Research Center, Yorktown Heights, New York.

The general public are, the paper notes, often distrustful of AI, which stems both from the way AI has been portrayed over the years and from a growing awareness that there is little meaningful oversight of it.

The authors argue that greater transparency and more accessible explanations of how AI systems work, perceived to be a means of increasing trust, do not address the public's concerns.

A 'regulatory ecosystem', they say, is the only way that AI will be meaningfully accountable to the public, earning their trust.

"The public do not routinely concern themselves with the trustworthiness of food, aviation, and pharmaceuticals because they trust there is a system which regulates these things and punishes any breach of safety protocols," says Dr Richards.

And, adds Dr Knowles: "Rather than asking that the public gain skills to make informed decisions about which AIs are worthy of their trust, the public needs the same guarantees that any AI they might encounter is not going to cause them harm."

She stresses the critical role of AI documentation in enabling this trustworthy regulatory ecosystem. As an example, the paper discusses work by IBM on AI Factsheets, documentation designed to capture key facts regarding an AI's development and testing.

But, while such documentation can provide information needed by internal auditors and external regulators to assess compliance with emerging frameworks for trustworthy AI, Dr Knowles cautions against relying on it to directly foster public trust.

"If we fail to recognise that the burden to oversee trustworthiness of AI must lie with highly skilled regulators, then there's a good chance that the future of AI documentation is yet another terms and conditions-style consent mechanism - something no one really reads or understands," she says.

The paper calls for AI documentation to be properly understood as a means to empower specialists to assess trustworthiness.

"AI has material consequences in our world which affect real people; and we need genuine accountability to ensure that the AI that pervades our world is helping to make that world better," says Dr Knowles.

ACM FAccT is a computer science conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.

Credit: 
Lancaster University

Pay-off when solar cells can keep their cool

Lowering the operating temperature of solar panels by just a few degrees can dramatically increase the electricity they generate over their lifetime, KAUST researchers have shown. The hotter a panel gets, the lower its solar power conversion efficiency (PCE) and the faster it will degrade and fail. Finding ways to keep solar panels cool could significantly improve the return on investment of solar-power systems.

The long-standing focus of photovoltaics (PV) research has been to improve solar modules' PCE and make solar power more cost-competitive than nonrenewable power generation. The higher the PCE, the better the PV system's financial payback over its lifetime or the lower its "levelized cost of energy" (LCOE).
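
For reference, the standard LCOE definition divides discounted lifetime costs by discounted lifetime energy yield (the KAUST study may use a variant that folds in temperature-dependent degradation):

\[
\mathrm{LCOE} = \frac{C_0 + \sum_{t=1}^{N} O_t/(1+r)^t}{\sum_{t=1}^{N} E_t/(1+r)^t}
\]

where C_0 is the upfront system cost, O_t the operating cost and E_t the energy generated in year t, r the discount rate, and N the system lifetime in years. Higher module temperature lowers E_t through reduced PCE and shortens N through faster failure, which is why a few degrees of cooling can rival a hard-won efficiency gain.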

Other factors can skew these LCOE values. Capturing sunlight is inherently hot work. "All solar cells generate heat, which can lower the electrical output and shorten the module lifetime," says Lujia Xu, a postdoc in Stefaan De Wolf's team. Panels can regularly reach 60-65 degrees Celsius, but heat's impact on LCOE rarely receives much consideration.

Now, Xu, De Wolf and their colleagues have developed a metric that directly compares the LCOE gains from reducing module temperature with the LCOE gains from improving module efficiency. Under typical operating conditions, they showed, the LCOE improvement from a hard-won one percent gain in PCE could equally be achieved by lowering the module temperature by as little as 3 degrees Celsius.

The key factor was that hotter panels fail far more rapidly. "A 4 degrees Celsius decrease in module temperature would improve the module time to failure by more than 50 percent, and this improvement increases to over 100 percent with a 7 degrees Celsius reduction," says Xu.

The team then developed a model to first predict the module temperature and subsequently find ways to lower it. The most effective approach was to locate modules in a windy environment with proper mounting to enable effective heat transfer to the surrounding environment. But they also showed they could achieve significant gains by making modifications at the module level. The EVA polymer encapsulant used to seal the module strongly absorbs heat from sunlight. "Replacing EVA with a more transparent material, or even adapting an encapsulant-free module technology, would be beneficial," Xu says.

"Our results show that researchers should pay more attention to module temperature," says De Wolf. "Because crystalline silicon solar-cell efficiency is approaching the practical upper limit, it is timely to consider other ways to decrease the LCOE, which might be even more significant than further marginal gains in cell efficiency."

Credit: 
King Abdullah University of Science & Technology (KAUST)

New Lancet series shows mixed progress on maternal and child undernutrition in last decade

image: International Center for Equity in Health

Image: 
International Center for Equity in Health

The Lancet today published the latest Series on Maternal and Child Undernutrition Progress, including three new papers that build upon findings from the previous 2008 and 2013 Series, which established an evidence-based global agenda for tackling undernutrition over the past decade. The papers conclude that despite modest progress in some areas, maternal and child undernutrition remains a major global health concern, particularly as recent gains may be offset by the COVID-19 pandemic. The Series reiterates that previously highlighted interventions continue to be effective at reducing stunting, micronutrient deficiencies, and child deaths and emphasizes the importance of delivering these nutrition interventions within the first 1,000 days of life. However, despite this evidence, program delivery has lagged behind the science and further financing is needed to scale up proven interventions.

The Series finds that the prevalence of childhood stunting fell in low-income countries from 47.1 percent to 36.6 percent between 2000 and 2015, but less so in middle-income countries, where rates fell from 23.8 percent to 18.0 percent. Yet the world is falling short of achieving the World Health Assembly Nutrition Target of reducing stunting by 50 percent by 2025. By comparison, there was little progress in the percentage of children who are wasted in both middle- and low-income countries. A new finding also shows that nearly 5 (4.7) percent of children are simultaneously affected by both stunting and wasting, a condition associated with a 4.8-times increase in mortality. The incidence of stunting and wasting is highest in the first 6 months of life, but both are also present at birth. For maternal nutrition, although the prevalence of undernutrition (low body mass index) has fallen, anemia and short stature remain very high.

"While there have been small improvements, specifically in middle-income countries, progress remains too slow on child wasting and stunting," said Dr. Victora of the International Center for Equity in Health, Federal University of Pelotas in Brazil. "The evidence also reinforces the need to focus on delivering interventions within the first 1,000 days, and to prioritize maternal nutrition for women's own health as well as the health of their children."

Since the 2013 Series, evidence on the efficacy of 10 recommended interventions has increased, along with evidence of newer interventions. New evidence strongly supports the use of preventive small-quantity lipid-based nutrient supplementation (SQ-LNS) for reducing childhood stunting, wasting, and underweight. It also supports the scale up of antenatal multiple micronutrient supplementation for preventing adverse pregnancy and birth outcomes and improving maternal health.

Based on this new evidence, the Series presents a new framework for categorizing nutrition actions into direct and indirect interventions as well as health and non-health-care sector interventions. This framework highlights that evidence-based interventions continue to be a combination of direct interventions (e.g., micronutrient supplementation and breastfeeding counselling), and indirect interventions (e.g., family planning and reproductive health services; cash transfer programs; and water, sanitation, and hygiene promotion) to address the underlying determinants of malnutrition. Nutritional interventions delivered within and outside the health-care sector are equally crucial for preventing and managing malnutrition.

"Our evidence supports the continued effectiveness of all the interventions from the 2013 Series. New evidence further supports the scale up of multiple micronutrient supplements that include iron and folic acid for pregnant women instead of iron-folic acid alone, and the inclusion of SQ-LNS for children, which brings us to 11 core interventions," said Dr. Emily Keats of the Centre for Global Child Health at the Hospital for Sick Children in Toronto, Canada. "We now need to focus on improving intervention coverage, especially for the most vulnerable, through multi-sectoral actions," added Dr. Jai Das of the Center of Excellence in Women and Child Health at Aga Khan University in Karachi, Pakistan.

An additional paper finds that coverage of direct nutrition interventions showed little improvement over the last decade and that renewed commitment, new insights from implementation research, and fast-tracked funding to increase coverage and improve quality of service delivery are desperately needed. It also highlights how both the evidence base for and the implementation of interventions spanning nutrition, health, food systems, social protection, and water, sanitation, and hygiene have evolved since the 2013 Lancet Series.

The authors conclude the Series with a global call to action to recommit to the unfinished agenda of maternal and child undernutrition.

"Governments and donors must recommit to the unfinished agenda of maternal and child undernutrition with sustained and consistent financial commitments," said Dr. Zulfiqar A. Bhutta from the Centre for Global Child Health, Toronto and Aga Khan University, who is the Series coordinator and senior author of the interventions paper. "Governments must expand coverage and improve quality of direct interventions-especially in the first 1,000 days; identify and address the immediate and underlying determinants of undernutrition through indirect interventions; build and sustain a political and regulatory environment for nutrition action; and invest in monitoring and learning systems at national and subnational levels."

"The COVID-19 pandemic continues to cripple health systems, exacerbate food insecurity, and threatens to reverse decades of progress," said Dr. Rebecca Heidkamp of the Department of International Health at the Johns Hopkins Bloomberg School of Public Health. "For both the pandemic response and the rapidly approaching World Health Assembly 2025 global nutrition target deadlines, nutrition actors at all levels must respond to the call to action to bring together resources, leadership, and coordination-along with data and evidence--to address the worldwide burden of undernutrition."

In an accompanying commentary to the Series, Dr. Meera Shekar, Global Lead for Nutrition at The World Bank and co-authors note: "Progress on delivering what is known to work is unacceptably slow. To change this dynamic, we strongly believe that beyond prioritizing what to do, countries need much better guidance on how to do it at scale, with insights into how much financing is needed and how best to allocate resources to maximize impact."

In the last decade, nutrition has risen on the global agenda, spurred in part by the findings from the 2008 and 2013 Series. This new Series comes at a critical time, as 2021 has been deemed the Nutrition for Growth (N4G) Year of Action--which will culminate in the UN Food Systems Summit in September 2021 and the Tokyo N4G Summit in December 2021.

Credit: 
GMMB

One size doesn't fit all when it comes to products for preventing HIV from anal sex

The initial insights from the study, aptly named DESIRE (Developing and Evaluating Short-acting Innovations for Rectal Use), are being reported on March 6 in a Science Spotlight session at the virtual meeting of the Conference on Retroviruses and Opportunistic Infections (CROI), March 6-10. The presentation will be available for registered participants and media to view throughout the meeting.

Conducted by the National Institutes of Health (NIH)-funded Microbicide Trials Network (MTN), DESIRE focused on potential delivery methods for rectal microbicides - topical products being developed and tested to reduce a person's risk of acquiring HIV and other sexually transmitted infections from anal sex. MTN researchers are particularly interested in on-demand options - used around the time of sex - and behaviorally congruent options that deliver anti-HIV drugs via products people may already be using as part of their sex routine.

"DESIRE stands out as a unique study because we took a step back and said, 'Let's figure out the modality without automatically pairing it with a drug'," explained José A. Bauermeister, Ph.D., M.P.H., study protocol chair and Albert M. Greenfield Professor of Human Relations at the University of Pennsylvania. "It gave us the ability to manipulate the delivery method without having to worry about how reactions to a particular drug might confound the results. We also had people trying out these methods in their own lives, and only then asked them to weigh the attributes of each." As such, he said, participants weren't making choices based on theoretical concepts, but instead using real experiences to guide their preferences.

Launched in 2019, DESIRE, also referred to as MTN-035, is the first study to explore multiple placebo methods for delivering a rectal microbicide. The three delivery methods assessed included a fast-dissolving placebo insert approximately two-thirds of an inch in length, a placebo suppository approximately an inch and a half in length, and a commercially available 120 mL douche bottle that participants were instructed to fill with clean tap or bottled water prior to use.

The study enrolled 217 participants who used each rectal delivery method for a month at a time, with a week-long break in between. Study participants, whose average age was 25 years, included cisgender men who have sex with men (79 percent) as well as transgender women (19 percent) and transgender men (2 percent) who have sex with men. The participants, based in Malawi, Peru, South Africa, Thailand and the United States (Birmingham, Pittsburgh and San Francisco), were instructed to use each method between 30 minutes and 3 hours prior to engaging in receptive anal sex, or once a week if they had not engaged in receptive anal sex in a given week.

To evaluate the acceptability of each method, participants were asked to complete a four-item survey once a week by text message (SMS) in their preferred language. After a month of using a particular delivery method, they were asked to complete a computer-assisted interview, with a subset of participants also completing an in-depth interview led by one of the study researchers.

At their final study visit, participants ranked attributes of a hypothetical product for preventing HIV from anal sex, and researchers used conjoint analysis - a market research approach that measures the value consumers place on features of a product or service - to calculate the percentage of weight participants gave to each attribute. They found that efficacy was the strongest determinant of participants' stated modality choice at 30 percent, followed by delivery method (18 percent) and side effects (17 percent). Other factors - timing of use before sex, duration of protection, frequency of use, and the need for a prescription - were not weighted as having as much importance by study participants. Through further analysis, researchers identified the participants' most preferred package based on the product features: A douche used 30 minutes before sex with 95 percent efficacy that offers three to five days of protection. This ideal product would also only need to be used once a week, have no side effects, and be available over the counter.
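
In conjoint analysis, each attribute's importance is commonly derived from the spread of its estimated part-worth utilities. A minimal sketch of that calculation follows; the attributes, levels and ratings below are made-up placeholders, not the DESIRE study's design or results.

import numpy as np
from itertools import product

# Hypothetical attributes and levels for a prevention-product profile.
attributes = {
    "efficacy": ["50%", "95%"],
    "method": ["insert", "suppository", "douche"],
    "side_effects": ["none", "mild"],
}
profiles = list(product(*attributes.values()))  # 2 x 3 x 2 = 12 profiles

def encode(profile):
    """Dummy-code a profile, dropping each attribute's first level as reference."""
    row = []
    for levels, level in zip(attributes.values(), profile):
        row += [1.0 if level == l else 0.0 for l in levels[1:]]
    return row

X = np.column_stack([np.ones(len(profiles)), [encode(p) for p in profiles]])
# Hypothetical mean ratings of the 12 profiles (stand-ins for participant data).
y = np.array([3.1, 4.0, 3.4, 4.6, 3.8, 4.9, 2.5, 3.2, 2.9, 3.9, 3.3, 4.4])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # part-worth utilities

# Importance = range of an attribute's part-worths, normalized across attributes.
idx, ranges = 1, {}
for name, levels in attributes.items():
    worths = np.array([0.0, *beta[idx: idx + len(levels) - 1]])
    ranges[name] = worths.max() - worths.min()
    idx += len(levels) - 1
total = sum(ranges.values())
for name, r in ranges.items():
    print(f"{name}: {100 * r / total:.0f}% of decision weight")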

While this market research approach offered insights into the most common package of features, participants also underscored how each of the modalities could offer unique advantages in their daily lives.

"As you might expect, the context of the participants' lives informed their product choice," said Dr. Bauermeister. "When asked to rank the most preferred product attributes, they based their answers on their own experiences and the tradeoffs they might make in real-life situations." In some instances, he explained, discretion might be important, so a small tablet in the form of a fast-dissolving rectal insert that could be carried in your pocket might be the best option. At other times, hygiene may be a priority and a douche would be preferred. As a pre-lubricated product, even the suppository had unexpected advantages with some participants commenting about it's potential as an alternative to sexual lubricant.

"The lesson we learned from MTN-035 is that even though the douche was preferred overall, we shouldn't assume it's right for everyone every time they plan to have sex," said Dr. Bauermeister. "Depending on who you are, what you do and where you live, it may not be a viable, or even a desirable, option for HIV prevention. At the end of the day, people could see all three of these modalities fitting into their lives."

Researchers like Dr. Bauermeister are hopeful these results will inform the development of rectal microbicides moving forward, and lead to expanded choices in preventing HIV from anal sex.

Credit: 
Microbicide Trials Network

The amazing promise of artificial intelligence in health care

image: A team of doctors led by UVA Health's James H. Harrison Jr., MD, PhD, has given us a glimpse of tomorrow in a new article on the current state and future use of artificial intelligence (AI) in the field of pathology.

Image: 
UVA Health

Artificial intelligence can already scan images of the eye to assess patients for diabetic retinopathy, a leading cause of vision loss, and to find evidence of strokes on brain CT scans. But what does the future hold for this emerging technology? How will it change how doctors diagnose disease, and how will it improve the care patients receive?

A team of doctors led by UVA Health's James H. Harrison Jr., MD, PhD, has given us a glimpse of tomorrow in a new article on the current state and future use of artificial intelligence (AI) in the field of pathology. Harrison and other members of the College of American Pathologists' Machine Learning Workgroup have spent the last two years evaluating the potential of AI and machine learning, assessing its current role in diagnostic testing and outlining what is needed to meet its potential in the not-too-distant future. And that potential is huge, they report.

In their article, the authors describe some amazing possibilities - from an "augmented reality" microscope that automatically identifies and labels important aspects in the field of view in real time to complete diagnostic image classification systems. That type of thing has, until recently, been the domain of Tony Stark and others in sci-fi movies.

In addition to predicting what the future may hold, the authors describe potential obstacles and make important recommendations for how the health-care field can best capitalize on the technology's awesome potential.

"AI and especially machine-learning algorithms introduce a fundamentally new kind of data analysis into the health-care workflow," the authors write. "By virtue of their influence on pathologists and other physicians in selection of diagnoses and treatments, the outputs of these algorithms will critically impact patient care."

Artificial Intelligence in Pathology

Right now, pathology and other health-care applications of artificial intelligence are in their infancy. The federal Food and Drug Administration has approved only a few AI devices for pathology use, mostly for classifying cells in blood and body fluids and for screening cervical tissue, the authors report. But in research labs, scientists are using machine learning to classify and grade lung and prostate cancer, predict outcomes in lung and brain cancers, measure breast cancer proliferation, predict bladder cancer recurrence and much more. The authors describe what they're seeing in research publications and early prototypes as "tantalizing."

"Artificial intelligence systems, especially machine learning systems that perform complex image classification, are expected to have significant impact in two areas in which images particularly important, pathology and radiology," said Harrison, director of Clinical Laboratory Informatics at UVA Health and a member of UVA's Department of Pathology. "Pathologists will likely need to choose, verify, deploy, use and monitor AI systems in the future, and therefore they need to learn the strengths and weaknesses of these types of systems and techniques for their effective management."

In addition to projections of future applications, Harrison and his colleagues provide an overview of existing AI algorithms and discuss the development and validation of systems that use AI. Their review also discusses potential concerns about clinical implementation of the technologies, emphasizing the importance of careful validation and performance monitoring to ensure AI is used safely and effectively. The article suggests potential regulations that may be needed along the way. "Creation of a regulatory framework with defined best practices for accomplishing these goals is a necessary step for successful dissemination of machine learning in pathology and medicine," the authors write.

The article does not suggest we'll be receiving care solely from robot doctors anytime soon. Instead, it predicts that the best outcomes in the near future will come from a careful combination of human and machine capabilities. The authors agree with the American Medical Association in describing the goal as "augmented intelligence" that supplements and enhances, rather than replaces, human doctors' judgment and wisdom.

"Our article was written to introduce pathologists and other clinicians to the basics of machine learning and artificial intelligence, including how the systems work and what will be needed to manage them successfully," Harrison said. "As we begin to apply these tools more broadly, doctors will need a practical understanding of when to rely on them, when to question them and how to keep them working well."

Credit: 
University of Virginia Health System

Beauty is in the brain: AI reads brain data, generates personally attractive images

image: A computer created facial images that appealed to individual preferences.

Image: 
Cognitive computing research group

Researchers have succeeded in making an AI understand our subjective notions of what makes faces attractive. The device demonstrated this knowledge by its ability to create new portraits on its own that were tailored to be found personally attractive by individuals. The results can be utilised, for example, in modelling preferences and decision-making as well as potentially identifying unconscious attitudes.

Researchers at the University of Helsinki and University of Copenhagen investigated whether a computer would be able to identify the facial features we consider attractive and, based on this, create new images matching our criteria. The researchers used artificial intelligence to interpret brain signals and combined the resulting brain-computer interface with a generative model of artificial faces. This enabled the computer to create facial images that appealed to individual preferences.

"In our previous studies, we designed models that could identify and control simple portrait features, such as hair colour and emotion. However, people largely agree on who is blond and who smiles. Attractiveness is a more challenging subject of study, as it is associated with cultural and psychological factors that likely play unconscious roles in our individual preferences. Indeed, we often find it very hard to explain what it is exactly that makes something, or someone, beautiful: Beauty is in the eye of the beholder," says Senior Researcher and Docent Michiel Spapé from the Department of Psychology and Logopedics, University of Helsinki.

The study, which combines computer science and psychology, was published in February in the journal IEEE Transactions on Affective Computing.

Preferences exposed by the brain

Initially, the researchers gave a generative adversarial neural network (GAN) the task of creating hundreds of artificial portraits. The images were shown, one at a time, to 30 volunteers who were asked to pay attention to faces they found attractive while their brain responses were recorded via electroencephalography (EEG).

"It worked a bit like the dating app Tinder: the participants 'swiped right' when coming across an attractive face. Here, however, they did not have to do anything but look at the images. We measured their immediate brain response to the images," Spapé explains.

The researchers analysed the EEG data with machine learning techniques, connecting individual EEG data through a brain-computer interface to a generative neural network.

"A brain-computer interface such as this is able to interpret users' opinions on the attractiveness of a range of images. By interpreting their views, the AI model interpreting brain responses and the generative neural network modelling the face images can together produce an entirely new face image by combining what a particular person finds attractive," says Academy Research Fellow and Associate Professor Tuukka Ruotsalo, who heads the project.

To test the validity of their modelling, the researchers generated new portraits for each participant, predicting they would find them personally attractive. Testing them in a double-blind procedure against matched controls, they found that the new images matched the preferences of the subjects with an accuracy of over 80%.

"The study demonstrates that we are capable of generating images that match personal preference by connecting an artificial neural network to brain responses. Succeeding in assessing attractiveness is especially significant, as this is such a poignant, psychological property of the stimuli. Computer vision has thus far been very successful at categorising images based on objective patterns. By bringing in brain responses to the mix, we show it is possible to detect and generate images based on psychological properties, like personal taste," Spapé explains.

Potential for exposing unconscious attitudes

Ultimately, the study may benefit society by advancing the capacity for computers to learn and increasingly understand subjective preferences, through interaction between AI solutions and brain-computer interfaces.

"If this is possible in something that is as personal and subjective as attractiveness, we may also be able to look into other cognitive functions such as perception and decision-making. Potentially, we might gear the device towards identifying stereotypes or implicit bias and better understand individual differences," says Spapé.

Credit: 
University of Helsinki

Coastal changes worsen nuisance flooding on many U.S. shorelines, study finds

ORLANDO, March 5, 2021 - Nuisance flooding has increased on U.S. coasts in recent decades due to sea level rise, and new research co-authored by the University of Central Florida has uncovered an additional reason for its growing frequency.

In a study appearing today in the journal Science Advances, researchers show that higher local tide ranges, most likely the result of human alterations to coastal areas and estuaries, have increased the number of nuisance flooding days in many coastal locations in the U.S.

Coastal nuisance flooding is minor flooding from the sea that causes problems such as flooded roads and overloaded stormwater systems. These events are major inconveniences for people and can provide habitat for bacteria and mosquitoes.

Changes to local tide range often occur in coastal areas and estuaries when channels are dredged, land is reclaimed, development occurs, or river flows change. This can cause tide ranges, defined as the height difference between high tide and low tide, to increase in some areas and decrease in others.

The study found that nearly half of the 40 U.S. National Oceanic and Atmospheric Administration tide gauge locations dotting the continental U.S. coastlines had more nuisance flooding days because of higher local tide ranges.

"It's the first time that the effects of tidal changes on nuisance flooding were quantified, and the approach is very robust as it is based purely on observational data and covers the entire coastline of the U.S. mainland," says study co-author Thomas Wahl, an assistant professor in UCF's Department of Civil, Environmental and Construction Engineering.

The researchers performed the study using tide gauge records at 40 locations along the Atlantic, Gulf and Pacific coasts, each spanning at least 70 years. They compared water levels at each location under two scenarios - one in which the tidal range never changed and one in which it did.

This allowed them to see how often nuisance floods occurred or were prevented over time because of tidal changes.
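A toy version of that comparison can be sketched in a few lines of code. Everything below is synthetic and simplified (a single semidiurnal tide whose amplitude grows 10% over the record, linear sea level rise, a made-up flooding threshold); the study's counterfactual was built from observational data, not a rescaled sine wave, but the logic of counting threshold exceedances under "tide range changed" versus "tide range frozen" is the same.

```python
# Illustrative sketch of the two-scenario comparison described above.
# All numbers are synthetic; this is not the study's methodology.
import numpy as np

rng = np.random.default_rng(42)
hours = 70 * 365 * 24                     # ~70 years of hourly data
t = np.arange(hours)

# Semidiurnal tide (M2 period ~12.42 h) whose amplitude grows 10% over
# the record, plus slow sea level rise and weather-driven noise.
amplitude = 0.6 * (1 + 0.10 * t / hours)  # metres
tide = amplitude * np.sin(2 * np.pi * t / 12.42)
sea_level_rise = 0.003 * t / (365 * 24)   # ~3 mm per year
noise = rng.normal(0, 0.15, hours)
observed = tide + sea_level_rise + noise

# Counterfactual: same record, but tide range frozen at its initial value.
counterfactual = 0.6 * np.sin(2 * np.pi * t / 12.42) + sea_level_rise + noise

THRESHOLD = 0.55                          # synthetic minor-flood level (m)

def flood_days(levels):
    """Count days on which any hourly water level exceeds the threshold."""
    daily_max = levels.reshape(-1, 24).max(axis=1)
    return int((daily_max > THRESHOLD).sum())

extra = flood_days(observed) - flood_days(counterfactual)
print("Extra nuisance-flood days attributable to tide-range change:", extra)
```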

They found that nuisance flooding increased because of tidal changes in about half of the locations, decreased in about a quarter, and was unchanged in the remaining quarter.

For example, in 2019, Cedar Key, Florida, received about 23 additional nuisance flooding days because of increased tidal range, while Washington, D.C., had about 42 fewer due to decreased tidal range.

"Seeing how many nuisance flooding events occurred in the past and are happening today simply because of tidal changes should be motivation for us to keep alterations to sensitive estuarine systems at a minimum as to not further exacerbate the problem, which we already face because of sea level rise," Wahl says. "We should at least be aware of these potentially negative impacts in the planning phase of alteration projects, and it might even be possible to reverse some of the negative impacts from past decisions."

"While a few individual instances of these minor flooding events do not cause too many impacts, the cumulative impacts of frequent events can become very large," Wahl says. "Hence, understanding what drives the changes in nuisance flooding is very important."

The study's lead author, Sida Li, is a visiting student in UCF's Department of Civil, Environmental and Construction Engineering and the National Center for Integrated Coastal Research.

Credit: 
University of Central Florida

Fine particulate matter from wildfire smoke more harmful than pollution from other sources

Researchers at Scripps Institution of Oceanography at UC San Diego, examining 14 years of hospital admissions data, conclude that the fine particles in wildfire smoke can be several times more harmful to human respiratory health than particulate matter from other sources such as car exhaust. While this distinction has been previously identified in laboratory experiments, the new study confirms it at the population level.

The new research, focused on Southern California, reveals the risks of tiny airborne particles with diameters of up to 2.5 microns, about one-twentieth the width of a human hair. These particles - termed PM2.5 - are the main component of wildfire smoke and can penetrate the human respiratory tract, enter the bloodstream and impair vital organs.

The study, by researchers from Scripps Institution of Oceanography and the Herbert Wertheim School of Public Health and Human Longevity Science at UC San Diego, appears March 5 in the journal Nature Communications. It was funded by the University of California Office of the President, the National Oceanic and Atmospheric Administration (NOAA), the Alzheimer's Disease Resource Center for Advancing Minority Aging Research at UC San Diego and the Office of Environmental Health Hazard Assessment.

To isolate wildfire-produced PM2.5 from other sources of particulate pollution, the researchers defined exposure to wildfire PM2.5 as exposure to strong Santa Ana winds with fire upwind. A second measure of exposure involved smoke plume data from NOAA's Hazard Mapping System.

A 10 microgram-per-cubic-meter increase in PM2.5 attributed to sources other than wildfire smoke was estimated to increase respiratory hospital admissions by 1 percent. The same increase, when attributed to wildfire smoke, was associated with a 1.3 to 10 percent increase in respiratory admissions.
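To put those effect sizes side by side, here is a back-of-envelope calculation. The baseline admissions figure is hypothetical, chosen only to make the percentages concrete; the percentage effects are the ones reported above.

```python
# Back-of-envelope comparison of the reported effect sizes.
# The baseline figure is hypothetical; the percentages are from the study.
baseline_daily_admissions = 1000         # hypothetical regional baseline

non_wildfire_effect = 0.01               # 1% per 10 ug/m3 of non-wildfire PM2.5
wildfire_effect_low, wildfire_effect_high = 0.013, 0.10   # 1.3% to 10%

print("Non-wildfire PM2.5:",
      round(baseline_daily_admissions * non_wildfire_effect),
      "extra respiratory admissions")
print("Wildfire PM2.5:",
      round(baseline_daily_admissions * wildfire_effect_low), "to",
      round(baseline_daily_admissions * wildfire_effect_high),
      "extra respiratory admissions")
```

Under these assumptions, the same 10-microgram rise in PM2.5 yields roughly 10 extra admissions when the particles come from ordinary pollution, but 13 to 100 extra admissions when they come from wildfire smoke.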

Corresponding author Rosana Aguilera said the research suggests that assuming all particles of a certain size are equally toxic may be inaccurate and that the effects of wildfires - even at a distance - represent a pressing human health concern.

"There is a daily threshold for the amount of PM2.5 in the air that is considered acceptable by the county and the Environmental Protection Agency (EPA)," said Aguilera, a postdoctoral scholar at Scripps Institution of Oceanography. "The problem with this standard is that it doesn't account for different sources of emission of PM2.5."

There is not yet a consensus as to why wildfire PM2.5 is more harmful to humans than particulate pollution from other sources. If PM2.5 from wildfires is more dangerous to human lungs than PM2.5 from ambient air pollution, the threshold for what are considered safe levels should reflect the source of the particles, especially during the expanding wildfire season. This is especially relevant in California and other regions where most PM2.5 is expected to come from wildfires.

In Southern California, the Santa Ana winds drive the most severe wildfires and tend to blow wildfire smoke towards populated coastal regions. Climate change delays the start of the region's rainy season, which pushes wildfire season closer to the peak of the Santa Ana winds in early winter. Additionally, as populations grow in wildland urban interface areas, the risks of ignitions and impacts of wildfire and smoke increase for those who live inland and downwind.

Coauthor Tom Corringham points to the implications for climate change: "As conditions in Southern California become hotter and drier, we expect to see increased wildfire activity. This study demonstrates that the harm due to wildfire smoke may be greater than previously thought, bolstering the argument for early wildfire detection systems and efforts to mitigate climate change."

Credit: 
University of California - San Diego