Model could help determine quarantine measures needed to reduce Covid-19's spread

As Covid-19 infections soar across the U.S., some states are tightening restrictions and reinstituting quarantine measures to slow the virus' spread. A model developed by MIT researchers shows a direct link between the number of people who become infected and how effectively a state maintains its quarantine measures.

The researchers described their model in a paper published in Cell Patterns in November, showing that the system could recapitulate the effects that quarantine measures had on viral spread in countries around the world. In their next study, recently posted to the preprint server medRxiv, they drilled into data from the United States last spring and summer. That earlier surge in infections, they found, was strongly related to a drop in "quarantine strength" -- a measure the team defines as the ability to keep infected individuals from infecting others.

The latest study focuses on last spring and early summer, when the southern and west-central United States saw a precipitous rise in infections as states in those regions reopened and relaxed quarantine measures. The researchers used their model to calculate the quarantine strength in these states, many of which were early to reopen following initial lockdowns in the spring.

If these states had not reopened so early, or had reopened but strictly enforced measures such as mask-wearing and social distancing, the model calculates that more than 40 percent of infections could have been avoided in all states that the researchers considered. In particular, the study estimates, if Texas and Florida had maintained stricter quarantine measures, more than 100,000 infections could have been avoided in each of those states.

"If you look at these numbers, simple actions on an individual level can lead to huge reductions in the number of infections and can massively influence the global statistics of this pandemic," says lead author Raj Dandekar, a graduate student in MIT's Department of Civil and Environmental Engineering.

As the country battles a winter wave of new infections, and states are once again tightening restrictions, the team hopes the model can help policymakers determine the level of quarantine measures to put in place.

"What I think we have learned quantitatively is, jumping around from hyper-quarantine to no quarantine and back to hyper-quarantine definitely doesn't work," says co-author Christopher Rackauckas, an applied mathematics instructor at MIT. "Instead, good consistent application of policy would have been a much more effective tool."

The new paper's MIT co-authors also include undergraduate Emma Wang and professor of mechanical engineering George Barbastathis.

Strength learning

The team's model is a modification of a standard SIR model, an epidemiological model that is used to predict the way a disease spreads, based on the number of people who are either "susceptible," "infectious," or "recovered." Dandekar and his colleagues enhanced an SIR model with a neural network that they trained to process real Covid-19 data.

The machine-learning-enhanced model learns to identify patterns in data of infected and recovered cases, and from these data, it calculates the number of infected individuals who are not transmitting the virus to others (presumably because the infected individuals are following some sort of quarantining measures). This value is what the researchers label as "quarantine strength," which reflects how effective a region is in quarantining an infected individual. The model can process data over time to see how a region's quarantine strength evolves.
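The structure of such a model can be sketched in a few lines. Below is a purely illustrative Euler-stepped SIR model augmented with a quarantined compartment; the constant rate `q` stands in for the quarantine strength that the MIT team actually learns from data with a neural network, and all parameter values here are invented for illustration:

```python
def qsir_step(s, i, r, t_q, beta, gamma, q, dt):
    """One Euler step of an SIR model with an added quarantined
    compartment t_q. The rate q plays the role of 'quarantine
    strength': how quickly infected people stop transmitting."""
    ds = -beta * s * i                      # new infections leave S
    di = beta * s * i - gamma * i - q * i   # I grows by infection, shrinks by recovery and quarantine
    dr = gamma * i                          # recoveries
    dq = q * i                              # effectively quarantined individuals
    return s + ds * dt, i + di * dt, r + dr * dt, t_q + dq * dt

def simulate(beta=0.3, gamma=0.1, q=0.15, days=160, dt=0.1):
    """Run the toy epidemic with population fractions in each
    compartment; returns the final (S, I, R, T) state."""
    s, i, r, t_q = 0.999, 0.001, 0.0, 0.0
    for _ in range(int(days / dt)):
        s, i, r, t_q = qsir_step(s, i, r, t_q, beta, gamma, q, dt)
    return s, i, r, t_q
```

Comparing `simulate(q=0.05)` with `simulate(q=0.25)` shows cumulative infections (1 minus the final susceptible fraction) falling sharply as quarantine strength rises, which is the qualitative effect the researchers quantify with real data.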

The researchers developed the model in early February and have since applied it to Covid-19 data from more than 70 countries, finding that it has accurately simulated the on-the-ground quarantine situation in European, South American, and Asian countries that were initially hard-hit by the virus.

"When we look at these countries to see when quarantines were instituted, and we compare that with results for the trained quarantine strength signal, we see a very strong correlation," Rackauckas says. "The quarantine strength in our model changes a day or two after policies are instituted, among all countries. Those results validated the model."

The team published these country-level results last month in Cell Patterns, and are also hosting the results at covid19ml.org, where users can click on a map of the world to see how a given country's quarantine strength has changed over time.

What if states had delayed?

Once the researchers validated the model at the country level, they applied it to individual states in the U.S., to see not only how a state's quarantine measures evolved over time, but how the number of infections would have changed if a state modified its quarantine strength, for instance by delaying reopening.

They focused on the south and west-central U.S., where many states were early to reopen and subsequently experienced rapid surges in infections. The team used the model to calculate the quarantine strength for Arizona, Florida, Louisiana, Nevada, Oklahoma, South Carolina, Tennessee, Texas, and Utah, all of which opened before May 15. They also modeled New York, New Jersey, and Illinois -- states that delayed reopening to late May and early June.

They fed the model the number of infected and recovered individuals that was reported for each state, starting from when the 500th infection was reported in each state, up until mid-July. They also noted the day on which each state's stay-at-home order was lifted, effectively signaling the state's reopening.

For every state, the quarantine strength declined soon after reopening; the steepness of this decline, and the subsequent rise in infections, was strongly related to a state's reopening. States that reopened early on, such as South Carolina and Tennessee, had a steeper drop in quarantine strength and a higher rate of daily cases.

"Instead of just saying that reopening early is bad, we are actually quantifying here how bad it was," Dandekar says.

Meanwhile, states like New York and New Jersey, which delayed reopening or enforced quarantine measures such as mask-wearing even after reopening, kept a more or less steady quarantine strength, with no significant rise in infections.

"Now that we can give a measure of quarantine strength that matches reality, we can say, 'What if we kept everything constant? How much difference would the southern states have had in their outlook?'" Rackauckas says.

Next, the team reversed its model to estimate the number of infections that would have occurred if a given state maintained a steady quarantine strength even after reopening. In this scenario, more than 40 percent of infections could have been avoided in each state they modeled. In Texas and Florida, that percentage amounts to about 100,000 preventable cases for each state.

Conceivably, as the pandemic continues to ebb and surge, policymakers could use the model to calculate the quarantine strength needed to keep a state's current infections below a certain number. They could then look through the data to a point in time where the state exhibited this same value, and refer to the type of restrictions that were in place at that time, as a guide to the policies they could put in place at the present time.

"What is the rate of growth of the disease that we're comfortable with, and what would be the quarantine policies that would get us there?" Rackauckas says. "Is it everyone holing up in their houses, or is it everyone allowed to go to restaurants, but once a week? That's what the model can kind of tell us. It can give us more of a refined quantitative view of that question."

Credit: 
Massachusetts Institute of Technology

Massive underground instrument finds final secret of our sun's fusion

A hyper-sensitive instrument, deep underground in Italy, has finally succeeded at the nearly impossible task of detecting CNO neutrinos (tiny particles pointing to the presence of carbon, nitrogen and oxygen) from our sun's core. These little-known particles reveal the last missing detail of the fusion cycle powering our sun and other stars.

In results published Nov. 26 in the journal Nature (and featured on the cover), investigators of the Borexino collaboration report the first detections of this rare type of neutrinos, called "ghost particles" because they pass through most matter without leaving a trace.

The neutrinos were detected by the Borexino detector, an enormous underground experiment in central Italy. The multinational project is supported in the United States by the National Science Foundation under a shared grant overseen by Frank Calaprice, professor of physics emeritus at Princeton; Andrea Pocar, a 2003 graduate alumnus of Princeton and professor of physics at the University of Massachusetts-Amherst; and Bruce Vogelaar, professor of physics at the Virginia Polytechnic Institute and State University (Virginia Tech).

The "ghost particle" detection confirms predictions from the 1930s that some of our sun's energy is generated by a chain of reactions involving carbon, nitrogen and oxygen (CNO). This reaction produces less than 1% of the sun's energy, but it is thought to be the primary energy source in larger stars. This process releases two neutrinos -- the lightest known elementary particles of matter -- as well as other subatomic particles and energy. The more abundant process for hydrogen-to-helium fusion also releases neutrinos, but their spectral signatures are different, allowing scientists to distinguish between them.
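For reference, the textbook CNO-I cycle (standard stellar-physics background, not spelled out in the article) runs as follows, with the two neutrinos coming from the beta-plus decays of nitrogen-13 and oxygen-15:

```latex
\begin{align*}
{}^{12}\mathrm{C} + p &\rightarrow {}^{13}\mathrm{N} + \gamma \\
{}^{13}\mathrm{N} &\rightarrow {}^{13}\mathrm{C} + e^{+} + \nu_e \\
{}^{13}\mathrm{C} + p &\rightarrow {}^{14}\mathrm{N} + \gamma \\
{}^{14}\mathrm{N} + p &\rightarrow {}^{15}\mathrm{O} + \gamma \\
{}^{15}\mathrm{O} &\rightarrow {}^{15}\mathrm{N} + e^{+} + \nu_e \\
{}^{15}\mathrm{N} + p &\rightarrow {}^{12}\mathrm{C} + {}^{4}\mathrm{He}
\end{align*}
```

The net effect is $4p \rightarrow {}^{4}\mathrm{He} + 2e^{+} + 2\nu_e$: the carbon acts purely as a catalyst, which is why each turn of the cycle releases exactly two neutrinos.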

"Confirmation of CNO burning in our sun, where it operates at only a 1% level, reinforces our confidence that we understand how stars work," said Calaprice, one of the originators of and principal investigators for Borexino.

CNO neutrinos: Windows into the sun

For much of their life, stars get energy by fusing hydrogen into helium. In stars like our sun, this predominantly happens through proton-proton chains. However, in heavier and hotter stars, carbon and nitrogen catalyze hydrogen burning and release CNO neutrinos. Finding any neutrinos helps us peer into the workings deep inside the sun's interior; when the Borexino detector discovered proton-proton neutrinos, the news lit up the scientific world.

But CNO neutrinos not only confirm that the CNO process is at work within the sun, they can also help resolve an important open question in stellar physics: how much of the sun's interior is made up of "metals," which astrophysicists define as any elements heavier than hydrogen or helium, and whether the "metallicity" of the core matches that of the sun's surface or outer layers.

Unfortunately, neutrinos are exceedingly difficult to measure. More than 400 billion of them hit every square inch of the Earth's surface every second, yet virtually all of these "ghost particles" pass through the entire planet without interacting with anything, forcing scientists to utilize very large and very carefully protected instruments to detect them.

The Borexino detector lies half a mile beneath the Apennine Mountains in central Italy, at the Laboratori Nazionali del Gran Sasso (LNGS) of Italy's National Institute for Nuclear Physics, where a giant nylon balloon -- some 30 feet across -- filled with 300 tons of ultra-pure liquid hydrocarbons is held in a multi-layer spherical chamber that is immersed in water. A tiny fraction of the neutrinos that pass through the planet will bounce off electrons in these hydrocarbons, producing flashes of light that can be detected by photon sensors lining the water tank. The great depth, size and purity make Borexino a truly unique detector for this type of science.

The Borexino project was initiated in the early 1990s by a group of physicists led by Calaprice, Gianpaolo Bellini at the University of Milan, and the late Raju Raghavan (then at Bell Labs). Over the past 30 years, researchers around the world have contributed to finding the proton-proton chain of neutrinos and, about five years ago, the team started the hunt for the CNO neutrinos.

Suppressing the background

"The past 30 years have been about suppressing the radioactive background," Calaprice said.

Most of the neutrinos detected by Borexino are proton-proton neutrinos, but a few are recognizably CNO neutrinos. Unfortunately, CNO neutrinos resemble particles produced by the radioactive decay of polonium-210, an isotope leaking from the gigantic nylon balloon. Separating the sun's neutrinos from the polonium contamination required a painstaking effort, led by Princeton scientists, that began in 2014. Since the radiation couldn't be prevented from leaking out of the balloon, the scientists found another solution: ignore signals from the contaminated outer edge of the sphere and protect the deep interior of the balloon. That required them to dramatically slow the rate of fluid movement within the balloon. Most fluid flow is driven by heat differences, so the U.S. team worked to achieve a very stable temperature profile for the tank and hydrocarbons, to make the fluid as still as possible. The temperature was precisely mapped by an array of temperature probes installed by the Virginia Tech group, led by Vogelaar.

"If this motion could be reduced enough, we could then observe the expected five or so low-energy recoils per day that are due to CNO neutrinos," Calaprice said. "For reference, a cubic foot of 'fresh air' -- which is a thousand times less dense than the hydrocarbon fluid -- experiences about 100,000 radioactive decays per day, mostly from radon gas."

To ensure stillness within the fluid, Princeton and Virginia Tech scientists and engineers developed hardware to insulate the detector -- essentially a giant blanket to wrap around it -- in 2014 and 2015, then they added three heating circuits that maintain a perfectly stable temperature. Those succeeded in controlling the temperature of the detector, but seasonal temperature changes in Hall C, where Borexino is located, still caused tiny fluid currents to persist, obscuring the CNO signal.

So two Princeton engineers, Antonio Di Ludovico and Lidio Pietrofaccia, worked with LNGS staff engineer Graziano Panella to create a special air handling system that maintains a stable air temperature in Hall C. The Active Temperature Control System (ATCS), developed at the end of 2019, finally produced enough thermal stability outside and inside the balloon to quiet the currents inside the detector, finally keeping the contaminating isotopes from being carried from the balloon walls into the detector's core.

The effort paid off.

"The elimination of this radioactive background created a low background region of Borexino that made the measurement of CNO neutrinos possible," Calaprice said.

"The data is getting better and better"

Before the CNO neutrino discovery, the lab had planned to end Borexino operations at the close of 2020. Now, it appears that data gathering could extend into 2021.

The volume of still hydrocarbons at the heart of the Borexino detector has continued to grow in size since February 2020, when the data for the Nature paper was collected. That means that, beyond revealing the CNO neutrinos that are the subject of this week's Nature article, there is now a potential to help resolve the "metallicity" problem as well -- the question of whether the core, outer layers and surface of the sun all have the same concentration of elements heavier than helium or hydrogen.

"We have continued collecting data, as the central purity has continued to improve, making a new result focused on the metallicity a real possibility," Calaprice said. "Not only are we still collecting data, but the data is getting better and better."

Credit: 
Princeton University

Biomarkers could help predict severe SARS-CoV-2 infection

Heidelberg/Germany, 14 December 2020 - Molecular markers in the blood that are predictive of severe COVID-19 outcomes from SARS-CoV-2 coronavirus infection have been identified in a study by a Chinese research team. The results extend understanding of the pathophysiology and clinical progression of COVID-19, with the potential to identify, early in the course of infection, which individuals are most at risk of developing severe disease and requiring hospital care.

In addition to pneumonia and septic syndrome, a smaller proportion of patients have also developed severe gastrointestinal and/or cardiovascular symptoms as well as neurological manifestations after SARS-CoV-2 infection. This is possible because the angiotensin-converting enzyme 2 (ACE2) receptor used by SARS-CoV-2 for cell entry is found in other organs besides the lungs, including the heart, liver, kidney, pancreas, small intestine and also the central nervous system (CNS), especially the glial (non-neuronal) cells of the brain.

The study took a multi-omics approach, integrating data from different -omics disciplines including cutting-edge transcriptomic, proteomic and metabolomic technologies to identify significantly correlated molecular alterations in patients with COVID-19, especially severe cases. The work evaluated data from 83 individuals in three groups: 16 severe cases, 50 mild cases, and 17 healthy controls without the virus.

Serial blood and throat swab samples were collected from all participants, and to determine whether COVID-19 pathophysiology was associated with particular molecular changes, a total of 23,373 expressed genes, 9,439 proteins, 327 metabolites and 769 extra-cellular RNAs (exRNAs) circulating in the blood were examined. The profiles were significantly different between all three groups.

There were significant differences between mild and severe cases in various immune markers such as type 1 interferon and inflammatory cytokines, which were elevated in the latter, while the former showed robust T cell responses that presumably helped arrest disease progression.

A remarkable and unexpected finding was the existence of significant correlations between the multi-omics data and classical diagnostic blood or biochemical parameters. This was reflected particularly in the proteomic analysis where there was a significant downregulation in the tricarboxylic acid or "Krebs" cycle (TCA) and glycolytic pathways used to release stored energy in both mild and severe patients compared to healthy controls. Conversely, well known host defense pathways, such as the T cell receptor signaling pathway, were elevated in patients with COVID-19.

Another potentially valuable finding for future clinical application was the existence of an association between viral load and disease prognosis in severe COVID-19 patients. Six of the patients with severe symptoms died; on admission to hospital, their throat-swab SARS-CoV-2 RNA loads had been significantly higher than those of patients who survived. A notable finding here was that proteins participating in antiviral processes, including the T cell and B cell receptor signaling pathways, were positively associated with viral load changes in severe patients who survived.

Finally, specific molecules as biomarkers of subsequent COVID-19 outcomes were identified and used to create prognostic classification models. Predictive models based on four types of data worked well, especially those exploiting the clinical covariates and proteomic data, suggesting a possible framework for identifying patients likely to develop severe symptoms in advance so that treatments can be targeted accordingly.
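As a purely illustrative sketch of how such a prognostic classifier works (synthetic data and made-up biomarker values throughout; the study's actual models, features, and coefficients are not reproduced here), a minimal logistic-regression model might look like this:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression classifier by stochastic
    gradient descent. Purely illustrative: it stands in for the
    study's prognostic models, which combined clinical covariates
    with proteomic and other -omics features."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted risk of severe disease
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_severe(w, b, x):
    """1 = predicted to develop severe disease, 0 = not."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

# Synthetic toy data: two made-up normalized biomarker levels per
# patient; label 1 = later developed severe COVID-19.
X = [[0.2, 0.1], [0.3, 0.2], [0.1, 0.3],
     [0.8, 0.9], [0.9, 0.7], [0.7, 0.8]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
```

A real prognostic model would be trained and validated on many more patients and features, but the principle is the same: learn weights on measured markers that separate patients who later deteriorate from those who do not.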

Credit: 
EMBO

First 10 days after leaving hospital carry high risk for COVID-19 patients, study finds

In the first months after their COVID-19 hospital stay, patients face a high risk of ongoing health problems, trips back to the hospital, and death, a growing number of studies has shown.

But the first week and a half may be especially dangerous, according to a new study in JAMA by a team from the University of Michigan and VA Ann Arbor Healthcare System.

COVID-19 patients had a 40% to 60% higher risk of ending up back in the hospital or dying in the first 10 days, compared with similar patients treated at the same hospitals during the same months for heart failure or pneumonia.

By the end of 60 days, the COVID-19 patients' overall risk of readmission or death was lower than that for the other two serious conditions.

Even so, in the first two months, 9% of the COVID-19 patients who survived hospitalization had died, and almost 20% had suffered a setback that sent them back to the hospital.

That's on top of the 18.5% who had died during their hospitalization.

The researchers compared post-hospital outcomes for nearly 2,200 veterans who survived their hospitalization at 132 VA hospitals for COVID-19 this spring and early summer, with outcomes for nearly 1,800 similar patients who survived a stay for pneumonia that wasn't related to COVID-19, and 3,500 who survived a heart failure-related stay, during the same time.

Special vigilance needed

With hundreds of thousands of Americans now hospitalized for severe cases of COVID-19, and hospitals working to free up beds for an ongoing surge, the study suggests a need for special vigilance in the first days after discharge, says John P. Donnelly, Ph.D., M.S.P.H., M.S., the study's first author and an epidemiologist in the Department of Learning Health Sciences at Michigan Medicine, U-M's academic medical center.

"By comparing COVID-19 patients' long-term outcomes with those of other seriously ill patients, we see a pattern of even greater-than-usual risk right in the first one to two weeks, which can be a risky period for anyone," says Donnelly, who is a scholar in a special training program for critical illness data researchers at the U-M Institute for Healthcare Policy and Innovation.

"Now, the question is what to do about it," he says. "How can we design better discharge plans for these patients? How can we tailor our communication and post-hospital care to their needs? And how can we help their caregivers prepare and cope?"

Building evidence

Donnelly worked on the study with Michigan Medicine and VAAHS critical care physicians Hallie Prescott, M.D., M.Sc., and Theodore Iwashyna, M.D., Ph.D.

Prescott is senior author of another recent paper showing slow recovery in COVID-19 patients hospitalized in Michigan hospitals during the state's spring surge.

"Unfortunately," says Iwashyna, "this is yet more evidence that COVID-19 is not 'one and done.' For many patients, COVID-19 seems to set off cascades of problems that are every bit as serious as those we see in other diseases. But too little of our healthcare response -- and too little research -- is designed to help these patients as they continue for days, weeks, even months to recover from COVID-19."

Iwashyna further notes, "It is likely that there are very similar patterns happening in the private sector, but the VA had the data to be able to look early."

More about the study

The study didn't include non-veterans treated at VA hospitals in early-surge states, through the "Fourth Mission" program that offset bed shortages at non-VA hospitals. It also doesn't include any readmissions to non-VA hospitals.

All but 5% of the patients were male, and half were Black, which is not nationally representative but focuses on two high-risk groups. But within the veterans studied, the only factor that made a significant difference in outcomes was age; about half of veterans in their 70s and 80s died in the 60 days after leaving the hospital.

In all, the 2,179 COVID-19 patients spent a total of 27,496 days in the hospital, and the 354 veterans who were readmitted spent a total of 3,728 additional days in the hospital.

The most common reasons listed for rehospitalization were COVID-19, cited for 30% of readmitted patients, and sepsis, seen in 8.5%. More than 22% of the readmitted veterans went to an intensive care unit.

The study included patients with all three conditions whose hospital stays began between March 1 and June 1, and who left the hospital before July 1.

Because these were the early months of the pandemic, Donnelly acknowledges that the experience gained by hospitals during that time, and shared with hospitals in other states with later surges, may have changed post-hospital outcomes.

But the current surge, which is causing hospitals to pull non-specialists into COVID-19 care, and care for other seriously ill patients, may affect outcomes in a negative way.

He and his colleagues hope to continue to study new data from VA and non-VA hospitals as it becomes available, and to compare COVID-19 post-hospital outcomes with those for other serious conditions. Comparisons with patients hospitalized for influenza and other viral illnesses would be important to study, given the widespread false claims that COVID-19 is just a minor illness.

Credit: 
Michigan Medicine - University of Michigan

The un-appeal of banana: liquid e-cigarette flavorings measurably injure lungs

Known for their appetizing flavors, such as bubblegum, banana and strawberry, e-cigarettes continue to grow in popularity around the world. Though makers promote them as a "healthy" alternative to regular tobacco cigarettes, researchers are finding that e-cigarettes, or vaping, still injure the lungs.

In a recent study published in the American Journal of Physiology, teams at University of California San Diego School of Medicine, the Royal Adelaide Hospital and the University of Adelaide School of Medicine in Australia found that the flavoring chemicals of e-cigarette vapor alone can measurably damage the lungs, regardless of the presence of nicotine.

"Ninety-nine percent of e-cigarette liquids are flavored. To create these flavor profiles, companies are adding multiple chemicals to achieve that 'perfect' taste," said Laura Crotty Alexander, MD, associate professor of medicine in the Division of Pulmonary, Critical Care and Sleep Medicine at UC San Diego School of Medicine and section chief of Pulmonary Critical Care at Veterans Affairs San Diego Healthcare System. "These chemicals have been found to be toxic to the lungs. When inhaled, they wreak havoc on the lungs and affect specialized protein levels that help keep the body's immune system on track."

Working with 21 different adults who regularly vaped, the team at UC San Diego found changes in certain inflammatory proteins known to cause disease. In each individual who used e-cigarettes, they discovered irregular protein levels within their saliva and airways compared to individuals who did not vape.

Scientists at University of Adelaide then used in vitro methodologies to observe how human airway cells reacted to vapor applied directly to them from 10 flavored liquids used in e-cigarettes. After exposure, they reported that all of the e-liquids damaged cells, with some flavors being more toxic than others.

Combined, the researchers said the data suggests people who use flavored e-cigarettes are damaging their lungs every time they vape. Among the most toxic: chemical profiles for some chocolate and banana flavors.

"Our study demonstrated to us that the name on the bottle is not what is important, it is what goes into the e-liquids and e-cigarettes that matters," said first author Miranda Ween, PhD, senior postdoctoral researcher in the Lung Research Laboratory at the University of Adelaide and Royal Adelaide Hospital, Australia. "Lung cell toxicity and bacterial clearance by the lung's alveolar macrophages were affected by almost every flavor. Chocolate in particular had an unexpectedly high impact, killing almost all the cells and blocking the ability of macrophages to clear away bacteria almost entirely."

Alveolar macrophages initiate inflammatory responses when harmful organisms are detected in the body. Like street sweepers for the lungs, they are powerful immune cells that continuously devour inhaled bacteria and foreign matter entering the lungs.

"Alveolar macrophages are one of the most important immune cells in our lungs; they are designed to maintain the body's homeostasis," said Crotty Alexander. "These macrophages are the first cells to be exposed when a person inhales vapor. When the vapor is toxic, such as in e-cigarettes, these cells trigger an inflammatory response that disrupts the body's homeostasis, resulting in disease and lung damage."

"Our research shows that allowed flavors for e-cigarettes need to be better defined," said Ween. "This could easily be achieved by limiting e-liquids to a single flavoring chemical which has been tested and safety concentrations determined, research that is unfortunately lacking for something that is so popular worldwide."

Credit: 
University of California - San Diego

What happens when rain falls on desert soils? An updated model provides answers

image: Desert Research Institute (DRI) scientist Yuan Luo stands near a weighing lysimeter at DRI's SEPHAS Lysimeter facility in Boulder City, Nev., November 2020.

Image: 
Ali Swallow/DRI

Las Vegas, Nev. (December 14, 2020) - Several years ago, while studying the environmental impacts of large-scale solar farms in the Nevada desert, Desert Research Institute (DRI) scientists Yuan Luo, Ph.D. and Markus Berli, Ph.D. became interested in one particular question: how does the presence of thousands of solar panels impact desert hydrology?

This question led to more questions. "How do solar panels change the way water hits the ground when it rains?" they asked. "Where does the water go? How much of the rain water stays in the soil? How deep does it go into the soil?"

"To understand how solar panels impact desert hydrology, we basically needed a better understanding of how desert soils function hydraulically," explained Luo, postdoctoral researcher with DRI's Division of Hydrologic Sciences and lead author of a new study in Vadose Zone Journal.

In the study, Luo, Berli, and colleagues Teamrat Ghezzehei, Ph.D. of the University of California, Merced, and Zhongbo Yu, Ph.D. of Hohai University, China, make important improvements to our understanding of how water moves through and gets stored in dry soils by refining an existing computer model.

The model, called HYDRUS-1D, simulates how water redistributes in a sandy desert soil based on precipitation and evaporation data. A first version of the model was developed by a previous DRI graduate student named Jelle Dijkema, but was not working well under conditions where soil moisture levels near the soil surface were very low.

To refine and expand the usefulness of Dijkema's model, Luo analyzed data from DRI's SEPHAS Lysimeter facility, located in Boulder City, Nev. Here, large, underground, soil-filled steel tanks have been installed over truck scales to allow researchers to study natural water gains and losses in a soil column under controlled conditions.

Using data from the lysimeters, Luo explored the use of several hydraulic equations to refine Dijkema's model. The end result, which is described in the new paper, was an improved understanding and model of how moisture moves through and is stored in the upper layers of dry desert soils.

"The first version of the model had some shortcomings," Luo explained. "It wasn't working well for very dry soils with volumetric water content lower than 10 percent. The SEPHAS lysimeters provided us with really good data to help understand the phenomenon of how water moves through dry soils as a result of rainfall and evaporation."
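For background (standard unsaturated-flow theory, not wording from the article), the core of models like HYDRUS-1D is the one-dimensional Richards equation, which couples water content, pressure head, and hydraulic conductivity:

```latex
\frac{\partial \theta(h)}{\partial t}
  = \frac{\partial}{\partial z}
    \left[ K(h) \left( \frac{\partial h}{\partial z} + 1 \right) \right]
  - S(z, t)
```

Here $\theta$ is the volumetric water content, $h$ the pressure head, $K(h)$ the unsaturated hydraulic conductivity, $z$ the vertical coordinate, and $S$ a sink term such as root water uptake or evaporation. The "several hydraulic equations" explored in such refinements amount to different choices of the $\theta(h)$ water-retention and $K(h)$ conductivity relationships, which is exactly where standard formulations struggle at very low water contents.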

In desert environments, understanding the movement of water through soils is helpful for a variety of practical uses, including soil restoration, erosion and dust management, and flood risk mitigation. For example, this model will be useful for desert restoration projects, where project managers need to know how much water will be available in the soil for plants after a desert rainstorm, Berli said. It is also a key piece of the puzzle needed to help answer their original question about how solar farms impact desert hydrology.

"The model is very technical, but all of this technical stuff is just a mathematical way to describe how rainwater moves in the soil once the water hits the soil," Berli said. "In the bigger picture, this study was motivated by the very practical question of what happens to rainwater when falling on solar farms with thousands and thousands of solar panels in the desert - but to answer questions like that, sometimes you have to dig deep and answer more fundamental questions first."

Credit: 
Desert Research Institute

Irrelevant information interferes with making decisions, new research reveals

TROY, N.Y. -- Especially during the holidays, online shopping can be overwhelming. Have you ever found yourself spending hours comparing nearly identical products, delving into details that don't actually matter to you? Have you ever finally reached a decision only to find that the product you want is out of stock? Unfortunately, if so, there is a good chance you did not end up making the best choice.

According to new research from behavioral economist Ian Chadd, an assistant professor at Rensselaer Polytechnic Institute, irrelevant information or unavailable options often cause people to make bad choices. When both elements are present, the probability of a poor decision is even greater.

Published in Experimental Economics, Chadd's research examined the behavioral-economic concept of free disposal, the assumption that extra information can be ignored at no cost. The standard belief has been that if irrelevant information is included when presenting a product or idea, a consumer can simply skip over it without spending time or cognitive effort.

Through an experiment involving 222 individual tests each consisting of more than 40 questions, Chadd's research revealed that, in fact, the way information is presented does matter. Decisions made in an environment of irrelevant information carry time, cognitive, and consequence costs.

"These findings tell us a lot about choice architecture, the design process that goes into the creation of environments where people make decisions," Chadd said. "In environments where you have lots of information available, it's exceptionally important that consumers have the ability to filter out information that they find to be irrelevant."

The research also showed that consumers hold a deep "preference for simplicity" in that they are willing to pay a price to reduce the amount of irrelevant and unavailable information.

"This is important insight for policy makers and choice architects alike," Chadd said. "The goal should always be to opt towards simpler and more flexible presentation of information, so that consumers can decide for themselves what is and is not irrelevant and then not just ignore it if they see it, but also give them the option not to see it."

Credit: 
Rensselaer Polytechnic Institute

Study shows endothelial cell targeting could help fight Covid-19 symptoms

image: SARS-CoV-2 infection in endothelial cells.

Image: 
Stony Brook University

The News in Brief:

SARS-CoV-2 likely activates endothelial cell responses in patients, contributing to the serious lung symptoms, vascular obstruction, and respiratory distress seen in Covid-19.

The study shows that the endothelial cells lining capillaries lack the ACE2 receptors needed for SARS-CoV-2 attachment, suggesting that the virus indirectly activates endothelial cell-linked disease mechanisms that drive the coagulation and inflammation associated with severe Covid-19 disease.

As a result, endothelial cell activation and SARS-CoV-2-induced endothelial cell responses may be potential therapeutic targets for resolving Covid-19 disease.

STONY BROOK, NY, December 14, 2020 - For Covid-19 patients with serious lung disease, targeting endothelial cells (the cells that make up the blood vessel wall and regulate oxygen exchange between the airways and the bloodstream) may be a novel approach to restoring normal lung function. This hypothesis stems from a study by researchers in the Department of Microbiology and Immunology in the Renaissance School of Medicine at Stony Brook University, published in mBio, the leading journal of the American Society for Microbiology.

SARS-CoV-2 causes Covid-19, characterized by pulmonary edema, viral pneumonia, coagulopathy, inflammation and other physiological abnormalities. SARS-CoV-2 uses angiotensin-converting enzyme 2 (ACE2) receptors to infect and damage ciliated epithelial cells in the upper respiratory tract. Yet how SARS-CoV-2 dysregulates vascular functions to cause acute respiratory distress syndrome (ARDS) in Covid-19 patients remains an enigma.

Led by Erich Mackow, PhD, a Professor of Microbiology and Immunology, the team of scientists sought to unravel this mechanism by investigating SARS-CoV-2 infection of human endothelial cells from the lung, brain, heart and kidney that are impacted in COVID-19 patients.

"Claims that endothelial cells are infected by SARS-CoV-2 through ACE2 receptors have never been assessed directly," said Mackow. "Our research revealed that endothelial cells lack ACE2 receptors and that endothelial cells were only SARS-CoV-2 infected after ACE2 receptors were expressed in them. Since endothelial cell functions are dysregulated by SARS-CoV-2, these findings suggest a novel mechanism of regulation that does not require viral infection. Instead it suggests the indirect activation of the endothelium, potentially resulting from surrounding tissue damage, that could be the basis for further research to therapeutically target and restore normal endothelial cell responses."

Mackow adds that their work centers on both endothelial cells and ACE2 functions in Covid-19 disease to identify mechanisms of capillary inflammation and aberrant clotting within vessels. He explains that the research reveals a novel mechanism of clotting and endothelial inflammation observed in the lung and heart of COVID-19 patients.

"A transformative change in the mechanism of endothelial cell dysfunction, not the infection of the cells themselves, changes the way in which disease is initiated and rationales for therapeutic targeting. If endothelial cells are not infected or directly damaged, they can still direct inflammation and clotting by just being activated," he concludes from the research findings.

The team is working on how endothelial cells can be activated by the virus or in response to other SARS-CoV-2 infected lung cells that express ACE2.

The research suggests the potential to therapeutically target activation, rather than infection of the endothelium, as a strategy for resolving coagulation and inflammatory Covid-19 symptoms.

Mackow emphasizes that additional research on endothelial cells and down-regulated ACE2 functions following SARS-CoV-2 infection is necessary to determine targets that could lead to a reduction in respiratory distress and symptoms in Covid-19 patients.

Credit: 
Stony Brook University

Explained: Political polarization

image: Polarization - which divides the population into belligerent groups with rigidly opposed beliefs and identities - has a steely grip on the United States, and University of Houston researcher Alexander Stewart reports that economic inequality is to blame.

Image: 
University of Houston

Political polarization - the ever-present and growing division between Republicans and Democrats along ideological lines - has a powerful grip on the United States, and a University of Houston researcher has helped explain why, reporting his findings in Science Advances.

"Political polarization is linked to economic inequality, and we have created a mathematical model showing a causal explanation for that link," said Alexander Stewart, assistant professor of mathematical biology. "We also show that once polarization occurs, it can spread rapidly to the whole population and persist even when the conditions that produced it have reversed."

Stewart's work offers both a theoretical account and empirical support for the emergence of polarization as a response to economic hardship.

Polarization is a social phenomenon in which a population divides into belligerent groups with rigidly opposed beliefs and identities that inhibit cooperation and undermine pursuit of a common good.

Stewart's models divided people into two groups, based on similarities.

"In our models, the reason that inequality ends up causing polarization is that when people are faced with economic hardship, they become more risk averse. They prefer to have interactions within their own groups, or 'ingroups,' where they have less risk of failure," said Stewart. This leads people to withdraw from interactions with dissimilar 'outgroups,' building toward an attitude of 'us vs. them.'
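That mechanism - hardship raises risk aversion, which raises ingroup preference - can be illustrated with a toy calculation. This is a hypothetical sketch for intuition only, not the authors' published model; the functional form and the constant k are assumptions:

```python
import math

def p_ingroup(hardship, k=2.0):
    """Toy rule: probability an agent chooses a low-risk ingroup interaction.

    hardship is in [0, 1]; with no hardship the agent is indifferent (0.5),
    and preference for the ingroup rises toward 1 as hardship grows.
    The exponential form and k are illustrative assumptions.
    """
    return 0.5 + 0.5 * (1.0 - math.exp(-k * hardship))

def polarization_index(population_hardship):
    """Average ingroup preference across a population: a crude stand-in
    for how fragmented interactions have become."""
    return sum(p_ingroup(h) for h in population_hardship) / len(population_hardship)
```

A population under widespread hardship scores far higher on this index than a comfortable one, mirroring the paper's qualitative claim that hardship driven by inequality breeds polarization.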

Stewart and colleagues also studied the last three presidential election cycles in the United States with data from the American National Election Study and the Cooperative Congressional Election Study.

"Economic inequality has been rising for the last half of the 20th century, and it has reached heights never before seen in our lifetimes," said Stewart. "Rising inequality is sufficient to cause the polarization we see now in public discourse."

That polarization may also explain the emergence of populist and other far-right political movements.

"Economic stress and fear have been shown to be excellent, highly localized predictors of the success of a number of populist movements, including in Ukraine as well as the United States and United Kingdom," reports Stewart.

Credit: 
University of Houston

West Nile virus infection risk is higher in less affluent neighborhoods in Baltimore, MD

image: Graduate student Kaitlin Saunders checks a mosquito trap in Baltimore, Maryland. Collected mosquitoes were tested for pathogens that could spread to people.

Image: 
Photo by Edwin Remsberg

In Baltimore, Maryland, people living in low-income urban neighborhoods are more at risk of contracting West Nile virus, a mosquito-borne disease, than people living in more affluent neighborhoods. So reports a new study published in the Journal of Medical Entomology.

Lead author Sarah Rothman, a graduate student in the Department of Environmental Science and Technology at the University of Maryland College Park, says, "Our study is the first in Baltimore to document how West Nile virus infection in mosquitoes varies relative to neighborhood socioeconomics. Knowing where mosquito abundances are high, and what diseases they carry, can help focus surveillance and management programs where they're needed most."

Mosquito-borne disease is a growing threat in cities throughout the US. Vacant lots and abandoned buildings can create environmental conditions that bolster mosquitoes and the diseases they carry. Overgrown vegetation, standing water for breeding, and access to blood-meals from rodents, cats, and birds can put nearby residents at risk of contracting mosquito-borne diseases like Zika, chikungunya, and West Nile viruses.

This study builds on previous research that found mosquito-borne disease is an environmental justice issue in Baltimore. Co-author Shannon LaDeau, a disease ecologist at Cary Institute of Ecosystem Studies, says, "Past work revealed that lower income neighborhoods tend to have more mosquito habitat than more affluent neighborhoods, leading to higher risk for people who are already vulnerable due to limited access to healthcare. We also found that larger mosquitoes, which may have greater infection potential, thrive in less affluent neighborhoods."

The latest study took place over three years and focused on five neighborhoods in Baltimore representing a socioeconomic range. Focal neighborhoods included two neighborhoods with incomes below the median, two at the median, and one above the median annual household income. All five neighborhoods consist of similar blocks of rowhomes and are located within 2km of each other, minimizing environmental variation.

In 2015 and 2016, the team sampled adult mosquitoes in all five neighborhoods; all but the most affluent neighborhood were also sampled in 2017. Every three weeks from June through September, the team set traps (4-6 traps per neighborhood; 26 traps total) and left them to lure mosquitoes for 72 hours. Traps were baited with carbon dioxide and a mammal-derived attractant, and were placed in shaded, protected sites.

Target mosquito species included the invasive tiger mosquito (Aedes albopictus) and native Culex species (principally the northern house mosquito, Culex pipiens), which are known to thrive in cities and transmit diseases to people. Back in the lab, mosquitoes were tested for West Nile, chikungunya, and Zika viruses.

The team detected West Nile virus in Ae. albopictus and Culex mosquitoes collected in all neighborhoods sampled in 2015 and 2017. West Nile virus was not detected in any neighborhood in 2016; one possible explanation is the extreme heat recorded that summer, which can kill mosquitoes before they can spread disease. Chikungunya and Zika viruses were not detected at any point. In 2015, the mosquito infection rate was highest in the two least affluent neighborhoods. In 2017, mosquito infection rates in medium-income neighborhoods increased substantially, exceeding infection in the least affluent neighborhoods by a slim margin.
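Infection rates from pooled mosquito samples like these are conventionally reported as a minimum infection rate (MIR), which assumes one infected mosquito per virus-positive pool. A sketch of the standard formula; the example counts are illustrative, not the study's data:

```python
def minimum_infection_rate(positive_pools, mosquitoes_tested):
    """Minimum infection rate: estimated infected mosquitoes per 1,000 tested,
    assuming each positive pool contains exactly one infected mosquito."""
    return 1000.0 * positive_pools / mosquitoes_tested

# Hypothetical example: 3 positive pools out of 600 mosquitoes tested
# gives an MIR of 5 infected mosquitoes per 1,000.
```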

LaDeau says, "This study highlights the discrepancy in health risks facing Baltimore residents. Less affluent communities are more at risk of being bitten by a mosquito infected with West Nile virus, and negative health outcomes are compounded by deficient medical resources in these neighborhoods. Monitoring neighborhoods for mosquitoes is a step we can take to identify where mosquitoes are breeding and where residents are at risk. This information can guide measures to mitigate those conditions and protect residents."

Senior author Paul Leisnham, an Associate Professor at the University of Maryland College Park, says, "Protecting residents starts with thorough monitoring. Since mosquito numbers and infection rates vary block to block, coarse-scale sampling makes it easy to miss pockets of infection. Effective management requires consistent, fine-scale surveillance, even in sites where access is tricky."

LaDeau concludes, "This study gets us closer to understanding how mosquito-borne disease risk varies across the urban landscape and socioeconomic divides. As this work continues, we aim to supply city management agencies with information on factors that boost mosquito numbers and infection, to help solve this growing environmental justice challenge."

Credit: 
Cary Institute of Ecosystem Studies

Salt-tolerant bacteria with an appetite for sludge make biodegradable plastics

image: Zobellella denitrificans ZD1 bacteria feed on sludge (both shown in test tube) to make biodegradable bioplastics.

Image: 
Dr. Kung-Hui (Bella) Chu

The United States generates seven million tons of sewage sludge annually, enough to fill 2,500 Olympic-sized swimming pools. While a portion of this waste is repurposed for manure and other land applications, a substantial amount is still disposed of in landfills. In a new study, Texas A&M University researchers have uncovered an efficient way to use leftover sludge to make biodegradable plastics.

In the September issue of the American Chemical Society journal ACS Omega, the researchers report that the bacterium Zobellella denitrificans ZD1, found in mangroves, can consume sludge and wastewater to produce polyhydroxybutyrate, a type of biopolymer that can be used in lieu of petroleum-based plastics. In addition to reducing the burden on landfills and the environment, the researchers said Zobellella denitrificans ZD1 offers a way to cut down upstream costs for bioplastics manufacturing, a step toward making them more competitively priced against regular plastics.

"The price of raw materials to cultivate biopolymer-producing bacteria accounts for 25-45% of the total production cost of manufacturing bioplastics. Certainly, this cost can be greatly reduced if we can tap into an alternate resource that is cheaper and readily obtainable," said Kung-Hui (Bella) Chu, professor in the Zachry Department of Civil and Environmental Engineering. "We have demonstrated a potential way to use municipal wastewater-activated sludge and agri- and aqua-culture industrial wastewater to make biodegradable plastics. Furthermore, the bacterial strain does not require elaborate sterilization processes to prevent contamination from other microbes, further cutting down operating and production costs of bioplastics."

Polyhydroxybutyrate, an emerging class of bioplastics, is produced by several bacterial species when they experience an imbalance of nutrients in their environment. This polymer acts as the bacteria's supplemental energy reserves, similar to fat deposits in animals. In particular, an abundance of carbon sources and a depletion of either nitrogen, phosphorous or oxygen, cause bacteria to erratically consume their carbon sources and produce polyhydroxybutyrate as a stress response.

One such medium that can force bacteria to make polyhydroxybutyrate is crude glycerol, a byproduct of biodiesel manufacturing. Crude glycerol is rich in carbon and has no nitrogen, making it a suitable raw material for making bioplastics. However, crude glycerol contains impurities such as fatty acids, salts and methanol, which can inhibit bacterial growth. Like crude glycerol, sludge from wastewater also has many of the same fatty acids and salts. Chu said that the effects of these fatty acids on bacterial growth and, consequently, polyhydroxybutyrate production had not yet been examined.

"There is a multitude of bacterial species that make polyhydroxybutyrate, but only a few that can survive in high-salt environments and even fewer among those strains can produce polyhydroxybutyrate from pure glycerol," Chu said. "We looked at the possibility of whether these salt-tolerating strains can also grow on crude glycerol and wastewater."

For their study, Chu and her team chose Zobellella denitrificans ZD1, whose natural habitat is the salt waters of mangroves. They then tested the growth and the ability of this bacterium to produce polyhydroxybutyrate in pure glycerol. The researchers also repeated the same experiments with other bacterial strains that are known producers of polyhydroxybutyrate. They found that Zobellella denitrificans ZD1 was able to thrive in pure glycerol and produced the maximum amount of polyhydroxybutyrate relative to its dry weight.

Next, the team tested the growth and ability of Zobellella denitrificans ZD1 to produce polyhydroxybutyrate in glycerol containing salt and fatty acids. They found that even in these conditions, the bacterium produced polyhydroxybutyrate efficiently, including under balanced nutrient conditions. When they repeated the experiments in samples of high-strength synthetic wastewater and wastewater-activated sludge, they found the bacterium was still able to make polyhydroxybutyrate, although at quantities lower than in crude glycerol.

Chu noted that by leveraging Zobellella denitrificans ZD1's tolerance for salty environments, expensive sterilization processes that are normally needed when working with other strains of bacteria could be avoided.

"Zobellella denitrificans ZD1's natural preference for salinity is fantastic because we can, if needed, tweak the chemical composition of the waste by just adding common salts. This environment would be toxic for other strains of bacteria," she said. "So, we are offering a low-cost, sustainable method to make bioplastics and another way to repurpose biowastes that are costly to dispose of."

Credit: 
Texas A&M University

Shedding light on the dark side of biomass burning pollution

Oxidized organic aerosol is a major component of ambient particulate matter, substantially impacting climate, human health, and ecosystems. Oxidized aerosol from biomass burning is especially toxic, containing large amounts of species that are known carcinogens and mutagens. Inhaling biomass burning particles also causes oxidative stress and contributes to a wide range of diseases, such as heart attacks, strokes, and asthma. Oxidized aerosol primarily forms from the atmospheric oxidation of volatile and semi-volatile compounds emitted by sources like biomass burning, resulting in products that readily form particulate matter. Every model in use today assumes that oxidized aerosol forms in the presence of sunlight and that it requires days of atmospheric processing to reach the levels observed in the environment. Naturally, this implies that oxidized aerosol forms in the daytime, mostly during periods with plentiful sunshine, such as in summer.

However, considerable amounts of oxidized organic aerosol form during the wintertime and in other periods of low photochemical activity worldwide, often during periods of intense biomass burning. Models cannot capture this considerable source of oxidized aerosol, underestimating oxidized aerosol levels by a factor of 3-5. This unresolved mystery carries significant implications for public health and climate, given that biomass burning events are often associated with population exposure to very high particulate matter levels. The issue will only become more important in the future, given the increasing intensity, duration, and frequency of wood burning (both domestic and wildfire) throughout the globe.

Research led by the teams of Prof. Athanasios Nenes and Prof. Spyros Pandis of the Center for Studies on Air Quality and Climate Change (C-STACC; http://cstacc.iceht.forth.gr) of the Institute of Chemical Engineering Sciences at the Foundation for Research and Technology Hellas (ICE-HT/FORTH; http://www.iceht.forth.gr) appears to have found the answer to the biomass burning underprediction riddle. The research, supported by the ERC PyroTRACH project (https://twitter.com/pyrotrach), was published by Kodros et al. this week in the journal Proceedings of the National Academy of Sciences of the USA. The study shows that this unexplained source of oxidized secondary particulate matter is nighttime oxidation of biomass burning emissions. Through a combination of laboratory measurements and field observations, the researchers found that emissions from biomass burning are rapidly oxidized overnight and that the aerosol generated is remarkably similar to that observed in wintertime urban environments. This newly discovered mechanism was then introduced into a state-of-the-art air quality model to show that nighttime oxidation of biomass burning emissions can substantially influence organic aerosol levels throughout the United States.
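The rapid overnight conversion the team measured can be caricatured with simple first-order kinetics; the rate constant below is an illustrative assumption, not a value from the Kodros et al. paper:

```python
import math

def oxidized_fraction(t_hours, k_per_hour=0.3):
    """Fraction of fresh biomass-burning organic aerosol converted to
    oxidized aerosol after t_hours of dark (nighttime) chemistry, assuming
    first-order kinetics. k_per_hour is a hypothetical rate constant."""
    return 1.0 - math.exp(-k_per_hour * t_hours)
```

With a rate constant of this order, most of the conversion happens within a single night, rather than over the days of sunlit processing that earlier models assumed.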

The Kodros et al. study is broadly important for a number of reasons. First, it shows beyond doubt that sunlight is not required to rapidly generate significant amounts of oxidized aerosol, which reshapes the understanding of how pollution from biomass burning is formed. Second, this mechanism can explain the paradoxically high levels of organic pollution in urban environments during wintertime haze episodes, such as those in Europe and China. Finally, the work greatly elevates the role of biomass burning as a source of air pollution at night, in winter, and during other periods of low solar activity, when intense haze episodes often occur throughout the world.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Recovery of an endangered Caribbean coral from parrotfish predation

image: A large predation scar on Orbicella annularis coral from a Stoplight parrotfish (Sparisoma viride)

Image: 
Hannah Rempel

Parrotfishes are abundant herbivores that primarily graze upon algae, which may indirectly benefit corals by mitigating coral-algae competition. At a local scale, management efforts to increase populations of parrotfishes are believed to be critically important to maintaining resilient, coral-dominated reefs. Yet, some parrotfish species also occasionally graze coral - a behavior known as corallivory. Corallivory can cause the partial to total mortality of coral colonies and may have long-term impacts such as reduced coral growth and reproductive capacity and increased susceptibility to disease. While evidence suggests that parrotfishes likely have an overall net positive impact on coral communities, they may have detrimental impacts on heavily predated coral species, such as Orbicella annularis.

To better understand the consequences of corallivory for O. annularis, researchers from California Polytechnic State University evaluated coral healing rates from parrotfish predation scars. With these data, they developed a statistical model to predict long-term coral tissue loss from snapshot surveys of parrotfish predation scars. This study, recently published in Coral Reefs, is the first to monitor coral healing rates and recovery thresholds from parrotfish predation scars in the Caribbean.

Coral healing capacity

The researchers monitored the healing of over 400 parrotfish predation scars on O. annularis coral colonies on the Caribbean islands of St. Croix and Bonaire over two months. Their research suggests that the majority of O. annularis healing occurs within the first few weeks and that scars show minimal healing after approximately 45 days. Importantly, they found that scar size strongly influences coral healing rates and recovery thresholds. They observed that smaller scars (≤ 1.25 cm²) often fully healed, while larger scars (≥ 8.2 cm²) - presumably resulting from repetitive, localized predation by parrotfishes - showed minimal healing.

Predicted coral tissue loss

They surveyed the size and abundance of recent parrotfish predation scars present on O. annularis colonies at a single point in time and used their model to predict coral tissue loss from these scars. They observed that 87% of scars were small (≤ 1.25 cm²), and their model predicted that these small scars would fully heal or result in minimal tissue loss. In contrast, while only 6% of observed scars were large (≥ 8.2 cm²), their model predicted that these large scars had minimal healing and would account for 96% of the total tissue loss from parrotfish predation.
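The snapshot-survey idea can be sketched as a simple threshold rule built on the scar-size cutoffs reported in the study; the healing fractions assumed for each size class are illustrative placeholders, not the authors' fitted statistical model:

```python
def predicted_tissue_loss(scar_areas_cm2, full_heal_max=1.25, minimal_heal_min=8.2):
    """Toy predictor of long-term tissue loss (cm^2) from a snapshot of scars.

    Small scars (<= 1.25 cm^2) are assumed to heal fully, large scars
    (>= 8.2 cm^2) not at all, and intermediate scars to recover half their
    area. Thresholds follow the paper; the healing fractions are assumptions.
    """
    loss = 0.0
    for area in scar_areas_cm2:
        if area <= full_heal_max:
            loss += 0.0          # fully heals
        elif area >= minimal_heal_min:
            loss += area         # minimal healing: whole scar persists
        else:
            loss += 0.5 * area   # partial recovery
    return loss
```

Under a rule like this, a survey dominated by small scars predicts almost no lasting tissue loss, while a handful of large scars drives nearly all of it, matching the pattern the authors report.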

Conclusions & future directions

This study suggests that the immediate negative consequences of parrotfish corallivory for O. annularis appear to be driven primarily by a few exceptionally large predation scars. Interestingly, there was no observable difference in healing rates or total coral healing between islands despite stark contrasts in parrotfish and coral community composition on St. Croix and Bonaire. This suggests that these findings and this predictive model may be broadly applicable to other regions of the Caribbean. This study presents an important advance in understanding the recovery of a heavily predated, ecologically important, and endangered Caribbean coral from parrotfish corallivory. The group's ongoing work seeks to address how the intensity of parrotfish corallivory is influenced by the community composition of corals and parrotfishes across multiple spatial scales and regions of the Greater Caribbean.

Credit: 
California Polytechnic State University

A human gene placed in fruit flies reveals details about a human developmental disorder

image: Left to right: Katarina Akhmetova, Igor Chesnokov and Maxim Balasov

Image: 
UAB

BIRMINGHAM, Ala. - Meier-Gorlin syndrome, or MGS, is a rare genetic developmental disorder that causes dwarfism, small ears, a small brain, missing patella and other skeletal abnormalities. In severe cases, MGS results in miscarriages and stillbirths.

Igor Chesnokov, Ph.D., and his University of Alabama at Birmingham colleagues study this recessive, autosomal disorder in an unusual way -- by placing mutant human genes into fruit flies. Specifically, they look at one of the genes involved in MGS called Orc6.

In a study published in Genetics, featured as a highlighted article, they used this animal model to probe the function of one human Orc6 mutation -- a Lysine 23 to Glutamic acid (K23E) substitution -- that was first reported in 2017. In people with MGS, the K23E mutation causes a similar observable developmental disorder as an Orc6 mutation that the Chesnokov team previously studied, Tyrosine 225 to Serine (Y225S) substitution.

Those two mutations are interesting to contrast, because position 23 is near the front, or the N-terminal domain, of the long chain of connected amino acids that folds to form the Orc6 protein. Position 225 is near the end, or the C-terminal domain, of the Orc6 protein strand.

Orc6 is part of the Origin Recognition Complex, or ORC. This complex of proteins is vital to initiate DNA replication in a cell, whether yeast, fruit fly, human or any other eukaryotic organism. Without DNA replication, a cell cannot divide and an organism cannot grow. Poor replication will stunt growth, as is seen in MGS.

In previous research on the Y225S mutation, published in the American Journal of Medical Genetics, the UAB researchers found that the C-terminal domain of Orc6 is important for protein-protein interactions to help build the ORC complex. In the current study, Chesnokov and colleagues have now found that the K23E mutation in the N-terminal domain of Orc6 disrupts the protein's ability to bind to DNA. This specific binding is a vital step in ORC function.

Thus, although the two mutations have different underlying molecular mechanisms, they both cause deficient pre-replicative complex formation and reduced DNA replication, and they produce a similar phenotype in MGS patients.

One key in this research was creating chimeric Orc6 genes that are part human gene and part fruit fly gene. Here is why that was necessary. Putting a human Orc6 gene into fruit flies fails to prevent the lethal effect of an Orc6 deletion in fruit flies; in other words, the intact human Orc6 cannot replace the function of the fruit fly Orc6, due to the difference in Orc6 interactions with the core ORC in the two organisms.

However, when the UAB researchers made a hybrid Orc6 that was human in the N-terminal domain and fruit fly in the C-terminal domain, the hybrid was able to completely rescue the fruit flies, and they grew into adults that were indistinguishable from fruit flies with wild-type Orc6. This hybrid Orc6 then could be used to test the K23E mutation in fruit flies and study its molecular mechanism.

"This hybrid approach," Chesnokov said, "allows the study of human protein functions in an animal system, and it revealed the importance of evolutionary conserved and variable domains of the Orc6 protein. We believe that this hybrid approach not only opens a broad avenue to study new Orc6 mutations for medical and general science purposes, but also might be useful in other humanized models."

In summary, says Chesnokov, a professor in the UAB Department of Biochemistry and Molecular Genetics, this humanized fly model has the unique advantage of being able to differentially test fly, human, and chimeric Orc6 proteins to reveal conserved and divergent features of the protein and its function in the cells of metazoan organisms.

Credit: 
University of Alabama at Birmingham

An unexpected role for the brain's immune cells

image: Katerina Akassoglou and her team at Gladstone Institutes discover that microglia cells constantly survey the brain to prevent spontaneous seizures.

Image: 
Photo: Gladstone Institutes

SAN FRANCISCO, CA--December 14, 2020--An important part of the brain's immune system, cells called microglia constantly extend and retract "branches" from their cell body to survey their environment. Think of an octopus, not moving its body, but reaching its tentacles in every direction. That's how microglia operate. In the span of an hour, each cell will have covered the entire three-dimensional space that surrounds it. And then, it will start all over again.

This continuous and rapid surveillance is a feature unique to microglial cells in the brain. It occurs in your brain all the time, without the presence of disease, and whether you are awake or asleep. Microglia can also rapidly direct their branches toward a site of injury in the brain. The longstanding theory has been that microglia perform this surveillance to sense invasion by an infectious agent or to sense trauma.

"This never made sense to me," says Katerina Akassoglou, PhD, a senior investigator at Gladstone Institutes. "Why would a cell expend so much energy for something that might never happen? I always thought there must be another reason for microglia to be moving all the time, likely related to a normal function in the brain."

As it turns out, Akassoglou was right.

In a recent study published in the journal Nature Neuroscience, she and her team show that, in fact, surveillance by microglia helps prevent seizure activity (or hyperexcitability) in the brain. These findings could open new therapeutic avenues for several diseases, given that hyperexcitability is a feature of many neurological disorders, including Alzheimer's disease, epilepsy, and autism.

Preventing an Overactive Brain

Akassoglou has been interested in the brain's innate immune system since the beginning of her scientific career. She first witnessed microglial surveillance under the microscope during her postdoctoral fellowship in 2003, in a neighboring lab that discovered the phenomenon. Right away, she knew that to understand these cells, she had to find a way to "freeze" their movement.

"That was easier said than done--it took over 10 years to figure out how to stop them from moving," says Akassoglou, who is also a professor of neurology at UC San Francisco (UCSF). "There are ways to kill the cells, but then they are gone and you can't study their movement. It was very challenging to find a way to keep them alive while also preventing them from being able to survey the brain."

She and her team created the first mouse model in which the process of microglial brain surveillance can be blocked. The cells are still alive, but they can no longer extend and retract their branches. Then, the goal of the project was simply to observe what happens.

"It was purely driven by curiosity," Akassoglou says. "We just wanted to know, why do these cells move all the time, and what happens to the brain if they stop?"

Initially, nothing seemed to happen, and the "frozen" microglia appeared normal. Then one day, Victoria Rafalski unexpectedly observed a mouse having a seizure.

"That's when we realized that with microglia not functioning properly, mice were having spontaneous seizures," says Rafalski, PhD, a first author of the study and former postdoctoral scholar in Akassoglou's lab at Gladstone. "It was our first indication that surveillance by these cells might suppress seizure activity. It also gave us a hint as to why they needed to move constantly--suppressing seizures may be a nonstop necessity in the brain."

To investigate further, the researchers relied on the latest technological advances in microscopy and image analysis. They combined these approaches to develop their own method to observe the interaction between microglia and active neurons in a live brain, as mice ran on a wheel while their whiskers were tickled.

The scientists discovered that microglia are not extending their branches at random. Instead, microglia reach out primarily to active neurons, one after another, while paying less attention to non-active neurons. Importantly, they noticed that when microglia touch an active neuron, that neuron's activity does not increase further.

"Microglia seem to sense which neuron is about to become overly active, and keep it in check by making contact with it, which prevents that neuron's activity from escalating," explains the study's other first author, Mario Merlini, former staff research scientist in Akassoglou's lab, who now leads a team at the University of Caen Normandie in France. "In contrast, in our mouse model where microglia movements are frozen, we found that the activity of nearby neurons keeps increasing, a bit like a heater with a broken thermostat. This changed our thinking on how neuronal activity is regulated in the brain. Instead of an on-off switch, microglia are the brain's thermostat, controlling excessive neuronal activity".

These findings helped the team discover a physiological role for microglial surveillance: microglia are essential for maintaining neuronal activity within a normal range by preventing neurons from becoming overactive, or hyperexcitable.

"Network hyperexcitability can be observed in patients with epilepsy and in other conditions in which epilepsy is more likely to occur, such as Alzheimer's disease and autism," says Jorge Palop, PhD, a co-author of the study and associate investigator at Gladstone. "And, a hyperactive brain causes a large number of neurons to fire (or become active) at the same time--a process known as hypersynchrony that can lead to spontaneous seizures. Our study could offer a new way to intervene in diseases with hyperexcitability."

"In many brain diseases, the ability of microglia to survey the brain is impaired," says Akassoglou. "We now have a model to study the consequences of impaired microglia surveillance on brain inflammation and cognition in diseases including Alzheimer's disease, multiple sclerosis, and also brain infection by viruses, like COVID-19."

Knowing that microglia constantly move to keep the brain from becoming hyperexcitable could have therapeutic implications. In fact, hyperactivity in the brain could be reversed with pharmacologic activators that force microglia to extend their branches. In the study, this approach restored microglial process extension during whisker stimulation and brought neuronal activity back to normal levels. Akassoglou and her team are now expanding these studies to test possible beneficial effects in models of disease.

"By unlocking the enigma of microglia's constant movement, we now have new clues for treating devastating brain diseases," says Akassoglou.

Credit: 
Gladstone Institutes