Tech

Columbia researchers uncover altered brain connectivity after prolonged anesthesia

image: Contrary to the notion that the connections between neurons in the adult brain remain stable during short-term anesthesia, the Columbia researchers found that prolonged anesthesia significantly alters the synaptic architecture of the brain regardless of age.

Image: 
Michael Wenzel, MD

Prolonged anesthesia, also known as medically induced coma, is a life-saving procedure carried out across the globe on millions of patients in intensive medical care units every year.

But following prolonged anesthesia--which takes the brain to a state of deep unconsciousness beyond short-term anesthesia for surgical procedures--it is common for family members to report that after hospital discharge their loved ones were not quite the same.

"It is long known that ICU survivors suffer lasting cognitive impairment, such as confusion and memory loss, that can languish for months and, in some cases, years," said Michael Wenzel, MD, lead author of a study published in PNAS this month that documents changes associated with prolonged anesthesia in the brains of mice.

Wenzel, a former postdoctoral researcher at Columbia University with experience as a physician in neuro-intensive care in Germany, said reports of post-hospital cognitive dysfunction will likely become even more prevalent because of the significant number of coronavirus patients dependent on ventilators who have taken days or weeks to awaken from medically induced comas.

Until now, despite the body of evidence that supports the association between prolonged anesthesia and cognition, the direct effects on neural connections have not been studied, said Rafael Yuste, a professor of Biological Sciences at Columbia and senior author of the paper.

"This is because it is difficult to examine the brains of patients at a resolution high enough to monitor connections between individual neurons," Yuste said.

To circumvent the problem, Yuste and Wenzel developed an experimental platform in mice to investigate the connections between neurons, or synapses, and related cognitive effects of prolonged anesthesia.

Inspired by Wenzel's experience in neuro-intensive care, the researchers established a miniature ICU-like platform for mice. They performed continuous anesthesia for up to 40 hours, many times longer than the longest animal study to date (only six hours).

The researchers performed in vivo two-photon microscopy, a type of neuroimaging that Yuste helped pioneer and that can visualize live brain structures at micrometer resolution. The technique enabled them to monitor synapses in the sensory cortex, the area of the brain responsible for processing bodily sensations, an approach they combined with repeated behavioral assessments.

Contrary to the notion that the connections between neurons in the adult brain remain stable during short-term anesthesia, the researchers found that prolonged anesthesia significantly alters the synaptic architecture of the brain regardless of age.

"Our results should ring an alarm bell in the medical community, as they document a physical link between cognitive impairment and prolonged medically induced coma," Wenzel said.

As this study is only a pilot in mice, further study is needed, the researchers said. They added that it will be important to test different, widely used anesthetics, as well as the combination of anesthetics administered to patients. Currently, anesthetics are not individually tailored to patients in a systematic fashion.

"We are well aware that anesthesia is a life-saving procedure," Wenzel said. "Refining treatment plans for patients and developing supportive therapies that keep the brain in shape during prolonged anesthesia would substantially improve clinical outcomes for those whose lives are saved, but whose quality of life has been compromised."

Credit: 
Columbia University

How sessile seahorses managed to speciate and disperse across the world's oceans

An international research collaboration involving the team led by evolutionary biologist Professor Axel Meyer at the University of Konstanz and researchers from China and Singapore has identified factors that led to the success of the seahorse from a developmental biology perspective: its quickness to adapt, for example by repeatedly evolving spines in the skin, and its fast rates of genetic evolution. The results will be published on 17 February 2021 in Nature Communications.

Seahorses of the genus Hippocampus emerged about 25 million years ago in the Indo-Pacific region from pipefish, their closest relatives. And while pipefish usually swim fairly well, seahorses lack pelvic and tail fins and instead evolved a prehensile tail that can be used, for example, to hold on to seaweed or corals. Early on, they split into two main groups. "One group stayed mainly in the same place, while the other spread all over the world", says Dr Ralf Schneider, who is now a postdoctoral researcher at the GEOMAR Helmholtz Centre for Ocean Research Kiel and participated in the study as a doctoral researcher in Axel Meyer's research team. In their original home waters of the Indo-Pacific, the remaining species diversified in a unique island environment, while the other group made its way into the Pacific Ocean via Africa, Europe and the Americas.

Travelling the world by raft

The particularly large amount of data collected for the study enabled the research team to create an especially reliable seahorse tree showing the relationships between species and the global dispersal routes of the seahorse. Evolutionary biologist Dr Schneider says: "If you compare the relationships between the species to the ocean currents, you notice that seahorses were transported across the oceans". If, for example, they were carried out to sea during storms, they used their grasping tails to hold on to anything they could find, like a piece of algae or a tree trunk, on which they could survive for a long time. The currents often swept these "rafts" hundreds of kilometres across the ocean before they landed someplace where the seahorses could hop off and find a new home.

Since seahorses have been around for more than 25 million years, it was important to factor in that ocean currents have changed over time as tectonic plates have shifted. For example, about 15 million years ago, the Tethys Ocean was almost as large as today's Mediterranean Sea. On the west side, where the Strait of Gibraltar is located today, it connected to the Atlantic Ocean. On the east side, where the Arabian Peninsula is today, it led to the Indian Ocean.

Tectonic shifts change ocean currents

The researchers were able to show, for example, that seahorses colonized the Tethys Ocean via the Arabian Sea just before the tectonic plates shifted and sealed off the eastern connection. The resulting current flowing westward towards the Atlantic Ocean brought seahorses to North America. A few million years later, the western connection also closed and the entire Tethys Ocean dried out. Ralf Schneider: "Until now it was unclear whether seahorses in the Atlantic all traced their lineage to species from the Arabian Sea that had travelled south along the east coast of Africa, around the Cape of Good Hope and across the southern Atlantic Ocean to reach South America. We found out that a second lineage of seahorses had done just that, albeit later".

Since the research team gathered 20 animal samples from each habitat, it was also possible to measure the genetic variation between individuals. This revealed a general pattern: the greater the variation, the larger the population. "We can reconstruct the age of a variation based on its type. This makes it possible to calculate the size of the population at different points in time", the evolutionary biologist explains. This calculation reveals that the population that crossed the Atlantic Ocean to North America was very small, supporting the hypothesis that it descended from just a few animals brought there by the ocean's currents while holding on to a raft. The same data also showed that, even today, seahorses from Africa cross the southern Atlantic Ocean and introduce their genetic material into the South American population.
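The link the researchers draw between genetic variation and population size can be illustrated with a standard population-genetics calculation. The sketch below is a minimal, hypothetical example (the segregating-site counts, sequence length, and mutation rate are invented, not taken from the study): it uses Watterson's estimator to turn the amount of variation observed in a sample into a rough effective population size.

```python
# Hypothetical sketch: relating observed genetic variation to effective population size
# via Watterson's estimator. All numbers below are illustrative, not data from the study.

def watterson_theta_per_site(segregating_sites: int, n_sequences: int, seq_length: int) -> float:
    """Watterson's estimator per site: theta_W = S / (a_n * L), with a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = sum(1.0 / i for i in range(1, n_sequences))
    return segregating_sites / (a_n * seq_length)

def effective_population_size(theta_per_site: float, mutation_rate: float) -> float:
    """For diploids, theta = 4 * Ne * mu per site, so Ne = theta / (4 * mu)."""
    return theta_per_site / (4.0 * mutation_rate)

# Two imaginary populations, each sampled as 20 diploid individuals (40 sequences),
# sequenced over 1 Mb, with an assumed per-site, per-generation mutation rate.
seq_length, mu, n_sequences = 1_000_000, 1e-8, 40
for label, S in [("high-variation population", 14_000), ("low-variation population", 1_400)]:
    theta = watterson_theta_per_site(S, n_sequences, seq_length)
    ne = effective_population_size(theta, mu)
    print(f"{label}: theta_W = {theta:.2e} per site, implied Ne ~ {ne:,.0f}")
```

Under these made-up inputs, the population with ten times more segregating sites comes out roughly ten times larger, which is the qualitative relationship described above.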

Fast and flexible adaptation

Seahorses not only spread around the world by travelling with the ocean currents, they were also surprisingly good at settling into new habitats. Seahorses have greatly modified genomes: throughout their evolution they have lost many genes, gained new ones and acquired duplicates. As a result, seahorses change very quickly in comparison to other fish. This is probably why different types of protective "bony spines", which shield seahorses from predation in some habitats, evolved quickly and independently of each other.

Some of the genes that exhibit particular modifications in certain species have been identified, but they are not the same for all species. Multiple fast and independent selection events led to the development of spines, and although the same genes play a role in this development, different mutations were responsible. This shows that the slow-moving, sessile seahorses were nevertheless able to adapt quickly to their environments, which the research team cites as one of the main reasons seahorses have been so successful in colonizing new habitats.

Credit: 
University of Konstanz

Credit card-sized soft pumps power wearable artificial muscles

image: Thin and light new pump which is the same size as a credit card

Image: 
Tim Helps, University of Bristol

Robotic clothing that is entirely soft and could help people to move more easily is a step closer to reality thanks to the development of a new flexible and lightweight power system for soft robotics.

The discovery by a team at the University of Bristol could pave the way for wearable assist devices for people with disabilities and people suffering from age-related muscle degeneration. The study is published today [17 February] in Science Robotics.

Soft robots are made from compliant materials that can stretch and twist. These materials can be made into artificial muscles that contract when air is pumped into them. The softness of these muscles makes them well suited to powering assistive clothing. Until now, however, these pneumatic artificial muscles have been powered by conventional electromagnetic (motor-driven) pumps, which are bulky, noisy, complex and expensive.

Researchers from Bristol's SoftLab and Bristol Robotics Laboratory led by Jonathan Rossiter, Professor of Robotics, have successfully demonstrated a new electro-pneumatic pump that is soft, bendable, low-cost and easy to make.

In the paper the team describe how the new credit card-sized soft pump can power pneumatic bubble artificial muscles and pump fluids. The team also outline their next steps to make power clothing a reality.

Professor Rossiter, from the Department of Engineering Mathematics at Bristol and Head of the Soft Robotics group at BRL, said: "The lives of thousands of people with mobility issues could be transformed with this new technology. The new pumps are an important development that will help us deliver comfortable and stylish power-assisting clothing.

"We are now working to make the electro-pneumatic pumps smaller and more efficient and are actively seeking partners to commercialise the technologies."

Credit: 
University of Bristol

Genotoxic E. coli 'caught in the act'

image: Immunofluorescence staining shows that genotoxic colibactin-producing E. coli (green) cause DNA damage (indicated by the presence of the DNA repair protein γH2AX, white) and megalocytosis (abnormal enlargement of cells) (right). This is not observed for cells infected with a mutant E. coli strain (E. coli ΔclbR) that is defective for colibactin synthesis (left). Phalloidin (red) stains for actin filaments of the cells and DNA is shown in blue.

Image: 
Max Planck Institute for Infection Biology / Amina Iftekhar

Escherichia coli bacteria are constitutive members of the human gut microbiota. However, some strains produce a genotoxin called colibactin, which is implicated in the development of colorectal cancer. While it has been shown that colibactin leaves very specific changes in the DNA of host cells that can be detected in colorectal cancer cells, such cancers take many years to develop, leaving the actual process by which a normal cell becomes cancerous obscure. The group of Thomas F. Meyer at the Max Planck Institute for Infection Biology in Berlin together with their collaborators have now been able to "catch colibactin in the act" of inducing genetic changes that are characteristic of colorectal cancer cells and cause a transformed phenotype - after only a few hours of infection.

More than two-thirds of colorectal cancer patients carry colibactin-producing E. coli strains in their gut and the number of carriers is rising in the western world. Epidemiological evidence for a link between certain bacterial species and some forms of human cancer abound - but it remains difficult to provide the direct proof required to justify extensive prevention strategies. Meyer's team recently provided the first definitive evidence for such a link by identifying the genetic signature colibactin leaves in host cells, and showing that it can be detected in a subgroup of colorectal cancers.

Now they have gone a significant step further by utilizing organoids to observe transformation itself. This new technology makes it possible to grow normal, primary colon epithelial cells in culture in the form of 3D spheres. These hollow "mini-organs" are generated by the adult stem cells that drive the rapid turnover of the colonic mucosa. Prior to the advent of this technology, infection experiments in vitro required cell lines, which are already partially transformed and thus unsuitable for recapitulating the very early stages of cancer development. To test whether colibactin-producing E. coli have any lasting effect on host cells, the team infected their organoids for three hours. This was already sufficient to induce changes that are characteristic of colorectal cancer. Not only did the infected cells begin to proliferate faster than normal, but a subset of cells no longer required the presence of Wnt protein in the growth medium.

Growth factor drives stem cell turnover

This critical "growth factor" is present in the environment surrounding the stem cells in the bottom of colon glands and drives their turn-over. Under healthy conditions, uncontrolled proliferation of the cells is prevented as soon as they leave this Wnt-containing niche. "Then they cease proliferation and take over digestive functions, only to be sloughed off once they reach the surface, pushed along by the continuous stream of cells leaving the stem cell niche," says Michael Sigal, one of the senior authors who recently established his own laboratory at the Charité University Hospital in Berlin to study the phenomenon in greater detail. He further explains: "The same phenomenon can be observed in the organoid cultures: they require the continuous presence of Wnt to keep growing. Without it, the cells differentiate and die shortly afterwards."

Such growth factor independence, as observed for the infected organoids, is a characteristic of early colorectal cancer cells. Sequencing of these organoids revealed that they contained numerous mutations, including large structural changes that led to whole sections of chromosomes being lost, gained or rearranged. "Surprisingly, we did not observe mutations in genes directly involved in Wnt signaling, which are known to cause colorectal cancer in patients who inherited such mutations. Instead, we found mutations related to p53 signaling", says Amina Iftekhar, first author on the new paper. This important tumour suppressor is known as the "guardian of the genome" and so far, only a few studies had hinted at the possibility that it may also affect Wnt dependence.

Mutations in the p53 signaling pathway

Thomas F. Meyer explains that these findings fit well with the evidence from large cancer sequencing programs: "It is clear that colorectal cancer can arise through different mechanisms. In cases driven by chronic inflammation, such as colitis or Crohn's disease, where colibactin-producing E. coli strains are particularly prominent, mutations in p53 are indeed found to be an early event." And the large chromosomal rearrangements they observed are found in the majority of colorectal cancer cases.

According to Meyer, this has important implications: "Although the majority of colorectal cancer patients carry colibactin-producing E. coli, we were puzzled by the fact that the colibactin signature can only be detected in a small proportion - up to ten percent. Our new results now suggest that the characteristic signature is the result of proper removal of the cross-links from the damaged sites in the DNA. If this repair process is jeopardized or the repair machinery gets overloaded, gross chromosomal changes and aberrations seem to occur instead when the damaged cells attempt to overcome the DNA cross-links. Evidence of such botched repair is frequent in colorectal cancers and suggests that the carcinogenic effect of colibactin may be substantially greater than the ten percent of cases suggested by the signature alone".

Credit: 
Max-Planck-Gesellschaft

Scientists develop blood test to predict environmental harms to children

Scientists at Columbia University Mailman School of Public Health developed a method using a DNA biomarker to easily screen pregnant women for harmful prenatal environmental contaminants like air pollution linked to childhood illness and developmental disorders. This approach has the potential to prevent childhood developmental disorders and chronic illness through the early identification of children at risk.

While environmental factors--including air pollutants--have previously been associated with DNA markers, no studies to date have used DNA markers to flag environmental exposures in children. Study results are published online in the journal Epigenetics.

There is ample scientific evidence that links prenatal environmental exposures to poor outcomes in children, yet so far there is no early warning system to predict which children are at highest risk of adverse health outcomes. The researchers took a major step toward overcoming this barrier by identifying an accessible biomarker measured in a small amount of blood to distinguish newborns at elevated risk due to prenatal exposure. They used air pollutants as a case study, although they say their approach is easily generalizable to other environmental exposures, and could eventually be made into a routine test.

The researchers used machine learning analysis of umbilical cord blood collected through two New York City-based longitudinal birth cohorts to identify locations on DNA altered by air pollution. (DNA can be altered through methylation, which modifies gene expression and can, for example, affect the amounts of proteins that are important for development.) Study participants had known levels of exposure to air pollution measured through personal and ambient air monitoring during pregnancy, with specific measures of fine particulate matter (PM2.5), nitrogen dioxide (NO2), and polycyclic aromatic hydrocarbons (PAH).

They tested these biomarkers and found that they could be used to predict prenatal exposure to NO2 and PM2.5 (which were monitored throughout pregnancy), although only with modest accuracy. PAH (which was only monitored for a short period during the third trimester) was less well predicted. The researchers now plan to apply their biomarker discovery process using a larger pool of data collected through the ECHO consortium, which potentially could lead to higher levels of predictability. It might also be possible to link these biomarkers with both exposures and adverse health outcomes. With better predictability and lower cost, the method could become a routine test used in hospitals and clinics.
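As a rough illustration of the kind of analysis described above, the sketch below trains a regularized regression model on DNA-methylation features to predict an exposure level and reports a cross-validated accuracy. It is a minimal, hypothetical example: the feature matrix, exposure values, and model choice are placeholders, not the authors' actual pipeline.

```python
# Minimal, hypothetical sketch of predicting a prenatal exposure level (e.g., PM2.5)
# from cord-blood DNA-methylation features. The data are simulated placeholders;
# this is not the study's actual pipeline, cohort, or feature set.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_newborns, n_cpg_sites = 200, 1000                              # simulated cohort and methylation panel
methylation = rng.uniform(0.0, 1.0, (n_newborns, n_cpg_sites))   # beta values in [0, 1]

# Simulate an exposure that depends weakly on a small subset of CpG sites plus noise,
# mimicking the "modest accuracy" reported for NO2 and PM2.5 prediction.
informative_sites = rng.choice(n_cpg_sites, size=20, replace=False)
exposure = methylation[:, informative_sites].sum(axis=1) + rng.normal(0.0, 1.5, n_newborns)

# A regularized linear model selects a sparse set of predictive CpG sites;
# cross-validated R^2 gives a sense of how well exposure can be inferred.
model = ElasticNetCV(cv=5, random_state=0)
r2_scores = cross_val_score(model, methylation, exposure, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2_scores.mean():.2f} +/- {r2_scores.std():.2f}")
```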

"Using a small sample of cord blood, it may be possible to infer prenatal environmental exposure levels in women where exposures were not explicitly measured," says senior author Julie Herbstman, PhD, director of Columbia Center for Children's Environmental Health (CCCEH) and associate professor of Environmental Health Sciences. "While further validation is needed, this approach may help identify newborns at heightened risk for health problems. With this information, clinicians could increase monitoring for high-risk children to see if problems develop and prescribe interventions, as needed."

Approximately 15 percent of children in the United States ages 3 to 17 years are affected by neurodevelopmental disorders, including attention-deficit/hyperactivity disorder (ADHD), learning disabilities, intellectual disability, autism and other developmental delays. The prevalence of childhood asthma in the US is 8 percent, with the highest rates in African-American boys. Environmental exposures are known or suspected contributors to multiple childhood disorders and are by nature preventable once identified as harmful. Prenatal air pollution exposure has been associated with adverse neurodevelopmental and respiratory outcomes, as well as obesity.

Credit: 
Columbia University's Mailman School of Public Health

Platelets may play key role in development of lupus

Québec City, February 17, 2021 - Platelets may play a key role in the development of lupus, according to a study published today by researchers at Université Laval and CHU de Québec-Université Laval Research Centre. Extracellular DNA circulating in the blood of patients with lupus causes the inflammatory reaction associated with the disease. The researchers have shown that this DNA comes in part from the platelets, better known for their role in coagulating blood. The details of the breakthrough have been published today in Science Translational Medicine and could lead to a better understanding of the disease and more effective treatment.

"Lupus is an autoimmune disease that causes chronic inflammation of various body parts, particularly the joints, skin, brain, and kidneys," explained lead author Éric Boilard, professor at the Université Laval Faculty of Medicine and researcher at CHU de Québec-Université Laval Research Centre. "It strikes 40 people per 100,000, frequently between the ages of 20 and 40, and is nine times more prevalent in women than in men. Lupus presents in a variety of ways and can be difficult to diagnose."

One common denominator of severe forms of the disease is the presence of anti-DNA antibodies in the blood. "When DNA circulates freely in the blood, antigen-antibody complexes form and accumulate in the tissues where the lupus presents. Until now, we didn't know exactly where this genetic material was coming from," said Professor Boilard, who is also a researcher at the ARThrite research centre.

In collaboration with fellow professor and clinical researcher Paul R. Fortin, Boilard's team analyzed blood samples from 74 patients with lupus and discovered that the platelets were the source of the extracellular DNA. "To be precise, the DNA is present in the platelet mitochondria. Most of the DNA was actually still inside the mitochondria in the blood we studied. The body produces antibodies against the mitochondria and the mitochondrial DNA because it considers them foreign bodies," explained Professor Boilard.

When the platelets are activated, the mitochondria and their DNA are released. "But this activation does not seem to be involved in normal platelet functions such as preventing bleeding," said Boilard. "If we can figure out how to interrupt this activation process, we can prevent the mitochondria and the mitochondrial DNA from being released, which will reduce the autoimmune reaction we see with this disease."

Credit: 
Université Laval

Helping Congress get the most from research

UNIVERSITY PARK, Pa. -- In a new study, Penn State researchers demonstrated that facilitating researcher-policymaker interactions in rapid response processes can influence both how legislators think about policy issues and how they draft legislation.

Penn State professors Max Crowley, associate professor of human development and family studies, and public policy, and Taylor Scott, assistant research professor in the Edna Bennett Pierce Prevention Research Center, co-direct the Research-to-Policy Collaboration, which connects members of Congress with researchers who synthesize evidence about family and child policy in a timely and digestible manner.

The Research-to-Policy Collaboration has the potential to improve the quality of information available to Congress, increase the impact of relevant research, and create more common ground among American lawmakers at a polarized point in our history, said the researchers.

"We believe that lawmakers can make better use of research throughout the planning, decision making, allocation of resources, and implementation of policies," said Crowley. "The goal of our work is to build a bridge between the research community and policy community. This study examined whether Congress would put research to better use if we facilitated researchers' rapid responses to their specific questions."

The research team wants to improve how lawmakers use scientific evidence, but the researchers do not lobby Congress. In lobbying, people try to influence how lawmakers act on an issue. The Research-to-Policy Collaboration provides evidence -- not opinions -- on specific legislation or federal programs.

"Issues relating to children are important to everyone, no matter where they fall on the political spectrum," Scott said. "There are a lot of opportunities for non-partisan or bipartisan conversations about children and family issues, which is not always the case for a lot of other important topics."

The researchers hope to change the culture of how Congress uses research. Like many other people, lawmakers at times cherry-pick statistics or cite single research studies that reinforce their entrenched positions on issues, said the researchers. People across the political spectrum are prone to using research in this tactical manner when addressing polarizing topics like climate change, healthcare or taxes.

In contrast, researchers in the Research-to-Policy Collaboration seek to increase the use of research evidence when policies are conceptualized or framed. For example, when drafting new laws, lawmakers could consider funding programs or policies that have been shown to be effective by research, said the researchers. This model encourages the use of research evidence as a tool for informed decision making and does not support tactical uses of research for bolstering a political position.

The difference between lobbying and collaboration is not lost on those who participated in the study, explained the researchers. A counsel to a senator who worked with the researchers said, "It was not lobbyists asking us for something but really us asking what we needed and them providing it back, so it was a really helpful relationship."

This study was the Research-to-Policy Collaboration model's first experiment involving Congress, and the results were clear, the researchers reported. Participating legislative offices sponsored more than 20% more bills containing research language than legislative offices in the control group of the study.

Furthermore, participating members of Congress did not become more likely to single out select statistics or cite individual research studies to defend an entrenched position, according to the study. The members also showed a modest increase in their belief that research evidence is valuable for understanding how to think about problems when developing legislation.

Crowley and Scott are optimistic both because of the potential in the Research-to-Policy Collaboration model and because they believe that members of Congress want to use the best available information to make the most informed decisions possible.

"In my experience, people want to use science, no matter their party affiliation," Crowley stated. "The use of science is not partisan, per se."

Crowley and Scott recognize that this approach will not solve partisan issues in lawmaking bodies, they said, but they hope that promoting the use of scientific evidence can establish a common language for debate.

"Recently, our society has struggled to find common ground about what is fact and what is truth," said Scott. "If we can enable people of different parties to understand scientific evidence, then we can start the process of finding common ground.

"In our work, we have seen lawmakers take scientific evidence that we provided across the aisle to their colleagues," Scott continued. "We also have seen those recipients embrace the evidence, and this has served as the starting point for meaningful conversation."

Credit: 
Penn State

New tech aims to tackle 'disseminated intravascular coagulation' blood disorder

Researchers have developed a new tool for addressing disseminated intravascular coagulation (DIC) - a blood disorder that proves fatal in many patients. The technology has not yet entered clinical trials, but in vivo studies using rat models and in vitro models using blood from DIC patients highlight the tech's potential.

"DIC basically causes too much clotting and too much bleeding at the same time," says Ashley Brown, corresponding author of a paper on the work. "Small blood clots can form throughout the circulatory system, often causing organ damage. And because this taxes the body's supply of clotting factors, patients also experience excess bleeding. Depending on the severity of DIC, between 40% and 78% of patients with DIC die.

"DIC is associated with a number of other conditions, such as sepsis and cancer - and it is very difficult to treat," adds Brown, who is an assistant professor in the Joint Department of Biomedical Engineering at North Carolina State University and the University of North Carolina at Chapel Hill. "Doctors often focus on trying to treat underlying condition. But if the DIC is bad, doctors face a dilemma: if they treat the bleeding, they'll make the clotting worse; if they treat the clotting, they'll make the bleeding worse. Our goal is to find a clinical intervention that addresses this dilemma. And our results so far are promising."

Brown and her collaborators have developed a technique that makes use of nanogel spheres. The spheres are engineered to bind to a protein called fibrin, which is the main protein found in blood clots. As a result, the spheres will travel through the bloodstream until they reach a blood clot, at which point they will stick to the fibrin in the clot.

These nanospheres are about 250 nanometers in diameter and are porous. In this case, the researchers have loaded the nanospheres with tissue-type plasminogen activator (tPA) - a drug that breaks down clots.

"Based on in vitro testing and testing in a rat model, we found that where you have pre-formed clots (not active bleeding) that the tPA spheres stick to fibrin and break up the clots," says Emily Mihalko, first author of the paper and a Ph.D. candidate in the joint biomedical engineering department. "Breaking up these clots also releases other clot constituents, such as platelets, which evidence suggests may be re-recruited by the body at active clotting sites (i.e., places where there was actually bleeding)."

In one study, the researchers evaluated the use of tPA-loaded targeted nanospheres in a rat model of DIC stemming from sepsis. In that study, the researchers found that delivering tPA via the targeted nanospheres eliminated 91% and 93% of the clots found in the heart and lungs, respectively, and 77% of the clots found in the liver and kidneys.

"We also did in vitro testing using blood plasma from patients with DIC, and found similarly promising results," Brown says.

"We are currently exploring different dosages in the animal model," Mihalko says. "And are doing work to better understand how the particles are distributed in the body and how long it takes before they are cleared by the body - which is important information for addressing safety considerations prior to any clinical trials."

The researchers note that it is too early to put a price tag on any potential treatments that make use of the technology. However, they note that the targeted nanogels mean that treatment would likely involve using smaller doses of tPA than are currently in clinical use.

"The cost of the creating the targeted nanospheres would likely offset the savings from using less tPA, so we suspect it may be comparable to the cost of conventional tPA therapies," Brown says.

Credit: 
North Carolina State University

How the 'noise' in our brain influences our behavior

The brain's neural activity is irregular, changing from one moment to the next. To date, this apparent "noise" has been thought to be due to random natural variations or measurement error. However, researchers at the Max Planck Institute for Human Development have shown that this neural variability may provide a unique window into brain function. In a new Perspective article out now in the journal Neuron, the authors argue that researchers need to focus more on neural variability to fully understand how behavior emerges from the brain.

When neuroscientists investigate the brain, its activity seems to vary all the time. Sometimes activity is higher or lower, rhythmic or irregular. Whereas averaging brain activity has served as a standard way of visualizing how the brain "works," the irregular, seemingly random patterns in neural signals have often been disregarded. Strikingly, such irregularities in neural activity appear regardless of whether single neurons or entire brain regions are assessed. Brains simply always appear "noisy," prompting the question of what such moment-to-moment neural variability may reveal about brain function.

Across a host of studies over the past 10 years, researchers from the Lifespan Neural Dynamics Group (LNDG) at the Max Planck Institute for Human Development and the Max Planck UCL Centre for Computational Psychiatry and Ageing Research have systematically examined the brain's "noise," showing that neural variability has a direct influence on behavior. In a new Perspective article published in the journal Neuron, the LNDG in collaboration with the University of Lübeck highlights what is now substantial evidence supporting the idea that neural variability represents a key, yet under-valued dimension for understanding brain-behavior relationships. "Animals and humans can indeed adapt successfully to environmental demands, but how can such behavioral success emerge in the face of neural variability? We argue that neuroscientists must grapple with the possibility that behavior may emerge because of neural variability, not in spite of it," says Leonhard Waschke, first author of the article and LNDG postdoctoral fellow.

A recent LNDG study published in the journal eLife exemplifies the direct link between neural variability and behavior. Participants' brain activity was measured via electroencephalography (EEG) while they responded to faint visual targets. When people were told to detect as many visual targets as possible, neural variability generally increased, whereas it was downregulated when participants were asked to avoid mistakes. Crucially, those who were better able to adapt their neural variability to these task demands performed better on the task. "The better a brain can regulate its 'noise,' the better it can process unknown information and react to it. Traditional ways of analyzing brain activity simply disregard this entire phenomenon." says LNDG postdoctoral fellow Niels Kloosterman, first author of this study and co-author of the article in Neuron.
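A toy example of the kind of measure discussed here: the sketch below computes a simple moment-to-moment variability index (the standard deviation of each trial's signal) for simulated EEG under two task conditions and compares them. It is purely illustrative; the eLife study's actual variability measures and analysis pipeline are more sophisticated.

```python
# Toy sketch: quantifying moment-to-moment neural variability from (simulated) EEG
# under two task conditions. Illustrative only; not the study's analysis pipeline.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 100, 500   # simulated trials of 500 time points each

# Simulated single-channel EEG: the "detect as many targets as possible" condition
# is given a slightly higher noise amplitude than the "avoid mistakes" condition.
eeg_liberal      = rng.normal(0.0, 1.2, size=(n_trials, n_samples))
eeg_conservative = rng.normal(0.0, 1.0, size=(n_trials, n_samples))

def trial_variability(eeg: np.ndarray) -> np.ndarray:
    """Moment-to-moment variability per trial: SD of the signal over time."""
    return eeg.std(axis=1)

v_lib, v_con = trial_variability(eeg_liberal), trial_variability(eeg_conservative)
print(f"mean variability, liberal condition:      {v_lib.mean():.2f}")
print(f"mean variability, conservative condition: {v_con.mean():.2f}")
print(f"difference (up- vs down-regulation):      {v_lib.mean() - v_con.mean():+.2f}")
```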

The LNDG continues to demonstrate the importance of neural variability for successful human behavior in an ongoing series of studies. Whether one is asked to process a face, remember an object, or solve a complex task, the ability to modulate moment-to-moment variability seems to be required for optimal cognitive performance. "Neuroscientists have seen this 'noise' in the brain for decades but haven't understood what it means. A growing body of work by our group and others highlights that neural variability may indeed serve as an indispensable signal of behavioral success in its own right. With the increasing availability of tools and approaches to measure neural variability, we are excited that such a hypothesis is now immediately testable," says Douglas Garrett, Senior Research Scientist and LNDG group leader. In the next phases of their research, the group plans to examine whether neural variability and behavior can be optimized through brain stimulation, behavioral training, or medication.

Credit: 
Max Planck Institute for Human Development

Quantum collaboration gives new gravity to the mysteries of the universe

Scientists have used cutting-edge research in quantum computation and quantum technology to pioneer a radical new approach to determining how our Universe works at its most fundamental level.

An international team of experts, led by the University of Nottingham, have demonstrated that only quantum and not classical gravity could be used to create a certain informatic ingredient that is needed for quantum computation. Their research "Non-Gaussianity as a signature of a quantum theory of gravity" has been published today in PRX Quantum.

Dr Richard Howl, who led the research during his time at the University of Nottingham's School of Mathematics, said: "For more than a hundred years, physicists have struggled to determine how the two foundational theories of science, quantum theory and general relativity, which respectively describe microscopic and macroscopic phenomena, are unified into a single overarching theory of nature.

"During this time, they have come up with two fundamentally contrasting approaches, called 'quantum gravity' and 'classical gravity'. However, a complete lack of experimental evidence means that physicists do not know which approach the overarching theory actually takes. Our research provides an experimental approach to solving this."

This new research, which is a collaboration between experts in quantum computing, quantum gravity, and quantum experiments, finds an unexpected connection between the fields of quantum computing and quantum gravity and uses it to propose a way to test experimentally whether gravity is quantum rather than classical. The suggested experiment would involve cooling billions of atoms in a millimetre-sized spherical trap to extremely low temperatures such that they enter a new phase of matter, called a Bose-Einstein condensate, and start to behave like a single large, quantum "atom". A magnetic field is then applied to this "atom" so that it feels only its own gravitational pull. With this all in place, if the single gravitating atom demonstrates the key ingredient needed for quantum computation, which is curiously associated with "negative probability", nature must take the quantum gravity approach.
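The "key ingredient ... curiously associated with 'negative probability'" refers to non-Gaussianity, which is commonly diagnosed through negative values of the Wigner quasi-probability distribution. The sketch below is only an illustration of that diagnostic, assuming the QuTiP library and using generic textbook states rather than any model of the proposed experiment: a Gaussian (coherent) state has an everywhere non-negative Wigner function, while a single-photon Fock state does not.

```python
# Illustrative only: Wigner-function negativity as a marker of non-Gaussianity,
# the "negative probability" ingredient mentioned above. Uses QuTiP; the states
# here are generic textbook examples, not a model of the proposed BEC experiment.
import numpy as np
from qutip import basis, coherent, wigner

N = 30                                    # Fock-space truncation
xvec = np.linspace(-4, 4, 200)            # phase-space grid

gaussian_state = coherent(N, alpha=1.0)   # Gaussian state: Wigner function stays non-negative
fock_state = basis(N, 1)                  # single-photon Fock state: non-Gaussian

for label, state in [("coherent (Gaussian)", gaussian_state), ("Fock |1> (non-Gaussian)", fock_state)]:
    W = wigner(state, xvec, xvec)
    print(f"{label}: minimum Wigner value = {W.min():+.4f}")
# The coherent state's minimum stays close to zero, while the Fock state dips well
# below zero, signalling the non-Gaussianity used as a resource for quantum computation.
```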

This proposed experiment uses current technology, involves just a single quantum system, the gravitating "atom", and does not rely on assumptions concerning the locality of the interaction, making it simpler than previous approaches and potentially expediting the delivery of the first experimental test of quantum gravity. Physicists would then, after more than a hundred years of research, finally have information on the true overarching, fundamental theory of nature.

Dr Marios Christodoulou, from the University of Hong Kong who was part of the collaboration, added: "This research is particularly exciting as the experiment proposed would also connect with the more philosophical idea that the universe is behaving as an immense quantum computer that is calculating itself, by demonstrating that quantum fluctuations of spacetime are a vast natural resource for quantum computation."

Credit: 
University of Nottingham

You snooze, you lose - with some sleep trackers

image: Evaluations of commercial sleep technologies for objective monitoring during routine sleeping conditions.

Image: 
WVU Rockefeller Neuroscience Institute

Wearable sleep tracking devices - from Fitbit to Apple Watch to never-heard-of brands stashed away in the electronics clearance bin - have infiltrated the market at a rapid pace in recent years.

And like any consumer products, not all sleep trackers are created equal, according to West Virginia University neuroscientists.

Prompted by a lack of independent, third-party evaluations of these devices, a research team led by Joshua Hagen, director of the Human Performance Innovation Center at the WVU Rockefeller Neuroscience Institute, tested the efficacy of eight commercial sleep trackers.

Fitbit and Oura came out on top in measuring total sleep time, total wake time and sleep efficiency, the results indicate. All other devices, however, either overestimated or underestimated at least one of those sleep metrics, and none of the eight could quantify sleep stages (REM, non-REM) accurately enough to be useful when compared to an electroencephalogram, or EEG, which records electrical activity in the brain.

The study is published in the Nature and Science of Sleep.

"The biggest takeaway is that not all consumer devices are created equal, and for the end user to take care in selecting the technology to suit their application based on the data," Hagen said. "Some devices are currently performing well for total sleep time and sleep efficiency, but the community at large seems to still struggle with sleep staging (deep, REM, light). This is not surprising, since typically brain waves are needed to properly measure this. However, when thinking about what you generally have control over with your sleep - time to bed, time in bed, choices before bed that impact sleep efficiency - these can be accurately measured in some devices."

Researchers observed five healthy adults - two males, ages 26 and 41, and three females, ages 22, 23 and 27 - who participated by wearing the sleep trackers for a combined total of 98 nights.

The commercial sleep technologies displayed lower error and bias values when quantifying sleep/wake states than when quantifying sleep-staging durations. Still, the findings revealed a remarkably high degree of variability in the accuracy of commercial sleep technologies, the researchers stated.
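A simple way to see what "error" and "bias" mean in this context: the sketch below compares each device's nightly total-sleep-time estimate against an EEG-based reference and reports the mean bias and mean absolute error per device. The device names and all values are simulated placeholders, not the study's data.

```python
# Hypothetical sketch of per-device bias and error for total sleep time (TST),
# relative to an EEG-based reference. All values are simulated, not study data.
import numpy as np

rng = np.random.default_rng(2)
n_nights = 98                                    # matches the study's combined night count
reference_tst = rng.normal(420, 40, n_nights)    # EEG reference TST in minutes

# Simulated devices: one nearly unbiased, one overestimating, one underestimating.
devices = {
    "device_A": reference_tst + rng.normal(0, 15, n_nights),
    "device_B": reference_tst + rng.normal(+30, 20, n_nights),
    "device_C": reference_tst + rng.normal(-25, 25, n_nights),
}

for name, estimates in devices.items():
    diff = estimates - reference_tst
    bias = diff.mean()                # systematic over-/underestimation
    mae = np.abs(diff).mean()         # typical size of the nightly error
    print(f"{name}: bias = {bias:+.1f} min, mean absolute error = {mae:.1f} min")
```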

"While technology, both hardware and software, continually advances, it is critical to evaluate the accuracy of these devices in an ongoing fashion," Hagen said. "Updates to hardware, firmware and algorithms happen continuously, and we must understand how this affects accuracy."

Research in this area will evolve with the technology, added Hagen, who himself utilizes four to five sleep devices to keep monitoring his ZZZs.

"I'm a big believer in living the research," he said. "I need to understand what the consumer sees in the smartphone apps, what the usability of the devices is, etc. Without that objective sleep data, you can only rely on how you feel when you wake up - and while that is important, that doesn't tell the whole story. If your alarm goes off and you happen to be in a deep sleep stage, you will wake up very groggy, and could feel as though that sleep was not restorative, when in fact it could have been. It's just not subjectively noticeable right at that moment."

At the end of the day, however, it's up to the user's needs as to which product may be most suited for that person, Hagen added.

"After accuracy, it comes down to logistics. Do you prefer a watch with a display? A ring? A mattress sensor? What is the price of each? Which smartphone app is most appealing? But again, that is if all accuracies are close to equal. If the price is right and the form factor is ideal, but the data accuracy is extremely poor, then those factors don't matter."

The Human Performance Innovation Center works with members of the US military along with collegiate and professional athletes to better understand and optimize human performance, resiliency, and recovery, applying these findings to solutions for the general and clinical populations.

Credit: 
West Virginia University

Proton therapy induces biologic response to attack treatment-resistant cancers

ROCHESTER, Minn. -- Mayo Clinic researchers have developed a novel proton therapy technique to more specifically target cancer cells that resist other forms of treatment. The technique is called LEAP, an acronym for "biologically enhanced particle therapy." The findings are published today in Cancer Research, the journal of the American Association for Cancer Research.

"The human body receives tens of thousands of DNA lesions per day from a variety of internal and external sources," says Robert Mutter, M.D., a radiation oncologist at Mayo Clinic and co-principal investigator of the study." Therefore, cells have evolved complex repair pathways to efficiently repair damaged DNA. Defects in these repair pathways can lead to the development of diseases, including cancer, says Dr. Mutter.

"Defects in the ATM-BRCA1-BRCA2 DNA repair pathway are commonly observed in cancer," says Dr. Mutter. "And breast and ovarian cancer mutations in BRCA1 and BRCA2 repair genes are the most common cause."

Dr. Mutter; Zhenkun Lou, Ph.D., co-principal investigator of the study; and their colleagues studied a novel method of delivering proton therapy to target tumors with inherent defects in the ATM-BRCA1-BRCA2 DNA repair pathway.

"We compared the effects of delivering the same amount of energy or dose into cancer cells using a dense energy deposition pattern with LEAP versus spreading out the same energy more diffusely, which is typical of conventional photon and proton therapy," says Dr. Mutter. "Surprisingly, we discovered that cancers with inherent defects in the ATM-BRCA1-BRCA2 pathway are exquisitely sensitive to a new concentrated proton technique."

Dr. Lou says surrounding normal tissues were spared, and their full complement of DNA repair elements remained intact. "We also found that we could rewire the DNA repair machinery pharmacologically by co-administration of an ATM inhibitor, a regulator of the body's response to DNA damage, to make repair-proficient cells exquisitely sensitive to LEAP," says Dr. Lou.

Dr. Mutter says the distinct physical characteristics of protons allow radiation oncologists to spare nearby normal tissues with superior accuracy, compared to conventional photon-based radiation therapy. "LEAP is a paradigm shift in treatment, whereby newly discovered biologic responses, induced when proton energy deposition is concentrated in cancer cells using novel radiation planning techniques, may enable the personalization of radiotherapy based on a patient's tumor biology," says Dr. Mutter.

Drs. Mutter and Lou say their findings are the product of several years of preclinical development through collaboration with experts in physics, radiation biology and mechanisms of DNA repair. Dr. Mutter and the radiation oncology team at Mayo Clinic are developing clinical trials to test the safety and efficacy of LEAP in multiple tumor types.

Credit: 
Mayo Clinic

Termite gut microbes could aid biofuel production

Wheat straw, the dried stalks left over from grain production, is a potential source of biofuels and commodity chemicals. But before straw can be converted to useful products by biorefineries, the polymers that make it up must be broken down into their building blocks. Now, researchers reporting in ACS Sustainable Chemistry & Engineering have found that microbes from the guts of certain termite species can help break down lignin, a particularly tough polymer in straw.

In straw and other dried plant material, the three main polymers -- cellulose, hemicelluloses and lignin -- are interwoven into a complex 3D structure. The first two polymers are polysaccharides, which can be broken down into sugars and then converted to fuel in bioreactors. Lignin, on the other hand, is an aromatic polymer that can be converted to useful industrial chemicals. Enzymes from fungi can degrade lignin, which is the toughest of the three polymers to break down, but scientists are searching for bacterial enzymes that are easier to produce. In previous research, Guillermina Hernandez-Raquet and colleagues had shown that gut microbes from four termite species could degrade lignin in anaerobic bioreactors. Now, in a collaboration with Yuki Tobimatsu and Mirjam Kabel, they wanted to take a closer look at the process by which microbes from the wood-eating insects degrade lignin in wheat straw, and identify the modifications they make to this material.

The researchers added 500 guts from each of four higher termite species to separate anaerobic bioreactors and then added wheat straw as the sole carbon source. After 20 days, they compared the composition of the digested straw to that of untreated straw. All of the gut microbiomes degraded lignin (up to 37%), although they were more efficient at breaking down hemicelluloses (51%) and cellulose (41%). Lignin remaining in the straw had undergone chemical and structural changes, such as oxidation of some of its subunits. The researchers hypothesized that the efficient degradation of hemicelluloses by the microbes could have also increased the degradation of lignin that is cross-linked to the polysaccharides. In future work, the team wants to identify the microorganisms, enzymes and lignin degradation pathways responsible for these effects, which could find applications in lignocellulose biorefineries.

Credit: 
American Chemical Society

Cosmetic laser may boost effectiveness of certain anti-cancer therapies

BOSTON - Use of a cosmetic laser invented at Massachusetts General Hospital (MGH) may improve the effectiveness of certain anti-tumor therapies and extend their use to more diverse forms of cancer. The strategy was tested and validated in mice, as described in a study published in Science Translational Medicine.

Immune checkpoint inhibitors are important medications that boost the immune system's response against various cancers, but only certain patients seem to benefit from the drugs. The cancer cells of these patients often have multiple mutations that can be recognized as foreign by the immune system, thereby inducing an inflammatory response.

In an attempt to expand the benefits of immune checkpoint inhibitors for additional patients, a team led by David E. Fisher, MD, PhD, director of the Mass General Cancer Center's Melanoma Program and director of MGH's Cutaneous Biology Research Center, conducted experiments in mice with a poorly immunogenic melanoma that is not hindered by immune checkpoint inhibitors. The researchers found that exposing the melanoma cells to ultraviolet radiation caused them to take on more mutations, which made immune checkpoint inhibitors more effective at boosting the immune response against the melanomas. Somewhat unexpectedly, the enhanced response included immune attack against non-mutated proteins in the tumor, a process called "epitope spreading."

"Epitope spreading could be important because many human cancers do not have very high mutation numbers, and correspondingly do not respond well to immunotherapy, so a treatment that can safely target nonmutated proteins could be valuable," Fisher explains.

The researchers next sought to find a substitute for the response triggered by mutations after ultraviolet radiation, since it's likely not safe to add mutations to a patient's tumor as a treatment strategy. "We discovered that use of a cosmetic laser, also known as a fractional laser, developed at MGH, when shined on a tumor, could trigger a form of local inflammation that mimicked the presence of mutations, strongly enhancing immune attacks against nonmutated tumor proteins, thereby curing many mice of tumors that otherwise did not respond to immunotherapy," says Fisher.

The findings suggest that using such a laser approach, or other methods to optimize immune responses against nonmutated targets on tumors, might make immune checkpoint inhibitors effective against currently incurable cancers.

Credit: 
Massachusetts General Hospital

Ceramic fuel cells: Reduced nickel content leads to improved stability and performance?

image: Conceptual diagram of the oxidation-reduction cycle of ceramic fuel cells and a comparison of the deterioration rates of the new-concept and conventional fuel cells

Image: 
Korea Institute of Science and Technology (KIST)

A research team in Korea has developed a ceramic fuel cell that offers both stability and high performance while reducing the required amount of catalyst by a factor of 20. The application range for ceramic fuel cells, which have so far only been used for large-scale power generation due to the difficulties associated with frequent start-ups, can be expected to expand to new fields, such as electric vehicles, robots, and drones.

The Korea Institute of Science and Technology (KIST) announced that a team led by Dr. Ji-Won Son at the Center for Energy Materials Research, through joint research with Professor Seung Min Han at the Korea Advanced Institute of Science and Technology (KAIST), has developed a new technology that suppresses the deterioration brought on by the reduction-oxidation cycle, a major cause of ceramic fuel cell degradation, by significantly reducing the quantity and size of the nickel catalyst in the anode using a thin-film technology.

Ceramic fuel cells, representative of high-temperature fuel cells, generally operate at high temperatures - 800 °C or higher. Therefore, inexpensive catalysts, such as nickel, can be used in these cells, as opposed to low-temperature polymer electrolyte fuel cells, which use expensive platinum catalysts. Nickel usually comprises approximately 40% of the anode volume of a ceramic fuel cell. However, because nickel agglomerates at high temperatures, when the ceramic fuel cell is exposed to the oxidation and reduction processes that accompany stop-restart cycles, uncontrollable expansion occurs. This results in the destruction of the entire ceramic fuel cell structure. This fatal drawback has kept ceramic fuel cells out of applications that require frequent start-ups.

In an effort to overcome this, Dr. Ji-Won Son's team at KIST developed a new anode concept that contains significantly less nickel, just 1/20 the amount in a conventional ceramic fuel cell. The reduced amount of nickel enables the nickel particles in the anode to remain isolated from one another. To compensate for the reduced amount of the nickel catalyst, the nickel's surface area is drastically increased through an anode structure in which nickel nanoparticles are evenly distributed throughout the ceramic matrix using a thin-film deposition process. In ceramic fuel cells using this novel anode, no deterioration or performance degradation was observed, even after more than 100 reduction-oxidation cycles, whereas conventional ceramic fuel cells failed after fewer than 20 cycles. Moreover, the power output of the novel-anode ceramic fuel cells was 1.5 times that of conventional cells, despite the substantial reduction in nickel content.

Dr. Ji-Won Son explained the significance of the study, stating, "Our research into the novel anode fuel cell was systematically conducted at every stage, from design to realization and evaluation, based on our understanding of reduction-oxidation failure, which is one of the primary causes of the destruction of ceramic fuel cells." Dr. Son also commented, "The potential to apply these ceramic fuel cells to fields other than power plants, such as for mobility, is tremendous."

Credit: 
National Research Council of Science & Technology