Small rises in blood glucose trigger big changes in insulin-producing cells

BOSTON - (April 21, 2020) - In diabetes, tiny clusters of insulin-producing "beta cells" in the pancreas don't produce enough of the hormone to keep people healthy, and their blood glucose levels climb. Perhaps unsurprisingly, their beta cells then function very differently than the cells do in people with normal blood glucose levels.

What's surprising is that the changes in beta-cell behavior begin to occur when the blood glucose levels are barely elevated, still within the pre-diabetes range. "These slightly high concentrations of glucose are enough to really confuse the cell," says Gordon Weir, MD, senior investigator and senior staff physician at Joslin Diabetes Center.

In a paper recently published in Molecular Metabolism, Weir's lab laid out a wealth of new data about how beta cells behave at slightly raised levels of blood glucose. The work provides major additional evidence of a "glucose toxicity" effect that helps to drive the development of both type 1 and type 2 diabetes.

Studying beta cells in lab rats whose blood glucose levels were slightly elevated, Weir's lab found changes in gene expression that affect not just how well the cells function but their ability to divide and grow, as well as their vulnerability to autoimmunity and inflammation.

Weir, professor of medicine at Harvard Medical School, has long studied a puzzling type 2 diabetes phenomenon called first-phase insulin release and how this release is shut down as the disease progresses.

In healthy people with normal blood glucose levels, Weir explains, the body responds quickly to glucose with a big spike of insulin secretion. "If then you take people who have slightly higher glucose levels, above 100 mg/dl, which is still not even diabetes, this first-phase insulin release is impaired," he says. "And when the level gets above 115 mg/dl, it's gone. So virtually all the beta cells don't respond to that acute stimulus." Fortunately, the cells eventually do wake up and respond to other stimuli well enough to keep blood glucose in a prediabetic range.

In earlier research, Weir and collaborators studied this phenomenon in rats that were surgically altered to produce slightly elevated blood glucose levels, and found that the rats' beta cells secreted less insulin. In their latest experiments, the Joslin team employed the same approach along with powerful "RNA sequencing" methods that revealed patterns of gene expression in the beta cells, either four weeks or ten weeks after surgery. "We found incredible changes in gene expression, and the higher the glucose, the worse the changes," Weir says.

As expected, genes involved in insulin secretion were highly active in the beta cells. More striking were newly discovered alterations in gene expression that could make the cells more vulnerable. Some of these changes were related to cell growth--healthy beta cells may respond to increased blood glucose levels by copying themselves, but these cells were getting stuck as they tried to divide. Additionally, the cells showed many differences in the expression of genes involved in cell inflammation and autoimmunity.

In type 1 diabetes, immune cells called "T cells" begin to kill off the beta cells and blood glucose levels start to creep up. Weir's team found that in the rats with just slightly greater glucose levels, beta cells showed dramatic increases in the expression of some key genes involved in T cell interactions. That effect could make the beta cells a better target for autoimmune attack, and thus speed the disease.

This finding may improve the understanding of the rapid death of beta cells that patients typically experience just before they are diagnosed with type 1 diabetes, Weir says. It also might shed light on the "honeymoon" period some people experience after diagnosis, in which their blood glucose levels are relatively easy to control. During this period, if insulin treatments can bring the remaining beta cells back down to only slightly elevated glucose levels, the cells can function much better, he says.

Glucose toxicity also might trigger the loss of first-phase insulin release as type 2 diabetes develops, Weir says. Immunologists have often blamed this loss on inflammation of the beta cells, but other studies have shown that less than half of these cells appear to suffer from inflammation. "So somehow these beta cells with no evidence of inflammation end up not secreting properly," Weir says. "We think these higher glucose levels are causing the trouble."

More evidence for the role of higher blood glucose levels in type 2 diabetes comes from the subset of people undergoing gastric bypass surgery who are "cured" of diabetes and return to healthy blood glucose levels. "Their first-phase insulin release also comes right back to normal, which fits perfectly with our hypothesis," he says.

Credit: 
Joslin Diabetes Center

Scientists explore using 'own' immune cells to target infectious diseases including COVID-19

The engineering of specific virus-targeting receptors onto a patient's own immune cells is now being explored by scientists from Duke-NUS Medical School (Duke-NUS) as a potential therapy for controlling infectious diseases, including the COVID-19-causing virus, SARS-CoV-2. This therapy, which has revolutionised the treatment of patients with cancer, has also been used against other infectious diseases such as Hepatitis B virus (HBV), as discussed by the School's researchers in a commentary published in the Journal of Experimental Medicine.

This therapy involves extracting immune cells, called T lymphocytes, from a patient's blood stream and engineering one of two types of receptors onto them: chimeric antigen receptors (CAR) or T cell receptors (TCR). TCRs are naturally found on the surfaces of T lymphocytes while CARs are artificial T cell receptors that are generated in the laboratory. These receptors allow the engineered T lymphocytes to recognise cancerous or virus infected cells.

"This therapy is classically used in cancer treatment, where the lymphocytes of the patients are redirected to find and kill the cancer cells. However, its potential against infectious diseases and specific viruses has not been explored. We argue that some infections, such as HIV and HBV, can be a perfect target for this therapy, especially if lymphocytes are engineered using an approach that keeps them active for a limited amount of time to minimise potential side effects," said Dr Anthony Tanoto Tan, Senior Research Fellow at the Duke-NUS' Emerging Infectious Diseases (EID) programme and the lead author of this commentary.

This type of immunotherapy requires specialised personnel and equipment, and it needs to be administered indefinitely. This makes it cost-prohibitive for treating most types of viral infections. However, in the case of HBV infections, for example, current anti-viral treatments merely suppress viral replication and cure less than 5% of patients. Treating these patients with a combination of anti-virals and CAR/TCR T cells could be a viable option. The team's approach using mRNA electroporation to engineer CAR/TCR T cells limits their functional activity to a short period of time, and hence provides enhanced safety features suited for its deployment in patients with chronic viral diseases.

"We demonstrated that T cells can be redirected to target the coronavirus responsible for SARS. Our team has now begun exploring the potential of CAR/TCR T cell immunotherapy for controlling the COVID-19-causing virus, SARS-CoV-2, and protecting patients from its symptomatic effects," said Professor Antonio Bertoletti from the Duke-NUS' EID programme, who is the senior author of this commentary.

"Infectious diseases remain a leading cause of morbidity and mortality worldwide, necessitating the development of novel and innovative therapeutics. Although immunotherapy is most commonly associated with the treatment of cancer or inflammatory diseases such as arthritis, this commentary accentuates the evolving role of this specialised treatment strategy for various infectious diseases," said Professor Patrick Casey, Senior Vice Dean for Research at Duke-NUS.

Credit: 
Duke-NUS Medical School

A new biosensor for the COVID-19 virus

Jing Wang and his team at Empa and ETH Zurich usually work on measuring, analyzing and reducing airborne pollutants such as aerosols and artificially produced nanoparticles. However, the challenge the whole world is currently facing is also changing the goals and strategies in the research laboratories. The new focus: a sensor that can quickly and reliably detect SARS-CoV-2 - the new coronavirus.

But the idea is not so far removed from the group's previous research work: even before COVID-19 began to spread, first in China and then around the world, Wang and his colleagues were researching sensors that could detect bacteria and viruses in the air. As early as January, they had the idea of further developing this work so that the sensor could reliably identify a specific virus. The sensor will not necessarily replace established laboratory tests, but it could be used as an alternative method for clinical diagnosis, and more prominently to measure the virus concentration in the air in real time, for example in busy places like train stations or hospitals.

Fast and reliable tests for the new coronavirus are urgently needed to bring the pandemic under control as soon as possible. Most laboratories use a molecular method called reverse transcription polymerase chain reaction, or RT-PCR for short, to detect viruses in respiratory infections. This method is well established and can detect even tiny amounts of virus - but it can also be time-consuming and prone to error.

An optical sensor for RNA samples

Jing Wang and his team have developed an alternative test method in the form of an optical biosensor. The sensor combines two different effects to detect the virus safely and reliably: an optical and a thermal one.

The sensor is based on tiny structures of gold, so-called gold nanoislands, on a glass substrate. Artificially produced DNA receptors that match specific RNA sequences of the SARS-CoV-2 are grafted onto the nanoislands. The coronavirus is a so-called RNA virus: Its genome does not consist of a DNA double strand as in living organisms, but of a single RNA strand. The receptors on the sensor are therefore the complementary sequences to the virus' unique RNA sequences, which can reliably identify the virus.
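As a purely hypothetical illustration of this complementarity, a DNA receptor sequence can be derived from an RNA target by taking its reverse complement. The fragment below is invented for demonstration; the real probes target specific SARS-CoV-2 sequences.

```python
# Map each RNA base to its complementary DNA base
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def complementary_dna(rna: str) -> str:
    """Return the DNA strand that base-pairs with the given RNA strand,
    written 5'->3' (i.e., the reverse complement)."""
    return "".join(RNA_TO_DNA[base] for base in reversed(rna.upper()))

rna_fragment = "AUGGCUACG"   # made-up 9-nucleotide viral RNA fragment
probe = complementary_dna(rna_fragment)
print(probe)                 # -> CGTAGCCAT
```

A receptor built this way binds only RNA strands matching the target sequence, which is what makes the detection specific.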

The technology the researchers use for detection is called LSPR, short for localized surface plasmon resonance. This is an optical phenomenon that occurs in metallic nanostructures: when excited, they modulate the incident light in a specific wavelength range and create a plasmonic near-field around the nanostructure. When molecules bind to the surface, the local refractive index within the excited plasmonic near-field changes. A detector located on the back of the sensor measures this change and can thus determine whether the sample contains the RNA strands in question.

Heat increases reliability

However, it is important that only those RNA strands that match exactly the DNA receptor on the sensor are captured. This is where a second effect comes into play on the sensor: the plasmonic photothermal (PPT) effect. If the same nanostructure on the sensor is excited with a laser of a certain wavelength, it produces localized heat.

And how does that help reliability? As already mentioned, the genome of the virus consists of only a single strand of RNA. If this strand finds its complementary counterpart, the two combine to form a double strand - a process called hybridization. The reverse process - a double strand splitting into single strands - is called melting or denaturation, and it happens at a characteristic temperature, the melting temperature. If the ambient temperature is much lower than the melting temperature, strands that are not fully complementary can also connect, which could lead to false test results. If the ambient temperature is only slightly below the melting temperature, however, only complementary strands can join. And this is exactly what the localized heating from the PPT effect provides.
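The melting-temperature argument can be sketched numerically. The estimate below uses the simple Wallace rule (Tm ≈ 2·(A+T) + 4·(G+C), valid only for short oligonucleotides); real assays use more accurate nearest-neighbour models, and the probe sequence here is invented.

```python
def wallace_tm(dna: str) -> int:
    """Approximate melting temperature (degrees C) of a short DNA duplex,
    via the Wallace rule: 2*(A+T) + 4*(G+C)."""
    dna = dna.upper()
    at = dna.count("A") + dna.count("T")
    gc = dna.count("G") + dna.count("C")
    return 2 * at + 4 * gc

probe = "ATGCGTACGTTAGC"      # invented 14-mer probe sequence
print(wallace_tm(probe))      # -> 42

# A mismatched strand pairs imperfectly, so its effective melting
# temperature is lower. Holding the sensor just below the probe's Tm
# therefore keeps perfect matches hybridized while mismatches melt off.
```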

To demonstrate how reliably the new sensor detects the current COVID-19 virus, the researchers tested it with a very closely related virus: SARS-CoV, the virus that emerged in 2003 and triggered the SARS epidemic. The two viruses - SARS-CoV and SARS-CoV-2 - differ only slightly in their RNA. And validation was successful: "Tests showed that the sensor can clearly distinguish between the very similar RNA sequences of the two viruses," explains Jing Wang. And the results are ready in a matter of minutes.

At the moment, however, the sensor is not yet ready to measure the coronavirus concentration in the air, for example in Zurich's main railway station. A number of developmental steps are still needed - for example, a system that draws in the air, concentrates the aerosols in it and releases the RNA from the viruses. "This still needs development work," says Wang. But once the sensor is ready, the principle could be applied to other viruses and help to detect and stop epidemics at an early stage.

Credit: 
Swiss Federal Laboratories for Materials Science and Technology (EMPA)

Developing human corneal tissue

image: Expression of CEC markers in iCEC sheets produced using the novel M+2sup/3ad method. (Image reproduced from Shibata et al., Stem Cell Reports, 2020)

Osaka, Japan - Corneal diseases often require a transplant using corneal tissue from a donor. Now, researchers from Osaka University developed a novel method that could be used to generate corneal tissue in a lab more easily. In a new study published in Stem Cell Reports, they show how culturing eye cells derived from human induced pluripotent stem cells (hiPSCs) on specific proteins helped purify corneal epithelial cells (iCECs), which they then used to manufacture iCEC sheets that could be used for corneal therapy.

hiPSCs have the potential to produce any cell of the body in any number. However, tissue development from hiPSCs still mimics embryonic development, which means that when hiPSCs are directed to develop into tissues consisting of different cell types, the result is a mix of these cells. Unfortunately, this means that specific parts of organs, like the cornea of the eye, are inherently difficult to make from hiPSCs, because the eye consists of corneal, neuronal, retinal and several other cell types. Until now, despite the progress in regenerative medicine using hiPSCs, robust methods that enable the production and purification of corneal cells from hiPSCs have been lacking.

"The cornea is an extremely important part of the eye that helps us see clearly. Unfortunately, damage to the cornea, such as from injury or inflammation, is very difficult to treat," says corresponding author of the study Ryuhei Hayashi. "The goal of our study was to develop a novel method to generate corneal sheets for therapeutic purposes without expensive equipment such as a cell sorter machine."

To achieve their goal, the researchers generated eye cells from hiPSCs and cultured them on five different types of the protein laminin, which occur naturally in the human body including the eye. They found that iCECs had a strong propensity to adhere to three of the five, while non-CECs preferred to bind to one specific type of laminin, called LN211. The researchers then found that the difference between the various eye cells originates from differences in the production of integrins, a family of proteins on the surface of cells that links them to proteins outside cells, such as laminins. One of the laminin proteins, called LN332, even promoted the proliferation in addition to the specific binding of iCECs, allowing iCECs to outcompete non-CECs, and thereby increase their purity.

"While LN332 appeared to be the optimal substrate for iCEC growth, it was not sufficient to achieve the high purity that is required for the production of cell sheets for corneal therapy," says lead author of the study Shun Shibata.

The researchers thus turned to magnetic-activated cell sorting (MACS), which allows the separation of cell populations using a magnetic field after coating specific surface proteins of cells with antibodies that carry magnetic nanoparticles. By first removing cells that produce the protein CD200 and then selecting for the cells that produce the cell surface marker SSEA-4 among the CD200- population, the researchers were able to separate most iCECs from non-CECs. To achieve high purity in iCECs, the researchers cultured these cells on LN221 and then on LN332 substrates, to further remove non-CECs and enable the adhesion and growth of iCECs only, respectively. The result was a highly purified corneal cell sheet.
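The two-step selection logic described above can be sketched as a simple filter. The cell records and marker values below are invented for illustration; real MACS works on antibody-labelled cells in a magnetic column, not on data records.

```python
# Toy cell records: each has a CD200 and an SSEA-4 marker status
cells = [
    {"id": 1, "CD200": True,  "SSEA4": False},  # removed in step 1
    {"id": 2, "CD200": False, "SSEA4": True},   # candidate iCEC
    {"id": 3, "CD200": False, "SSEA4": False},  # removed in step 2
    {"id": 4, "CD200": False, "SSEA4": True},   # candidate iCEC
]

# Step 1: negative selection -- discard CD200+ cells
cd200_negative = [c for c in cells if not c["CD200"]]

# Step 2: positive selection -- keep SSEA-4+ cells from the CD200- pool
icec_candidates = [c for c in cd200_negative if c["SSEA4"]]

print([c["id"] for c in icec_candidates])  # -> [2, 4]
```

The subsequent LN211 and LN332 culture steps then act as a further, adhesion-based purification on this candidate population.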

"These findings show how our novel method, combined with MACS technology, can be used to produce corneal tissues from human stem cells," says Hayashi. "Our vision is to facilitate regenerative medicine in the clinical setting."

Credit: 
Osaka University

How the brain recognizes change

Have you ever gotten a dramatic haircut? For some, perhaps after a heartbreak, it is a silent shout calling on others to spot the change and meet "the new us". So how does the brain spot change? Recognizing new objects, new people, new environments, and new rules is critical for survival. Although animal studies have found that the hippocampus and NMDA receptors, which mediate and regulate excitatory synaptic transmission, are important for novelty recognition, the underlying neural circuit and synaptic mechanisms remain largely unclear.

Led by Professor KIM Eunjoon, a research team at the Center for Synaptic Brain Dysfunctions within the Institute for Basic Science (IBS) in Daejeon, South Korea revealed in an animal study a previously unknown role of a presynaptic adhesion molecule in novelty recognition: it regulates postsynaptic NMDA-type receptor responses at excitatory synapses. "In order to form a synapse and mediate synaptic transmission, postsynaptic receptors should cluster at sites of new synaptic formation and maturation. Little has been known about what 'matures' new synapses and whether synapse maturation affects cognitive brain functions such as novelty recognition. Our data suggest that presynaptic PTPσ promotes postsynaptic NMDA receptor responses, allowing the brain to recognize change," explains Kim.

The brain is composed of a large number of neurons, and these neurons are connected through submicron-size structures known as "synapses". Each individual synapse is composed of two parts: the presynaptic structure that releases neurotransmitter, and the postsynaptic structure that responds to the released neurotransmitter through neurotransmitter receptors. Cell adhesion molecules bridge pre- and postsynaptic specializations. Since there are many different types of synaptic adhesion molecules, it is important for correct pairs of pre- and postsynaptic adhesion molecules to form a complex (bridge) and connect correct partners of neurons. After the initial connections, pre- and postsynaptic adhesion molecules organize the maturation of pre- and postsynaptic structures to mediate synaptic transmission.

One of the key synapse maturation processes is the recruitment of postsynaptic neurotransmitter receptors. However, whether and how presynaptic adhesion molecules trans-synaptically regulate the localization and stabilization of postsynaptic neurotransmitter receptors remained largely unclear. Hypothesizing that there is a key presynaptic adhesion molecule that trans-synaptically regulates postsynaptic receptor responses, the research team knocked out PTPσ, a presynaptic adhesion molecule at excitatory synapses, in mice to see whether and how this deletion affects synapse formation, synaptic function and mouse behavior.

As described in Figure 1 (left), the researchers tested social approach and social novelty recognition in a three-chamber apparatus. A PTPσ KO mouse, or a WT mouse, was exposed to a social target (S1; stranger mouse) and a non-social target (O; object) for 10 min. Both WT and PTPσ KO mice showed normal preference for S1 over O, indicative of normal social interaction. In the next session, testing social novelty recognition, O was replaced with S2 (a novel social stranger). Whereas the WT mouse showed normal preference for S2 over S1, PTPσ KO mice showed impaired social novelty recognition, suggesting that PTPσ promotes normal social novelty recognition. The heat maps represent mouse movements during the indicated behaviors: red colors mark the sites where the mouse stayed longest. The graphs show quantitative analyses of social approach (left) and social novelty recognition (right).

(Right) In the second set of experiments in Figure 1, a PTPσ KO, or WT, mouse was exposed to a stranger mouse (social target, S1) for four consecutive days, during which the subject mouse spent less and less time exploring the stranger because of increasing habituation, indicative of normal social recognition and memory. On day 5, when S1 was replaced with a novel stranger mouse (S2), the WT subject showed strongly increased social exploration, as measured by time spent in target exploration or in the chamber, indicative of normal social novelty recognition and exploration. The PTPσ KO mouse, in contrast, showed unchanged exploration of S2 relative to S1 on day 4 (marked by a red circle), indicative of impaired social novelty recognition and exploration. In control experiments, time spent exploring the empty container was unaffected by experimental conditions.

In sum, they found that PTPσ deletion did not affect excitatory synapse formation but strongly suppressed NMDA receptor responses in the hippocampus, a brain region known to regulate learning and memory. In addition, mice lacking PTPσ showed strongly suppressed novelty recognition in various behavioral tests. For instance, PTPσ-mutant mice failed to recognize new objects, new stranger mice, and new rules. These results suggest that presynaptic PTPσ trans-synaptically regulates postsynaptic NMDA receptor responses and novelty recognition in mice.

"The findings suggest that dephosphorylation of some other presynaptic adhesion molecules and certain trans-synaptic mechanisms may underlie the presynaptic PTPσ-dependent regulation of postsynaptic NMDA receptors. However, the underlying molecular mechanisms still need to be identified," notes the first author, KIM Kyungdeok.

These results were corroborated by essentially similar findings reported by the group of Dr. Thomas Sudhof at Stanford University in the journal eLife at almost the same time.

Credit: 
Institute for Basic Science

Cool down fast to advance quantum nanotechnology

The team, led by physicists at the Technische Universität Kaiserslautern (TUK) in Germany and University of Vienna in Austria, generated the Bose-Einstein condensate (BEC) through a sudden change in temperature: first heating up quasi-particles slowly, then rapidly cooling them down back to room temperature. They demonstrated the method using quasi-particles called magnons, which represent the quanta of magnetic excitations of a solid body.

"Many researchers study different types of Bose-Einstein condensates," said Professor Burkard Hillebrands from TUK, one of the leading researchers in the field of BEC. "The new approach we developed should work for all systems."

Puzzling and spontaneous

Bose-Einstein condensates, named after Albert Einstein and Satyendra Nath Bose, who first proposed their existence, are a puzzling type of matter. They are particles that spontaneously all behave the same way on the quantum level, essentially becoming one entity. Originally used to describe ideal gas particles, Bose-Einstein condensation has since been realized with atoms as well as with quasi-particles such as phonons and magnons.

Creating Bose-Einstein condensates is tricky business because, by definition, they have to occur spontaneously. Setting up the right conditions to generate the condensates means not trying to introduce any kind of order or coherence to encourage the particles to behave the same way; the particles have to do that themselves.

Currently, Bose-Einstein condensates are formed by decreasing the temperature to near absolute zero, or by injecting a large number of particles at room temperature into a small space. However, the room temperature method, which was first reported by Hillebrands and collaborators in 2005, is technically complex and only a few research teams around the world have the equipment and know-how required.

The new method is much simpler. It requires only a heating source and a tiny magnetic nanostructure, about a hundred times thinner than a human hair.

"Our recent progress in the miniaturization of magnonic structures to the nanoscopic scale allowed us to address BEC from a completely different perspective," said Professor Andrii Chumak from the University of Vienna.

The nanostructure is heated up slowly to 200°C to generate phonons, which in turn generate magnons of the same temperature. The heating source is turned off, and the nanostructure rapidly cools down to room temperature in about a nanosecond. When this happens, the phonons escape to the substrate, but the magnons are too slow to react, and remain inside the magnetic nanostructure.

Michael Schneider, lead paper author and a PhD student in TUK's Magnetism Research Group, explained why this happens: "When the phonons escape, the magnons want to reduce energy to stay in equilibrium. Since they cannot decrease the number of particles, they have to decrease energy in some other way. So, they all jump down to the same low energy level."

By spontaneously all occupying the same energy level, the magnons form a Bose-Einstein condensate.
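In textbook terms (a standard result for bosonic gases, not a formula specific to this experiment), the mechanism follows from the Bose-Einstein distribution, which gives the mean occupation of a state with energy $E$:

```latex
n(E) = \frac{1}{e^{(E - \mu)/k_B T} - 1}
```

When the magnon number is held fixed while energy is removed, the chemical potential $\mu$ rises toward the lowest magnon energy $E_{\min}$; as $\mu \to E_{\min}$, the occupation $n(E_{\min})$ grows macroscopically, and the magnons pile into that single level: the condensate.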

"We never introduced coherence in the system," Chumak said, "so this is a very pure and clear way to create Bose-Einstein condensates."

Unexpected result

As is often the case in science, the team made the discovery quite by accident. They had set out to study a different aspect of nanocircuits when strange things began to happen.

"At first we thought something was really wrong with our experiment or data analysis," Schneider said.

After discussing the project with collaborators at TUK and in the U.S., they tweaked some experimental parameters to see if the strange thing was in fact a Bose-Einstein condensate. They verified its presence with spectroscopy techniques.

The finding will primarily interest other physicists studying this state of matter. "But revealing information about magnons and their behavior in a form of macroscopic quantum state at room temperature could have bearing on the quest to develop computers using magnons as data carriers," Hillebrands said.

Chumak stressed the importance of the collaboration within TUK's OPTIMAS Research Group in solving the mystery. Combining his team's expertise in magnonic nanostructures with Hillebrands' expertise in magnon Bose-Einstein condensates was essential. Their research has received significant support from two European Research Council (ERC) grants.

Credit: 
University of Vienna

Novel computational methods provide new insight into daytime alertness in people with sleep apnoea

New polysomnography parameters are better than conventional ones at describing how the severity of oxygen desaturation during sleep affects daytime alertness in patients with obstructive sleep apnoea, according to a new study published in European Respiratory Journal.

Inadequate sleep is widely recognised as a significant public health burden in Western countries. Good quality sleep is also crucial for maintaining neurocognitive performance. An increasing number of occupational accidents and absences, as well as traffic accidents, are caused by factors decreasing sleep quality.

Obstructive sleep apnoea (OSA) is one of the most prevalent sleep disorders, affecting more than 20% of the adult population in Western countries. OSA is associated with daytime symptoms such as shortened daytime sleep latencies, chronic fatigue and sleepiness. Furthermore, OSA is related to poor neurocognitive performance and an inability to sustain attention.

Neurocognitive disorders cover the domains of learning and memory, language, executive functioning and complex attention, among others. One test assessing neurocognitive performance is the psychomotor vigilance task (PVT), which evaluates the domain of complex attention by measuring repeated responses to visual stimuli, thus assessing a person's ability to sustain attention.
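A minimal sketch of how PVT performance is commonly quantified from reaction times. The 500 ms lapse threshold is a widely used convention, and the sample data are invented; the study's exact outcome definitions may differ.

```python
def pvt_summary(rts_ms, lapse_threshold_ms=500):
    """Return mean reaction time (ms) and number of lapses
    (responses at or above the lapse threshold)."""
    lapses = sum(1 for rt in rts_ms if rt >= lapse_threshold_ms)
    mean_rt = sum(rts_ms) / len(rts_ms)
    return mean_rt, lapses

# Invented reaction times (ms) for one test session
rts = [250, 310, 290, 620, 275, 505, 330]
mean_rt, lapses = pvt_summary(rts)
print(round(mean_rt), lapses)  # -> 369 2
```

More lapses and slower mean reaction times indicate an impaired ability to sustain attention.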

The researchers investigated the role of conventional and novel polysomnography parameters in predicting PVT performance in a sample of 743 OSA patients. The patients completed the PVT in the evening before undergoing polysomnography.

All polysomnography recordings were scored manually by experienced sleep technicians who regularly participate in scoring concordance activities. All apnoeas, hypopnoeas and desaturations were scored manually in accordance with established guidelines.

The researchers found that their novel parameters describing the severity of intermittent hypoxaemia, i.e. oxygen desaturation during sleep, are significantly associated with an increased risk of impaired PVT performance, whereas conventional OSA severity and sleep fragmentation metrics are not. This finding is also supported by the group's previous study showing that OSA-related objective daytime sleepiness is more strongly associated with the severity of individual desaturations than the number of apnoeas and hypopnoeas.

According to the researchers, parameters quantifying desaturations based on their characteristic properties have a significant association with impaired vigilance and ability to sustain attention. Furthermore, an increase in the apnoea-hypopnoea index or the oxygen desaturation index does not significantly elevate the odds of having impaired PVT performance.

"Our results highlight the importance of developing methods for a more detailed assessment of OSA severity and comprehensive analysis of PSGs. This would enhance the assessment of OSA severity and improve the estimation of risk and severity of related daytime symptoms," Early Stage Researcher and first author Samu Kainulainen from the University of Eastern Finland concludes.

Credit: 
University of Eastern Finland

Rising carbon dioxide levels will change marine habitats and fish communities

image: A boxfish swimming above dense mats of diatoms in the high-CO2 site along the Shikine volcanic gradient. (Image: Nicolas Floc'h)

Rising carbon dioxide in the atmosphere and the consequent changes created through ocean acidification will cause severe ecosystem effects, impacting reef-forming habitats and the associated fish, according to new research.

Using submerged natural CO2 seeps off the Japanese Island of Shikine, an international team of marine biologists showed that even slightly higher CO2 concentrations than those existing today may cause profound changes in marine habitats and the fish that rely on them.

Writing in Science of The Total Environment, researchers from the Universities of Palermo (Italy), Tsukuba (Japan) and Plymouth (UK) showed that under elevated dissolved CO2 conditions, habitats become dominated by a few species of ephemeral algae.

In such conditions, species such as complex corals and canopy-forming macroalgae mostly disappeared. This shift from complex reefs to habitats dominated by opportunistic low-profile algae led to a 45% decrease of fish diversity, with a loss of coral-associated species and a rearrangement of feeding behaviour.

Lead author Dr Carlo Cattano, from the University of Palermo, said: "Our findings show that the CO2-induced habitat shifts and food web simplification, which we observed along a volcanic gradient in a climatic transition zone, will impact specialist tropical species favouring temperate generalist fish. Our data also suggests that near-future projected ocean acidification levels will oppose the ongoing poleward expansion of corals (and consequently of reef-associated fish) due to global warming."

"Submerged volcanic degassing systems may provide realistic insights into future ocean conditions," added Dr Sylvain Agostini, from Shimoda Marine Research Center. "Studying organism and ecosystem responses off submerged CO2 seeps may help us to understand how the oceans will look in the future if anthropogenic CO2 emissions are not reduced."

In addition to the new findings, the study also reinforces previous research which has demonstrated the ecological effects of habitat changes due to ongoing ocean acidification.

This has shown that decreased seawater pH may impair calcification and accelerate dissolution for many calcifying habitat-formers, while rising CO2 concentrations may favour non-calcifying autotrophs, enhancing primary production and carbon fixation rates.

As a result, there will be losers and winners under increasingly acidified conditions, and fish species that rely on specific resources during their different life stages could disappear. The composition of fish communities would therefore change in the near future, with potentially severe consequences for marine ecosystem functioning and the goods and services these ecosystems provide to humans.

Jason Hall-Spencer, Professor of Marine Biology at the University of Plymouth, said: "Our work at underwater volcanic seeps shows that coastal fish are strongly affected by ocean acidification, with far fewer varieties of fish able to cope with the effects of carbon dioxide in the water. This underlines the importance of reducing greenhouse gas emissions to safeguard ocean resources for the future."

Credit: 
University of Plymouth

Continued CO2 emissions will impair cognition

As the 21st century progresses, rising atmospheric carbon dioxide (CO2) concentrations will cause urban and indoor levels of the gas to increase, and that may significantly reduce our basic decision-making ability and complex strategic thinking, according to a new CU Boulder-led study. By the end of the century, people could be exposed to indoor CO2 levels up to 1400 parts per million--more than three times today's outdoor levels, and well beyond what humans have ever experienced.

"It's amazing how high CO2 levels get in enclosed spaces," said Kris Karnauskas, CIRES Fellow, associate professor at CU Boulder and lead author of the new study published today in the AGU journal GeoHealth. "It affects everybody--from little kids packed into classrooms to scientists, business people and decision makers to regular folks in their houses and apartments."

Shelly Miller, professor in CU Boulder's school of engineering and coauthor adds that "building ventilation typically modulates CO2 levels in buildings, but there are situations when there are too many people and not enough fresh air to dilute the CO2." CO2 can also build up in poorly ventilated spaces over longer periods of time, such as overnight while sleeping in bedrooms, she said.

Put simply, when we breathe air with high CO2 levels, the CO2 levels in our blood rise, reducing the amount of oxygen that reaches our brains. Studies show that this can increase sleepiness and anxiety, and impair cognitive function.

We all know the feeling: Sit too long in a stuffy, crowded lecture hall or conference room and many of us begin to feel drowsy or dull. In general, CO2 concentrations are higher indoors than outdoors, the authors wrote. And outdoor CO2 in urban areas is higher than in pristine locations. The CO2 concentrations in buildings result both from the gas exchanged with the outdoor air and from the CO2 generated by building occupants as they exhale.

Atmospheric CO2 levels have been rising since the Industrial Revolution, reaching a peak of 414 ppm at NOAA's Mauna Loa Observatory in Hawaii in 2019. In a scenario in which people on Earth do not reduce greenhouse gas emissions, the Intergovernmental Panel on Climate Change predicts outdoor CO2 levels could climb to 930 ppm by 2100. And urban areas typically run around 100 ppm higher than this background.

Karnauskas and his colleagues developed a comprehensive approach that combines predicted future outdoor CO2 concentrations, the impact of localized urban emissions, a model of the relationship between indoor and outdoor CO2 levels, and the impact on human cognition. They found that if outdoor CO2 concentrations do rise to 930 ppm, indoor concentrations would be nudged to a harmful level of 1400 ppm.
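The chain the authors describe (global background, urban enhancement, indoor build-up) can be illustrated with a toy additive calculation. The offsets below are illustrative values chosen to reproduce the figures quoted in this article; they are not the study's fitted model parameters, and the function name is an assumption.

```python
def indoor_co2(outdoor_ppm, urban_offset_ppm=100.0, occupant_offset_ppm=370.0):
    """Toy estimate of indoor CO2: global outdoor background (ppm)
    plus a typical urban enhancement plus occupant-generated build-up."""
    return outdoor_ppm + urban_offset_ppm + occupant_offset_ppm

# An outdoor background of 930 ppm (the IPCC high-emissions projection
# for 2100) yields the 1400 ppm indoor level discussed above.
print(indoor_co2(930))  # -> 1400.0
```

The real relationship between outdoor and indoor levels depends on ventilation rates and occupancy, so the occupant term here stands in for a much more detailed building model.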

"At this level, some studies have demonstrated compelling evidence for significant cognitive impairment," said Anna Schapiro, assistant professor of psychology at the University of Pennsylvania and a coauthor on the study. "Though the literature contains some conflicting findings and much more research is needed, it appears that high level cognitive domains like decision-making and planning are especially susceptible to increasing CO2 concentrations."

In fact, at 1400 ppm, CO2 concentrations may cut our basic decision-making ability by 25 percent, and complex strategic thinking by around 50 percent, the authors found.

The cognitive impacts of rising CO2 levels represent what scientists call a "direct" effect of the gas' concentration, much like ocean acidification. In both cases, elevated CO2 itself--not the subsequent warming it also causes--is what triggers harm.

The team says there may be ways to adapt to higher indoor CO2 levels, but the best way to prevent concentrations from reaching harmful levels is to reduce fossil fuel emissions. This would require globally adopted mitigation strategies such as those set forth by the Paris Agreement of the United Nations Framework Convention on Climate Change.

Karnauskas and his coauthors hope these findings will spark further research on 'hidden' impacts of climate change, such as the effect on cognition. "This is a complex problem, and our study is at the beginning. It's not just a matter of predicting global (outdoor) CO2 levels," he said. "It's going from the global background emissions, to concentrations in the urban environment, to the indoor concentrations, and finally the resulting human impact. We need even broader, interdisciplinary teams of researchers to explore this: investigating each step in our own silos will not be enough."

Credit: 
University of Colorado at Boulder

Cost-effective canopy protects health workers from COVID infection during ventilation

image: The image depicts the constant flow canopy system described in the research letter. The flexible plastic canopy forms an air chamber that covers the upper part of the patient's body, which is connected to the filtration system that cleans the air and pushes it back out.

Image: 
Photo courtesy of Prof. Yochai Adir.

Researchers have designed a cost-effective, constant flow plastic canopy system that can help to protect healthcare workers who are at risk of airborne coronavirus infection while delivering non-invasive ventilation or oxygen via high flow nasal cannula (HFNC), according to a research letter published in the European Respiratory Journal [1].

Ventilatory support with non-invasive ventilation or HFNC is often used to treat people with respiratory failure, a complication of severe coronavirus disease, as it helps patients with breathing difficulties to breathe by pushing pressurised air into the lungs via a mask covering the mouth and/or nose. This can alleviate the need for in-demand invasive mechanical ventilators, but there are concerns about the increased risk of infection for healthcare workers who treat patients with non-invasive respiratory support.

Professor Yochai Adir, from the Lady Davis Carmel Medical Center Pulmonary Division, Israel, led the research team. He explained: "The current crisis has resulted in a shortage of access to negative pressure facilities and invasive mechanical ventilators. This means we must adapt, so that we can continue to treat patients as best we can while protecting the health and safety of healthcare workers.

"Non-invasive ventilation is one solution for this, but it may increase the risk of infection for healthcare workers, as virus particles can become airborne due to mask leakage, the speed and direction of the air flow, or from patient coughing. The constant flow canopy system that we designed and built addresses this risk, by eliminating healthcare workers' exposure to this potentially dangerous situation."

The flexible plastic canopy forms an air chamber that covers the upper part of the patient's body. The canopy is connected to a system containing a high-quality air filter that cleans the air, and an electrical fan that creates negative pressure, pulling the filtered air to the open air. The canopy system can be used to support up to four patients at a time.

The researchers say the plastic used for the canopy design does not allow fluid or particles to pass through it and that it has been tested against international standards, which score effectiveness based on the number and size of airborne particles that pass through the material.

Professor Adir said: "We installed this cost-effective system within our hospital and found it supports the delivery of non-invasive ventilatory support with minimal risk of infection for the medical staff. It enables alternatives to mechanical ventilation for patients with moderate to severe coronavirus infection, who may otherwise go untreated because of a shortage of equipment."

The researchers say the physical barrier between patients and medical staff created by the canopy could make administering treatment challenging, and the size of the canopy system can be difficult to install in small treatment rooms.

Professor Leo Heunks is an expert in intensive care medicine from the European Respiratory Society and was not involved in the study. He said: "Critical care systems are facing unprecedented challenges because of the coronavirus pandemic, so it is vital that we come up with ways to alleviate the pressure on healthcare systems without compromising health worker safety. The design outlined in this research paper offers an interesting approach for treating patients who require breathing support, and importantly it has a clear focus on protecting the health of frontline medical staff."

Credit: 
European Respiratory Society

Silent, airborne transmission likely to be a key factor in scarlet fever outbreaks

New research due to be presented at the European Congress of Clinical Microbiology and Infectious Diseases (ECCMID)* shows that airborne transmission, both from symptomatic patients and from those who are shedding the bacteria with no symptoms, may be a key factor in the spread of scarlet fever. The study, funded by Action Medical Research, was led by Prof Shiranee Sriskandan at Imperial College London and Dr Rebecca Cordery at Public Health England, London, UK.

Scarlet fever is a disease caused by Group A Streptococcus infection that usually affects young children. The first signs of the illness can be flu-like symptoms, including a high temperature of 38°C or above, a sore throat and a pinkish/red rash. England has experienced an upsurge in scarlet fever and associated outbreaks in recent years. Transmission mechanisms are poorly understood but are thought to involve respiratory droplet spread.

The dates of the scarlet fever season in England follow those of the school year (from September of one year to August of the next). Scarlet fever notifications in England increased in line with the usual seasonal pattern up to week 10 of 2020, with a marked decrease in numbers since then (see full abstract link below). A total of 11,739 notifications of scarlet fever have been received to date this season in England (weeks 37 to 12, 2019/20) compared to an average of 10,355 (range: 7,897 to 17,454) for this same period in the previous five years.

In recent weeks, notifications have declined below the seasonal average, with the authors of Public Health England's regular update on scarlet fever saying this could be due to social distancing measures decreasing transmission or lower attendance at GPs in light of COVID-19 messaging to the public to stay at home except for certain key reasons. They note: "This is of concern given the importance of prompt treatment with antibiotics to limit further spread as well as reducing risk of potential complications."

In this study, the authors undertook a prospective, observational study in schools and nurseries in London with scarlet fever outbreaks to assess impact of antibiotic treatment on detection of Group A Streptococcus (GAS) in cases; prevalence of GAS carriage in classroom and household contacts using throat swabs; and presence of GAS in the classroom environment. Transmissibility was assessed using cough plates, hand swabs, environmental swabs and air settle plates with genome sequencing to confirm chains of transmission. Cases were tested on days 1-3 of antibiotics, then weekly for 3-4 weeks. Contacts were tested weekly over 3-4 weeks.

Six classes, comprising 11 scarlet fever cases, 17 household contacts, and 142 classroom contacts, were recruited. Of the 10 cases on treatment, all had negative samples after starting antibiotics; however, 4 of the 10 became GAS-positive again by week 2 or 3. One untreated case remained positive. GAS was identified in 3 out of 17 household contacts.

GAS prevalence in classroom contacts was high and increased between weeks 1 and 2 in all outbreaks (week 1, 0-19%; week 2, 9-56%; week 3, 18-50%). Twenty-seven contacts (19%) were GAS-positive on two samples, and 4 on three. Surface swabs (n=60) taken from various places in 3 classrooms yielded GAS in only one instance. Genome sequencing showed clonality of isolates (meaning they were near-identical) within the three classes tested, confirming that recent transmission accounted for the high carriage. A newly emergent lineage (called M1UK**) accounted for 2 of the 3 outbreaks.

Of 28 classroom contacts with GAS-positive throat swabs, who were tested for transmissibility, 6 (21%) had positive cough plates and/or hand swabs, of whom three remained GAS-positive for 3 weeks. These classroom contacts were reported to be symptom free. Settle plates were GAS-positive in 2/3 classrooms tested despite being placed in elevated locations.

The authors say: "GAS transmission within classrooms was extensive despite short-term effectiveness of antibiotic treatment. Transmission may occur prior to receipt of antibiotics, underlining the importance of rapid diagnosis and treatment. Despite exclusion of cases and guideline adherence, heavy shedding of GAS by classroom contacts, who may represent subclinical infection or carriage, is likely to perpetuate outbreaks. Airborne transmission appears to be a key factor in scarlet fever, in contrast to environmental contamination."

Credit: 
European Society of Clinical Microbiology and Infectious Diseases

Infant temperament predicts personality more than 20 years later

Researchers investigating how temperament shapes adult life-course outcomes have found that behavioral inhibition in infancy predicts a reserved, introverted personality at age 26. For those individuals who show sensitivity to making errors in adolescence, the findings indicated a higher risk for internalizing disorders (such as anxiety and depression) in adulthood. The study, funded by the National Institutes of Health and published in Proceedings of the National Academy of Sciences, provides robust evidence of the impact of infant temperament on adult outcomes.

"While many studies link early childhood behavior to risk for psychopathology, the findings in our study are unique," said Daniel Pine, M.D., a study author and chief of the NIMH Section on Development and Affective Neuroscience. "This is because our study assessed temperament very early in life, linking it with outcomes occurring more than 20 years later through individual differences in neural processes."

Temperament refers to biologically based individual differences in the way people emotionally and behaviorally respond to the world. During infancy, temperament serves as the foundation of later personality. One specific type of temperament, called behavioral inhibition (BI), is characterized by cautious, fearful, and avoidant behavior toward unfamiliar people, objects, and situations. BI has been found to be relatively stable across toddlerhood and childhood, and children with BI have been found to be at greater risk for developing social withdrawal and anxiety disorders than children without BI.

Although these findings hint at the long-term outcomes of inhibited childhood temperament, only two studies to date have followed inhibited children from early childhood to adulthood. The current study, conducted by researchers at the University of Maryland, College Park, the Catholic University of America, Washington, D.C., and the National Institute of Mental Health, recruited its participant sample at 4 months of age and characterized the infants for BI at 14 months (almost two years earlier than in the previously published longitudinal studies). In addition, unlike the two previously published studies, the researchers included a neurophysiological measure to try to identify individual differences in risk for later psychopathology.

The researchers assessed the infants for BI at 14 months of age. At age 15, these participants returned to the lab to provide neurophysiological data. These neurophysiological measures were used to assess error-related negativity (ERN), which is a negative dip in the electrical signal recorded from the brain that occurs following incorrect responses on computerized tasks. Error-related negativity reflects the degree to which people are sensitive to errors. A larger error-related negativity signal has been associated with internalizing conditions such as anxiety, and a smaller error-related negativity has been associated with externalizing conditions such as impulsivity and substance use. The participants returned at age 26 for assessments of psychopathology, personality, social functioning, and education and employment outcomes.

"It is amazing that we have been able to keep in touch with this group of people over so many years. First their parents, and now they, continue to be interested and involved in the work," said study author Nathan Fox, Ph.D., of the University of Maryland Department of Human Development and Quantitative Methodology.

The researchers found that BI at 14 months of age predicted, at age 26, a more reserved personality, fewer romantic relationships in the past 10 years, and lower social functioning with friends and family. BI at 14 months also predicted higher levels of internalizing psychopathology in adulthood, but only for those who also displayed larger error-related negativity signals at age 15. BI was not associated with externalizing general psychopathology or with education and employment outcomes.

This study highlights the enduring nature of early temperament on adult outcomes and suggests that neurophysiological markers such as error-related negativity may help identify individuals most at risk for developing internalizing psychopathology in adulthood.

"We have studied the biology of behavioral inhibition over time and it is clear that it has a profound effect influencing developmental outcome," concluded Dr. Fox.

Although this study replicates and extends past research in this area, future work with larger and more diverse samples is needed to understand the generalizability of these findings.

Credit: 
NIH/National Institute of Mental Health

A method for predicting antiviral drug or vaccine targets

image: The leading peer-reviewed journal in computational biology and bioinformatics, publishing in-depth statistical, mathematical, and computational analysis of methods, as well as their practical impact.

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, April 8, 2020--A novel method to predict the most promising targets for antiviral drugs or vaccines is based on the conformational changes viral glycoproteins go through during the process of recognition and binding to the host cell. This prediction method, which targets backbone hydrogen bonds for motifs with the highest free energy, is published in Journal of Computational Biology, a peer-reviewed publication from Mary Ann Liebert, Inc., publishers.

Robert Penner, the René Thom Chair of Mathematical Biology at the Institut des Hautes Études Scientifiques (Bures-sur-Yvette, France) and University of California at Los Angeles, is the author of the article entitled "Backbone Free Energy Estimator Applied to Viral Glycoproteins."

During viral entry and absorption into a host cell, the virus typically undergoes dramatic reconformation to form fusion/penetration motifs. Penner presents a general method to predict the residues of high conformational activity from the three-dimensional structure of viral glycoproteins. The method involves analyzing the hydrogen bonds of a protein and computing the free energy differences of corresponding features to identify regions where conformational change is most likely. The methods presented in the paper are currently being implemented for the spike glycoprotein of SARS CoV-2, the virus that causes COVID-19.

"It is nothing short of miraculous for a mathematical idea to directly lead to a practical implication, but the pay-off can be enormous, as happened with Maxwell's theory of electromagnetic fields -- this brought radio and television -- and with Turing's and von Neumann's ideas on the theory of computation, which opened the portal to our digital world," says Misha Gromov of the Institut des Hautes Études Scientifiques (Bures-sur-Yvette, France) and New York University, renowned mathematician and winner of the Abel Prize (often described as the Nobel Prize of mathematics). "Such an idea has a chance to work only when its author, besides being a brilliant mathematician, has a broad scientific vision, which is rare among even the best mathematicians of our time. Robert Penner is this kind of rare mathematician, and his method of detecting patterns in protein structures, a fruit of purely mathematical thinking, is different from everybody else's in the world. There is a positive chance this method will work and accelerate the development of antiviral drugs or vaccines. Only an experiment can tell if this is so."

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

Revealed: the secret life of godwits

image: This is a black-tailed godwit in a Dutch meadow, with colour rings and a geolocator on its legs.
The picture was used on the cover of the April issue of the Journal of Avian Biology.

Image: 
Jan van de Kam

To find out more about birds such as the black-tailed godwit, ecologists have been conducting long-term population studies using standardized information on reproductive behaviour, such as dates of egg-laying or hatching and levels of chick survival. New information gathered using geolocators on godwits in the Netherlands shows that traditional observation methods can lead to inaccurate data. The study was published in the April issue of the Journal of Avian Biology.

PhD student Mo Verhoeven from the University of Groningen used geolocators attached to the legs of black-tailed godwits to follow their migration pattern. 'These consist of a tiny chip that records light intensity every five minutes, together with the exact date and time,' explains Verhoeven. This combination allows him to determine longitude and latitude from the times of sunrise and sunset. Geolocators can collect data for up to 26 months, and the information is read out after the chip is retrieved.
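The positioning principle Verhoeven describes (longitude from the timing of solar noon, latitude from day length) can be sketched as follows. This is an illustrative reconstruction of standard light-level geolocation, not the study's actual analysis pipeline; the function name, its inputs and the example values are assumptions.

```python
import math

def geolocate(sunrise_utc, sunset_utc, solar_declination_deg):
    """Estimate position from sunrise/sunset times (decimal hours, UTC).

    Longitude: solar noon falls at 12:00 UTC on the Greenwich meridian
    and shifts by one hour per 15 degrees of longitude.
    Latitude: day length fixes the sunrise hour angle H through the
    sunrise equation cos(H) = -tan(latitude) * tan(declination).
    """
    solar_noon = (sunrise_utc + sunset_utc) / 2.0
    longitude = (12.0 - solar_noon) * 15.0  # degrees east of Greenwich

    half_day = (sunset_utc - sunrise_utc) / 2.0
    hour_angle = math.radians(half_day * 15.0)
    declination = math.radians(solar_declination_deg)
    latitude = math.degrees(
        math.atan(-math.cos(hour_angle) / math.tan(declination)))
    return longitude, latitude

# A 16-hour day around the June solstice (declination ~23.44 degrees)
# with solar noon at 12:00 UTC gives longitude 0 and a latitude of
# roughly 49 degrees north.
print(geolocate(4.0, 20.0, 23.44))
```

In practice the chip only records light intensity, so sunrise and sunset must first be estimated from light-level thresholds, and positions near the equinoxes are unreliable because day length then barely varies with latitude.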

Shaded periods

Geolocators are generally used to learn about where birds migrate to and when they migrate. However, for this study, Verhoeven used the geolocators in a different way. 'During the nesting season, the geolocators registered shaded periods during the day,' says Verhoeven. This happens when a bird is sitting on a nest, with its legs folded under its body. 'We were, therefore, able to determine when these birds were nesting.' This was a valuable by-product: accurate data on nesting is difficult to obtain, since every observation of a nesting bird also disturbs its behaviour.

The main revelation from Verhoeven's analysis is that the godwits in this study always started a second nest if their first nest failed. 'So far, estimates of this re-nesting varied from 20 to 45 percent,' says Verhoeven. Based on traditional observation techniques, many of these second attempts were previously thought to be first nests, or they were not discovered at all. 'During the season, our observations shifted focus from detecting nests to following the development of chicks,' explains Verhoeven. His data also mean that counting nests to estimate the breeding population is not very accurate since many birds build a second nest.

Conservation

The study also revealed a firm final date for re-nesting attempts: after 18 May, no godwit built another nest if its previous clutch failed. 'There is something really cool here,' says Verhoeven. 'Back in 1954, a biologist experimented by destroying godwit nests. He noticed that they would not re-nest after 20 May, which is almost the same date! It is really intriguing that there is such a strict end to the nesting season. What causes it? That's something I would like to find out.'

The fact that these birds make a second attempt after their first clutch is lost means that the breeding season for godwits is longer than previously thought. That has consequences for conservation. 'These second attempts are on average less successful than the first clutches,' says Verhoeven. 'But they are not trivial.' In conservation areas, farmers delay mowing the meadows in which the birds nest until some time in June. 'But the young from the second attempts may still be there with their parents in early July.'

Magic

By re-purposing the information from the geolocators, Verhoeven and his colleagues revealed an unknown part of godwit life. 'This also confronts us with our very limited understanding of the ecology of these birds and the bias in our observations,' he says. This is succinctly described by the team at the end of the article in the Journal of Avian Biology: 'Ultimately, part of the magic of ecology is its complexity and our permanent inability to fully understand that complexity.'

Credit: 
University of Groningen

Mental health preparedness among older youth in foster care

An estimated 25,000 to 28,000 youth transition out of foster care each year in the United States. In a new study, interviews with hundreds of 17-year-olds in the California foster care system reveal not only elevated mental health counseling and medication use, but also that youth with indicated mental health needs feel less prepared to manage their mental health.

The study was conducted by a team of researchers from across the country led by Professor Michelle Munson of the Silver School of Social Work at New York University. It was just published in the Journal of Adolescent Health, and provides an updated look at counseling and medication use among teens in foster care, and reports on how prepared 17-year-olds feel to manage their mental health as they near adulthood.

"As far as we know, this is the first study to ask 17-year-olds in foster care how prepared they feel to manage their mental health," Munson wrote. "These results are important as the [child welfare] field continues to develop new supports for older youth in foster care, and as society continues to strive to help individuals increasingly maintain their mental health in young adulthood."

Rising rates of mental health symptoms among children and adolescents are a matter of widespread concern.

Not surprisingly, mental disorders are elevated among youth in foster care. Among these youth, the transition to adulthood has been shown to be especially challenging. One contributing factor is the curtailment of support from professional child welfare and mental health workers in the youth's life.

Munson and her coauthors -- including Mark Courtney, Samuel Deutsch Professor at the University of Chicago and Principal Investigator of the California Youth Transitions to Adulthood Study (CalYOUTH), from which the data were drawn; Nate Okpych of the University of Connecticut; and Colleen Katz of Hunter College, CUNY -- interviewed 727 youth in foster care at age 17 about their mental health, service use, and preparedness to manage their mental health.

As part of the structured interviews, the researchers asked the youth how prepared they felt to manage their mental health: that is, finding ways to relax when they felt stressed out; being able to calm down when they became angry or upset; talking to others about things that were bothering them; knowing how to make an appointment with a psychiatrist or therapist, and following through with their provider's instructions.

Among this representative sample, more than half said they were using counseling services, and almost a third were using medications. Youth with a current mental disorder were more likely to receive mental health services, but said they felt less prepared to manage their mental health than those without a current mental disorder.

Youth who resided in largely rural counties were more likely to receive mental health services, compared to their counterparts in larger counties such as Los Angeles County. Authors suggested this may be due to variation in caseload size.

Additionally, youth who identified as 100% heterosexual were less likely to receive counseling and reported feeling more prepared to manage their mental health, than were youth who identified as not 100% heterosexual.

These and other findings can inform the development and delivery of mental health interventions designed for youth with particular characteristics, according to the study.

Credit: 
New York University