
Introducing 'sci-Space,' a new method for embryo-scale, single-cell spatial transcriptomics

Researchers introduce "sci-Space," a new approach to spatial transcriptomics that retains single-cell resolution and spatial heterogeneity at scales much larger than previous methods. They used their approach to build single-cell atlases of whole sections of mouse embryos at 14 days of development.

Single-cell RNA sequencing methods have led to great advances in understanding how organisms and complex tissues develop. Although cells' spatial organization is central to normal development, homeostasis, and pathophysiology, many single-cell RNA sequencing methods lose valuable contextual spatial information. Those that preserve spatial context between cells can be limited to a specific set of genes and/or a small tissue area.

To overcome these challenges, Sanjay Srivatsan and colleagues developed sci-Space, a spatial transcriptomic approach that retains single-cell resolution while also resolving the spatial context of cells at larger scales. Srivatsan et al.'s approach uses a grid of barcoded oligos - short, single strands of synthetic DNA - that can be transferred from a slide to the nuclei of an overlaid frozen tissue section. According to the authors, sci-Space allows both the spatial origin and the transcriptome to be obtained for thousands of cells per slide.

To demonstrate their new technique, Srivatsan et al. applied sci-Space to developing mouse embryos. By capturing spatial coordinates and whole transcriptomes of nearly 120,000 cells, the authors assembled a spatially resolved single-cell atlas of whole day 14 mouse embryo sections and revealed spatially patterned gene expression across a variety of cell types, including differentiating neurons.
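As a rough illustration of the data-joining step this implies - matching each cell's recovered spot barcode to a known position on the slide grid so its transcriptome can be placed in the tissue section - here is a minimal, hypothetical sketch; the barcode sequences, coordinates and gene counts are invented, and this is not the authors' actual pipeline.

```python
# Minimal, hypothetical sketch (not the sci-Space pipeline): join spatial
# barcodes, transferred from a gridded slide into nuclei, back to (x, y)
# positions so each cell's transcriptome can be placed in the section.

# Slide layout: each spot barcode maps to a known grid coordinate (made up).
spot_grid = {
    "AACGTGAT": (0, 0),
    "TTACCGTA": (0, 1),
    "GGCATCAA": (1, 0),
    "CCTGAATG": (1, 1),
}

# Sequencing output: per cell, the recovered spot barcode plus gene counts (made up).
cells = [
    {"cell_id": "cell_001", "spot_barcode": "AACGTGAT", "counts": {"Sox2": 12, "Tubb3": 3}},
    {"cell_id": "cell_002", "spot_barcode": "CCTGAATG", "counts": {"Sox2": 1, "Tubb3": 20}},
]

def place_cells(cells, spot_grid):
    """Attach a spatial coordinate to every cell whose spot barcode is known."""
    placed = []
    for cell in cells:
        xy = spot_grid.get(cell["spot_barcode"])
        if xy is not None:
            placed.append({"cell_id": cell["cell_id"], "xy": xy, "counts": cell["counts"]})
    return placed

for cell in place_cells(cells, spot_grid):
    print(cell["cell_id"], "->", cell["xy"])
```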

Credit: 
American Association for the Advancement of Science (AAAS)

Last ice-covered parts of summertime Arctic Ocean vulnerable to climate change

image: The study looked at the Wandel Sea north of Greenland, which is inside what's known as the "Last Ice Area" of the Arctic Ocean.

Image: 
Schweiger et al./Communications Earth & Environment

In a rapidly changing Arctic, one area might serve as a refuge - a place that could continue to harbor ice-dependent species when conditions in nearby areas become inhospitable. This region north of Greenland and the islands of the Canadian Arctic Archipelago has been termed the Last Ice Area. But research led by the University of Washington suggests that parts of this area are already showing a decline in summer sea ice.

Last August, sea ice north of Greenland showed its vulnerability to the long-term effects of climate change, according to a study published July 1 in the open-access journal Communications Earth & Environment.

"Current thinking is that this area may be the last refuge for ice-dependent species. So if, as our study shows, it may be more vulnerable to climate change than people have been assuming, that's important," said lead author Axel Schweiger, a polar scientist at the UW Applied Physics Laboratory.

How the last ice-covered regions will fare matters for polar bears that use the ice to hunt for seals, for seals that use the ice to build dens for their young, and for walruses that use the ice as a platform for foraging.

"This area has long been expected to be the primary refuge for ice-dependent species because it is one of the last places where we expect summer sea ice to survive in the Arctic," said co-author Kristin Laidre, a principal scientist at the UW Applied Physics Laboratory.

The study focused on sea ice in August 2020 in the Wandel Sea, an area that used to be covered year-round in thick, multi-year ice.

"Sea ice circulates through the Arctic, it has a particular pattern, and it naturally ends up piling up against Greenland and the northern Canadian coast," Schweiger said. "In climate models, when you spin them forward over the coming century, that area has the tendency to have ice survive in the summer the longest."

Like other parts of the Arctic Ocean, the ice here has been gradually thinning, though last spring's sea ice in the Wandel Sea was on average slightly thicker than in previous years. Yet satellite images showed a record low of just 50% sea ice concentration on Aug. 14, 2020.

The new study uses satellite data and sea ice models to determine what caused last summer's record low. It finds that about 80% of the decline was due to weather-related factors, like winds that break up and move the ice around. The remaining 20% was due to the longer-term thinning of the sea ice caused by global warming.

The model simulated the period from June 1 to Aug. 16 and found that unusual winds moved sea ice out of the area, but that the multiyear thinning trend also contributed, by allowing more sunlight to warm the ocean. Then, when winds picked up, this warm water was able to melt the nearby ice floes.
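To make the 80/20 split above concrete, here is a minimal, hypothetical sketch of how such an attribution can be expressed by comparing the observed anomaly with a run in which the long-term thinning trend is removed; the numbers are invented for illustration and this is not the study's actual model setup.

```python
# Illustrative attribution sketch, assuming (hypothetically) three values:
# a climatological ice concentration, the observed 2020 value, and the value
# a model produces when the long-term thinning trend is removed.

climatology = 0.95          # typical mid-August ice concentration (fraction, made up)
observed_2020 = 0.50        # record-low concentration reported for Aug. 14, 2020
no_thinning_run = 0.59      # hypothetical model run with long-term thinning removed

total_anomaly = climatology - observed_2020    # full departure from climatology
weather_part = climatology - no_thinning_run   # departure explained by winds alone
thinning_part = total_anomaly - weather_part   # remainder attributed to thinning

print(f"weather-related share: {weather_part / total_anomaly:.0%}")   # ~80%
print(f"thinning-related share: {thinning_part / total_anomaly:.0%}") # ~20%
```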

The record-low ice concentration in 2020 was surprising because the average ice thickness at the beginning of summer was actually close to normal.

"During the winter and spring of 2020 you had patches of older, thicker ice that had drifted into there, but there was enough thinner, newer ice that melted to expose open ocean," Schweiger said. "That began a cycle of absorbing heat energy to melt more ice, in spite of the fact that there was some thick ice. So in years where you replenish the ice cover in this region with older and thicker ice, that doesn't seem to help as much as you might expect."

The results raise concerns about the Last Ice Area but can't immediately be applied to the entire region, Schweiger said. Also unknown is how more open water in this region would affect ice-dependent species over the short and long terms.

"We know very little about marine mammals in the Last Ice Area," said Laidre, who is also an associate professor in the School of Aquatic and Fishery Sciences. "We have almost no historical or present-day data, and the reality is that there are a lot more questions than answers about the future of these populations."

Credit: 
University of Washington

Multimodality care improves treatment outcomes for aggressive prostate cancer

FINDINGS

Men with high-risk prostate cancer with at least one additional aggressive feature have the best outcomes when treated with multiple healthcare disciplines, known as multimodality care, according to a UCLA study led by Dr. Amar Kishan, assistant professor of radiation oncology at the David Geffen School of Medicine at UCLA and a researcher at the UCLA Jonsson Comprehensive Cancer Center.

The study found no difference in prostate cancer-specific deaths across treatment modalities when patients received guideline-concordant multimodality therapy, which in this case meant including hormone therapy for men receiving radiation and setting a low bar for postoperative radiation in men undergoing surgery. The research team did, however, find significant differences in deaths when guideline-concordant multimodality care was not delivered. Treatment with external beam radiotherapy, alone or with a brachytherapy boost, was consistently associated with lower rates of distant metastasis (8% with EBRT+BT, 16% with EBRT, and 24% with radical prostatectomy at 10 years).

BACKGROUND

The optimal treatment for patients with high-risk prostate cancer and additional aggressive features is currently unknown. It is important to understand whether multimodality care can help improve outcomes for this patient population without increasing side effects and lowering the quality of life. The researchers sought to find whether there was a difference in prostate cancer-specific mortality and distant metastasis associated with radiotherapy or radical prostatectomy.

METHOD

UCLA investigators collaborated with 15 other institutions around the world to investigate treatment outcomes in 6,004 men with high-risk prostate cancer and at least one adverse clinicopathologic feature, which can include a Gleason grade group 4-5 diagnosis, disease extending into the seminal vesicles or tumor extending outside of the prostate capsule. Of the group, 3,175 underwent upfront radical prostatectomy, 1,830 underwent external beam radiotherapy and 999 underwent external beam radiotherapy with a brachytherapy boost.

IMPACT

The study shows multimodality therapy is critical for treating more aggressive prostate cancers. While guideline-concordant multimodality therapy led to comparable rates of prostate cancer-specific death across treatments, rates of metastasis were still lower in men receiving primary radiation, particularly extremely high-dose radiation, in conjunction with hormone therapy. This suggests men with very aggressive prostate cancers might have undetected disease outside the prostate. A systemic therapy that acts throughout the body via the bloodstream, such as hormone therapy, might therefore be helpful even in men undergoing surgery.

Credit: 
University of California - Los Angeles Health Sciences

Is global plastic pollution nearing an irreversible tipping point?

image: Catching a big blue barrel floating on the ocean surface in the Great Pacific Garbage Patch from the German research vessel SONNE during expedition SO268/3 crossing the North Pacific Ocean from Vancouver to Singapore in summer, 2019. ©Roman Kroke UFZ

Image: 
©Roman Kroke UFZ

Current rates of plastic emissions globally may trigger effects that we will not be able to reverse, argues a new study by researchers from Sweden, Norway and Germany published on July 2nd in Science. According to the authors, plastic pollution is a global threat, and actions to drastically reduce emissions of plastic to the environment are "the rational policy response".

Plastic is found everywhere on the planet: from deserts and mountaintops to deep oceans and Arctic snow. As of 2016, estimates of global emissions of plastic to the world's lakes, rivers and oceans ranged from 9 to 23 million metric tons per year, with a similar amount emitted onto land yearly. These estimates are expected to almost double by 2025 if business-as-usual scenarios apply.

"Plastic is deeply engrained in our society, and it leaks out into the environment everywhere, even in countries with good waste-handling infrastructure," says Matthew MacLeod, Professor at Stockholm University and lead author of the study. He says that emissions are trending upward even though awareness about plastic pollution among scientists and the public has increased significantly in recent years.

That discrepancy is not surprising to Mine Tekman, a PhD candidate at the Alfred Wegener Institute in Germany and co-author of the study, because plastic pollution is not just an environmental issue but also a "political and economic" one. She believes that the solutions currently on offer, such as recycling and cleanup technologies, are not sufficient, and that we must tackle the problem at its root.

"The world promotes technological solutions for recycling and to remove plastic from the environment. As consumers, we believe that when we properly separate our plastic trash, all of it will magically be recycled. Technologically, recycling of plastic has many limitations, and countries that have good infrastructures have been exporting their plastic waste to countries with worse facilities. Reducing emissions requires drastic actions, like capping the production of virgin plastic to increase the value of recycled plastic, and banning export of plastic waste unless it is to a country with better recycling" says Tekman.

A poorly reversible pollutant of remote areas of the environment

Plastic accumulates in the environment when the amounts emitted exceed those removed by cleanup initiatives and by natural environmental processes, the latter occurring through a multi-step process known as weathering.

"Weathering of plastic happens because of many different processes, and we have come a long way in understanding them. But weathering is constantly changing the properties of plastic pollution, which opens new doors to more questions," says Hans Peter Arp, researcher at the Norwegian Geotechnical Institute (NGI) and Professor at the Norwegian University of Science and Technology (NTNU) who has also co-authored the study. "Degradation is very slow and not effective in stopping accumulation, so exposure to weathered plastic will only increase," says Arp. Plastic is therefore a "poorly reversible pollutant", both because of its continuous emissions and environmental persistence.

Remote environments are particularly under threat as co-author Annika Jahnke, researcher at the Helmholtz Centre for Environmental Research (UFZ) and Professor at the RWTH Aachen University explains:

"In remote environments, plastic debris cannot be removed by cleanups, and weathering of large plastic items will inevitably result in the generation of large numbers of micro- and nanoplastic particles as well as leaching of chemicals that were intentionally added to the plastic and other chemicals that break off the plastic polymer backbone. So, plastic in the environment is a constantly moving target of increasing complexity and mobility. Where it accumulates and what effects it may cause are challenging or maybe even impossible to predict."

A potential tipping point of irreversible environmental damage

On top of the environmental damage that plastic pollution can cause on its own by entanglement of animals and toxic effects, it could also act in conjunction with other environmental stressors in remote areas to trigger wide-ranging or even global effects. The new study lays out a number of hypothetical examples of possible effects, including exacerbation of climate change because of disruption of the global carbon pump, and biodiversity loss in the ocean, where plastic pollution acts as an additional stressor on top of overfishing and ongoing habitat loss driven by changes in water temperature, nutrient supply and chemical exposure.

Taken together, the authors view the threat that plastic emitted today may trigger global-scale, poorly reversible impacts in the future as "compelling motivation" for tailored actions to strongly reduce emissions.

"Right now, we are loading up the environment with increasing amounts of poorly reversible plastic pollution. So far, we don't see widespread evidence of bad consequences, but if weathering plastic triggers a really bad effect we are not likely to be able to reverse it," cautions MacLeod. "The cost of ignoring the accumulation of persistent plastic pollution in the environment could be enormous. The rational thing to do is to act as quickly as we can to reduce emissions of plastic to the environment."

Credit: 
Stockholm University

The first commercially scalable integrated laser and microcomb on a single chip

image: Artist's concept illustration of electrically controlled optical frequency combs at wafer scale.

Image: 
Brian Long

Fifteen years ago, UC Santa Barbara electrical and materials professor John Bowers pioneered a method for integrating a laser onto a silicon wafer. The technology has since been widely deployed in combination with other silicon photonics devices to replace the copper-wire interconnects that formerly linked servers at data centers, dramatically increasing energy efficiency -- an important endeavor at a time when data traffic is growing by roughly 25% per year.

For several years, the Bowers group has collaborated with the group of Tobias J. Kippenberg at the Swiss Federal Institute of Technology (EPFL), within the Defense Advanced Research Projects Agency (DARPA) Direct On-Chip Digital Optical Synthesizer (DODOS) program. The Kippenberg group discovered "microcombs," a series of parallel, low-noise, highly stable laser lines. Each of the many lines of the laser comb can carry information, extensively multiplying the amount of data that can be sent by a single laser.

Recently, several teams demonstrated very compact combs by placing a semiconductor laser chip and a separate silicon nitride ring-resonator chip very close together. However, the laser and the resonator were still separate devices, made independently and then placed in close proximity and perfectly aligned - a costly and time-consuming process that is not scalable.

The Bowers lab has worked with the Kippenberg lab to develop an integrated on-chip semiconductor laser and resonator capable of producing a laser microcomb. A paper titled "Laser soliton microcombs heterogeneously integrated on silicon," published in the new issue of the journal Science, describes the labs' success in becoming the first to achieve that goal.

Soliton microcombs are optical frequency combs that emit mutually coherent laser lines -- that is, lines that are in constant, unchanging phase relative to each other. The technology is applied in the areas of optical timing, metrology and sensing. Recent field demonstrations include multi-terabit-per-second optical communications, ultrafast light detection and ranging (LiDAR), neuromorphic computing, and astrophysical spectrometer calibration for planet searching, to name a few. It is a powerful tool that normally requires exceptionally high-power, expensive lasers and sophisticated optical coupling to function.

The working principle of a laser microcomb, explained lead author Chao Xiang, a postdoctoral researcher and newly minted Ph.D. in Bowers's lab, is that a distributed feedback (DFB) laser produces one laser line. That line then passes through an optical phase controller and enters the micro-ring resonator, causing the power intensity to increase as the light travels around the ring. If the intensity reaches a certain threshold, non-linear optical effects occur, causing the one laser line to create two additional, identical lines on either side. Each of those two "side lines" creates others, leading to a cascade of laser-line generation. "You end up with a series of mutually coherent frequency combs," Xiang said -- and a vastly expanded ability to transmit data.
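To picture the cascade Xiang describes, a minimal sketch is given below: a single pump line plus symmetric sidebands spaced by the micro-ring's free spectral range. The pump frequency and spacing are placeholder values, not numbers from the paper.

```python
# Illustrative sketch of the comb-line picture described above: a pump line at
# f0 plus symmetric sidebands spaced by the micro-ring's free spectral range
# (FSR). The values below are placeholders, not device parameters.

PUMP_FREQ_THZ = 193.4   # ~1550 nm telecom-band pump (illustrative)
FSR_GHZ = 100.0         # comb-line spacing set by the ring resonator (illustrative)

def comb_lines(pump_thz, fsr_ghz, n_side):
    """Return comb-line frequencies f0 +/- n*FSR for n = 0..n_side, in THz."""
    fsr_thz = fsr_ghz / 1000.0
    return sorted(pump_thz + n * fsr_thz for n in range(-n_side, n_side + 1))

for f in comb_lines(PUMP_FREQ_THZ, FSR_GHZ, 3):
    print(f"{f:.4f} THz")
```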

This research enables semiconductor lasers to be seamlessly integrated with low-loss nonlinear optical micro-resonators -- "low-loss" because the light can travel in the waveguide without losing a significant amount of its intensity over distance. No optical coupling is required, and the device is entirely electrically controlled. Importantly, the new technology lends itself to commercial-scale production, because thousands of devices can be made from a single wafer using industry standard complementary metal oxide semiconductor (CMOS)-compatible techniques. "Our approach paves the way for large-volume, low-cost manufacturing of chip-based frequency combs for next-generation high-capacity transceivers, datacenters, space and mobile platforms," the researchers stated.

The key challenge in making the device was that the semiconductor laser and the resonator, which generates the comb, had to be built on different material platforms. The lasers can be made only with materials from the III and V groups on the Periodic Table, such as indium phosphide, and the best combs can be made only from silicon nitride. "So, we had to find a way to put them together on a single wafer," Xiang explained.

Working sequentially on the same wafer, the researchers leveraged UCSB's heterogeneous integration process for making high-performance lasers on silicon substrate and the ability of their EPFL collaborators to make record ultra-low-loss high-Q silicon nitride micro-resonators using the "photonic damascene process" they developed. The wafer-scale process -- in contrast to making individual devices and then combining them one by one -- enables thousands of devices to be made from a single 100-mm-diameter wafer, a production level that can be scaled up further from the industry standard 200-mm- or 300-mm-diameter substrate.

For the device to function properly, the laser, the resonator and the optical phase between them must be controlled to create a coupled system based on the "self-injection locking" phenomenon. Xiang explained that the laser output is partially back-reflected by the micro-resonator. When a certain phase condition is achieved between the light from the laser and the back-reflected light from the resonator, the laser is said to be locked to the resonator.

Normally, back-reflected light harms laser performance, but here it is crucial for generating the microcomb. The locked laser light triggers soliton formation in the resonator and reduces the laser light noise, or frequency instability, at the same time. Thus, something harmful is transformed into a benefit. As a result, the team was able to create not only the first laser soliton microcomb integrated on a single chip, but also the first narrow-linewidth laser sources with multiple available channels on one chip.

"The field of optical comb generation is very exciting and moving very fast. It is finding applications in optical clocks, high-capacity optical networks and many spectroscopic applications," said Bowers, the Fred Kavli Chair in Nanotechnology and the director of the College of Engineering's Institute for Energy Efficiency. "The missing element has been a self-contained chip that includes both the pump laser and the optical resonator. We demonstrated that key element, which should open up rapid adoption of this technology."

"I think this work is going to become very big," said Xiang. The potential of this new technology, he added, reminds him of the way putting lasers on silicon 15 years ago advanced both research and industrial commercialization of silicon photonics. "That transformative technology has been commercialized, and Intel ships millions of transceiver products per year," he said. "Future silicon photonics using co-packaged optics will likely be a strong driver for higher-capacity transceivers using a large number of optical channels."

Xiang explained that the current comb produces about twenty to thirty usable comb lines and that the goal going forward will be to increase that number, "hopefully to get one hundred combined lines from each laser-resonator, with low power consumption."

Based on the soliton microcombs' low energy use and their ability to provide a large number of high-purity optical comb lines for data communications, said Xiang, "We believe that our achievement could become the backbone of efforts to apply optical frequency comb technologies in many areas, including efforts to keep up with fast-growing data traffic and, hopefully, slow the growth of energy consumption in mega-scale datacenters."

Credit: 
University of California - Santa Barbara

Scientists find genetic cause, underlying mechanisms of new neurodevelopmental syndrome

image: Right, βII-spectrin (magenta) forms aggregates throughout neurites of a mouse cortical neuron expressing one of the human SPTBN1 variants.

Image: 
Lorenzo Lab, UNC School of Medicine

CHAPEL HILL, NC - Scientists at the University of North Carolina at Chapel Hill School of Medicine and colleagues have demonstrated that variants in the SPTBN1 gene can alter the architecture of neurons, dramatically affecting their function and leading to a rare, newly defined neurodevelopmental syndrome in children.

Damaris Lorenzo, PhD, assistant professor in the UNC Department of Cell Biology and member of the UNC Neuroscience Center at the UNC School of Medicine, led this research, which was published today in the journal Nature Genetics. Lorenzo, who is also a member of the UNC Intellectual and Developmental Disabilities Research Center (IDDRC) at the UNC School of Medicine, is the senior author.

The gene SPTBN1 instructs neurons and other cell types how to make βII-spectrin, a protein with multiple functions in the nervous system. Children carrying these variants can suffer from speech and motor delays, as well as intellectual disability. Some patients have received additional diagnoses, such as autism spectrum disorder, ADHD, and epilepsy. Identification of the genetic variants that cause this broad spectrum of disabilities is the first important milestone toward finding treatments for this syndrome.

Lorenzo first learned about patients with complex neurodevelopmental presentations carrying SPTBN1 variants from Queenie Tan, MD, PhD, a medical geneticist, and Becky Spillmann, MS, a genetic counselor - both members of the NIH-funded Undiagnosed Disease Network (UDN) site at Duke University and co-authors of the Nature Genetics paper. They connected with Margot Cousin, PhD, a geneticist associated with the UDN site at the Mayo Clinic and co-first author of the study. Cousin had also collected clinical information from SPTBN1 variant carriers. Other clinical genetics teams learned about these efforts and joined the study.

The cohort of individuals affected by SPTBN1 variants continues to grow. Lorenzo and colleagues have been contacted about new cases since they published a preprint of their initial findings last summer. Identifying the genetic cause of rare diseases such as the SPTBN1 syndrome requires pooling knowledge from several patients to establish common clinical and biological patterns.

"Fortunately, the advent of affordable gene sequencing technology, together with the creation of databases and networks to facilitate the sharing of information among clinicians and investigators, has vastly accelerated the diagnosis of rare diseases," Lorenzo said. "To put our case in historical perspective, βII-spectrin was co-discovered 40 years ago through pioneering work that involved my UNC colleagues Keith Burridge, PhD, and Richard Cheney, PhD, as well as my postdoctoral mentor Vann Bennett, PhD, at Duke. However, its association with disease eluded us until now."

βII-spectrin is tightly associated with the neuronal cytoskeleton - a complex network of filamentous proteins that spans the neuron and plays pivotal roles in its growth, shape, and plasticity. βII-spectrin forms an extended scaffolding network that provides mechanical integrity to membranes and helps to orchestrate the correct positioning of molecular complexes throughout the neuron. Through research published in PNAS in 2019, Lorenzo found that βII-spectrin is essential for normal brain wiring in mice and for proper transport of organelles and vesicles in axons - the long extensions that carry signals from neurons to other neurons. βII-spectrin is an integral part of the process that enables normal development, maintenance, and function of neurons.

In this new study, Lorenzo's research team showed that, at the biochemical level, the genetic variants identified in patients are sufficient to cause protein aggregation and aberrant association of βII-spectrin with the cytoskeleton, impair axonal organelle transport and growth, and change the morphology of neurons. These deficiencies can permanently alter how neurons connect and communicate with each other, which is thought to contribute to the etiology of neurodevelopmental disorders. The team showed that reduction of βII-spectrin levels only in neurons disrupts structural connectivity between cortical areas in mutant mice, a deficit also observed in brain MRIs of some patients.

In collaboration with Sheryl Moy, PhD, professor in the UNC Department of Psychiatry and director of the Mouse Behavioral Phenotyping (MBP) Core of the UNC IDDRC, the researchers found that these mice have developmental and behavioral deficits consistent with presentations observed in humans.

"Now that we've established the methods to assign likelihood of pathogenicity to SPTBN1 variants and to determine how they alter neurons, our immediate goal is to learn more about the affected molecular and cellular mechanisms and brain circuits, and evaluate strategies for potential clinical interventions," Lorenzo said.

To this end, her team will collaborate with Adriana Beltran, PhD, assistant professor in the UNC Department of Genetics and director of the UNC Human Pluripotent Cell Core, to use neurons differentiated from patient-derived induced pluripotent stem cells. And the research team will continue to tap into molecular modeling predictions in collaboration with Brenda Temple, PhD, professor in the UNC Department of Biochemistry and Biophysics and director of the UNC Structural Bioinformatics Core, both co-authors on the Nature Genetics paper.

"As a basic science investigator, it's so satisfying to use knowledge and tools to provide answers to patients," Lorenzo said. "I first witnessed this thrill of scientific discovery and collaborative work as a graduate student 15 years ago when our lab identified the genetic cause of the first spectrinopathy affecting the nervous system, and it has been a powerful motivator since."

That work was the discovery of variants in a different spectrin gene as the cause of spinocerebellar ataxia type 5 (SCA5), led by Laura Ranum, PhD, who at the time was at the University of Minnesota. In follow-up work, as part of that team, Lorenzo contributed insights into the pathogenic mechanism of SCA5.

"Aside from the immediate relevance to affected patients, insights from our work on SPTNB1 syndrome will inform discoveries in other complex disorders with overlapping pathologies," Lorenzo said. "It is exciting to be part of such important work with a team of dedicated scientists and clinicians."

Credit: 
University of North Carolina Health Care

Prenatal exposure to THC, CBD affects offspring's responsiveness to fluoxetine

BLOOMINGTON, Ind. -- Scientists at Indiana University have found that significant amounts of the two main components of cannabis, THC and CBD, enter the embryonic brain of mice in utero and impair the mice's ability as adults to respond to fluoxetine, a drug commonly used to treat anxiety and depression and known by the brand name Prozac.

The study suggests that when the developing brain is exposed to THC or CBD, normal interactions between endocannabinoid and serotonin signaling may be diminished once the animals reach adulthood.

"Hemp-derived CBD is a legal substance in the U.S., and we are in a time of increasing state-level legalization of cannabis. Therefore, use of cannabis components have increased across most levels of society, including among pregnant women," said Hui-Chen Lu, author of the study, director of the Linda and Jack Gill Center and a professor in the Department of Psychological and Brain Sciences in the IU Bloomington College of Arts and Sciences. "The study marks the beginning of an effort to understand the effects of THC and CBD on the endogenous cannabinoid system in the developing brain and body."

The study was published in Cannabis and Cannabinoid Research and will be a part of the upcoming 2021 Gill Symposium, which will focus exclusively on the topic of cannabis.

Researchers studied four groups of pregnant mice. Three groups received daily moderate doses of THC, CBD, or a combination of equal parts THC and CBD; a fourth, control group was given placebo injections throughout pregnancy. Using mass spectrometry, IU psychological and brain sciences professor Heather Bradshaw tested embryos and found that both CBD and THC could reach the embryonic brain, confirming that the drugs were making it past the placenta.

"The surprising part is that maternal exposure to CBD alone -- a drug that is often considered as safe and harmless and is a popular 'natural' therapy for morning sickness -- resulted in a lasting impact on adult mice offspring," Lu said. "Both prenatal THC and CBD exposure impaired the adult's ability to respond to fluoxetine. The results suggest taking a cautious approach to using CBD during pregnancy."

There is some evidence for CBD's effectiveness in treating chronic pain and anxiety, though the only FDA-approved indication for CBD to date is the treatment of severe seizure disorders.

"We still know very little about the effects of CBD on the developing brain," Lu said.

The new paper is one of the first studies to examine a potential negative impact of CBD on the developing brain and later behaviors.

Study co-author Ken Mackie, Gill Chair of Neuroscience at IU Bloomington, said researchers know that prenatal cannabis exposure may increase the risk for anxiety and depression, so it is important to evaluate the response to a class of drug used to treat anxiety and depression.

While many of the tests reflected normal mouse behaviors, one test -- of the mice's response to stress after fluoxetine treatment -- stood out strongly as atypical. On their own, the mice in all groups responded normally to a stressful situation. As expected, fluoxetine increased stress resilience in mice whose mothers had received the placebo. However, the drug was ineffective in mice whose mothers had received THC, CBD or their combination.

Fluoxetine works by increasing the amount of serotonin available at brain synapses, an effect known to require the endocannabinoid system. This internal system of receptors, enzymes and molecules both mediates the effects of cannabis and plays a role in regulating various bodily systems, such as appetite, mood, stress and chronic pain.

To test if maternal exposure to THC and/or CBD impaired endocannabinoid signaling in the adult offspring, the researchers tested whether giving a drug to boost the endocannabinoid system would restore fluoxetine's effectiveness. They found that enhancing the endocannabinoid system restored normal fluoxetine responses in mice that had received THC or CBD while their brains were developing.

Credit: 
Indiana University

Recent technology cost forecasts underestimate the pace of technological change

A team of researchers from the University of Cambridge, University College London, University of Oxford, and University of Brescia/RFF-CMCC European Institute on Economics and the Environment carried out the first systematic analysis of the relative performance of probabilistic cost forecasts from expert-based methods and model-based methods.

They specifically focused on one expert-based method -- expert elicitations -- and four model-based methods, which model costs either as a function of cumulative installed capacity or as a function of time. The results of this comparison are published in PNAS.

Accurately forecasting energy technology costs is a requirement for the design of robust and cost-effective decarbonization policies and business plans. The future of these and other technologies is notoriously hard to predict because the process by which technology is conceived, developed, codified, and deployed is part of a complex adaptive system and is made up of interconnected actors and institutions.

A range of probabilistic forecasting methods have been developed and used to generate estimates of future technology costs. Two high-level types of approaches have been most often used to generate quantitative forecasts: expert-based and model-based approaches. Broadly speaking, expert-based approaches involve different ways of obtaining information from knowledgeable individuals who may have differing opinions and/or knowledge about the relative importance of various drivers of innovation and how they may evolve. Experts make implicit judgments about the underlying drivers of change when producing their forecasts and can take into account both public information about observed costs as well as information that may not yet be widely available or codified. Expert-based approaches are often the only source of information available to analysts when data on a given technology have not yet been collected--as is generally the case for emerging technologies.

By contrast, model-based approaches explicitly use one or more variables from available observed data to approximate the impact of the full set of drivers of innovation on technology costs, implicitly assuming that the rate of change in the past will be the best predictor of the rate of change in the future.
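The two model families described here are commonly written as a power law in cumulative capacity (Wright's law) or an exponential trend in time; the sketch below fits both to invented cost data as a minimal illustration, not a reproduction of the paper's probabilistic methods.

```python
# Minimal sketch of the two model families described above, fit to made-up data:
# Wright's law, cost = a * (cumulative capacity)^(-b), and a simple exponential
# time trend. The numbers are illustrative; the paper's probabilistic methods
# are more sophisticated than these point fits.
import numpy as np

years = np.array([2010, 2012, 2014, 2016, 2018])
capacity = np.array([40.0, 100.0, 180.0, 300.0, 480.0])   # cumulative GW (made up)
cost = np.array([2.0, 1.4, 1.05, 0.85, 0.70])             # $/W (made up)

# Wright's law in log space: log(cost) = log(a) + slope * log(capacity), slope = -b
slope_w, intercept_w = np.polyfit(np.log(capacity), np.log(cost), 1)
learning_rate = 1 - 2 ** slope_w   # fractional cost drop per doubling of capacity

# Exponential time trend: log(cost) = c + r * (year - 2010)
r_t, c_t = np.polyfit(years - 2010, np.log(cost), 1)

print(f"Wright's law slope: {slope_w:.2f}, learning rate per doubling: {learning_rate:.1%}")
print(f"Time trend: {np.exp(r_t) - 1:+.1%} cost change per year")
```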

"The increased availability of information on future energy technology costs allowed us to conduct the first systematic comparison of the relative performance of probabilistic technology cost forecasts generated by different expert-based and model-based methodologies with observed costs" notes senior and corresponding author Prof. Diaz Anadon, Professor of Climate Change Policy at the University of Cambridge and Director of the University's Centre for Environment, Energy and Natural Resource Governance. "Such a comparison is essential to ensure researchers and analysts have more empirically-grounded evidence in integrated assessment models, cost benefit analyses and broader policy design efforts". She suggests that undertaking this type of comparison to assess and better understand different forecasting methods should become more common among modellers and forecasting practitioners, as more data is available. "Our analysis is focused on a particular period of time and on correlated energy technologies, so although our results point to current methods underestimating technological progress in this space, more research is needed".

Prof. Anadon authored the article with Dr. Jing Meng, Lecturer at University College London at the Bartlett School, Dr. Rupert Way, postdoctoral researcher at the Oxford Martin School, and Prof. E. Verdolini from the Law Department of the University of Brescia, who is affiliated with the RFF-CMCC European Institute on Economics and the Environment. Prof. Anadon and Prof. Verdolini were Work Package Leaders in the EU H2020 project INNOPATHS, which funded the majority of the research work.

A number of key results emerge from this analysis. 

As Dr. Way of the University of Oxford explains, "the comparison of expert- and model-based forecasts with observed 2019 costs over a short time frame (a maximum of 10 years) shows that model-based approaches outperformed expert elicitations. More specifically, the 5th-95th percentile ranges of the four model-based approaches were much more likely to contain the observed value than those of EE forecasts. Among the model-based methods, some captured 2019 observed costs more often than others."

"In addition", notes Dr. Meng from University College London "the 2019 medians of model-based forecasts were closer to the average observed 2019 cost for five out of the six technologies. However, this comparison was possible only for a small number of technologies; furthermore, some of the EE forecasts included the observed value". For these reasons, the authors argue, this should not be taken as evidence that model-based approaches perform better than expert-based methods for all or most cases. 
 

Prof. Verdolini, from UniBrescia/EIEE points to the fact that both expert-based methods and model-based methods underestimated technological progress in most of the energy technologies analysed in this paper. "That is, in five out of six technologies analyzed, the methods produced 2019 cost forecast medians that were higher than the observed 2019 costs. This indicates that the rate of progress in cost reduction has been higher than what both historical data and expert opinions predicted. But the extent to which this faster pace of progress compared to forecasts will continue (or not) in the future remains to be seen".

The urgency of developing policies for deep decarbonisation, as outlined in the IPCC 1.5 C report, makes this systematic analysis timely and necessary. Taken together, results point to various worthwhile avenues for future research. Concerning expert elicitations, this paper calls attention to the need to continue methodological improvements to reduce overconfidence. For model-based methods, this work highlights the challenge of finding (and collecting) data for many key energy technologies. It also calls for increased efforts in data collection and publication by international organizations and other entities. The underestimation of technological progress also points to the value of further method development to reflect structural changes and technology correlations. Lastly, given the large uncertainty ranges and major policy decisions associated with the energy transition and with addressing climate change, additional research comparing the performance of different probabilistic forecasting approaches with observed values across a wider range of technologies should be carried out as more data becomes available and more time passes.

The article is complemented with a database containing a large number of data points on the costs of 32 energy technologies relevant to support the energy transition. These data points include 25 sets of data from expert elicitations conducted between 2007 and 2016 covering a range of geographies and 25 sets of observed technology data including the evolution of cost and deployment over different periods of time. This data was made publicly available here.

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

Astonishing altitude changes in marathon flights of migratory birds

Extreme differences in flight altitude between day and night may have been an undetected pattern amongst migratory birds - until now. The observation was made by researchers at Lund University in Sweden in a study of great snipes, where they also measured a new altitude record for migratory birds, irrespective of the species, reaching 8 700 metres.

Great snipes are shorebirds that breed in Sweden, among other places, and spend the winter in areas near the equator in Africa. Previous studies have shown that great snipes make long marathon flights of up to 6 000 kilometres lasting 60-90 hours when they migrate between breeding sites in Sweden and wintering sites close to the equator.

In a new study published in Current Biology, the international research team describe how the great snipes fly at a higher altitude during the day than at night. The difference can be as much as several thousand metres. The birds regularly flew at an altitude of over 6 000 metres during the day, compared to an average altitude of about 2 000 metres at night. One bird even flew at over 8 000 metres for five hours during the autumn migration to Africa, reaching a maximum altitude of 8 700 metres.

"It is the highest flight altitude that has ever been recorded for a migratory bird", says Åke Lindström at the Department of Biology in Lund who led the study.

Researchers used small data loggers developed at Lund University and attached these to the great snipes in order to follow changes in flight altitude during the long flights. The record height of 8 700 metres is astonishing.

However, the researchers are even more fascinated by the pattern among migratory birds that they may have detected. A recently published study on great reed warblers, which was also led by researchers in Lund, found that on the few occasions during the migration when the small passerines prolonged their otherwise nocturnal flights into the day, they flew at much higher flight altitudes during the day than at night. This occurred when great reed warblers crossed inhospitable terrain such as the Sahara Desert and the Mediterranean Sea.

The considerably larger great snipes do the same, not only when they fly over so-called ecological barriers such as deserts and seas, but also when they fly over the tropics and over Europe.

"Other species that that make long migratory flights are also likely to use this day-and-night rhythm. We may well be tracking a general pattern, it will be up to future studies to show this", says Åke Lindström.

If it turns out to be a pattern amongst many migratory birds, it would enhance our understanding of which environmental factors are important for migratory birds. This knowledge may in turn bring us closer to explaining the great variation in the behaviour of these birds. Why do some species migrate at night and others during the day? Why do some birds only fly short distances at a time while others, such as great snipes, fly for several days in a row?

As yet, no one knows for certain why great snipes and great reed warblers fly at a higher altitude in the day than at night during migration. The research team mention three explanations as the most probable: birds can navigate more easily via landmarks; they avoid birds of prey; and the cold temperature at high altitudes helps prevent overheating during strenuous exercise under the blazing sun.

"Our main line of inquiry is that they fly at a high altitude to cool down, but we must be humble and acknowledge that there may be other or additional explanations", concludes Åke Lindström.

Credit: 
Lund University

Scalable manufacturing of integrated optical frequency combs

image: Photograph showing hundreds of semiconductor lasers and silicon nitride microresonators.

Image: 
Chao Xiang, UCSB

Optical frequency combs consist of equidistant laser lines. They have already revolutionized the fields of frequency metrology, timing and spectroscopy. The discovery of "soliton microcombs" by Professor Tobias Kippenberg's lab at EPFL in the past decade has enabled frequency combs to be generated on chip. In this scheme, a single-frequency laser is converted into ultra-short pulses called dissipative Kerr solitons.

Soliton microcombs are chip-scale frequency combs that are compact, consume low power, and exhibit broad bandwidth. Combined with large spacing of comb "teeth", microcombs are uniquely suited for a wide variety of applications, such as terabit-per-second coherent communication in data centers, astronomical spectrometer calibration for exoplanet searches and neuromorphic computing, optical atomic clocks, absolute frequency synthesis, and parallel coherent LiDAR.

Yet, one outstanding challenge is the integration of laser sources. While microcombs are generated on-chip via parametric frequency conversion (two photons of one frequency are annihilated, and a pair of two new photons are generated at a higher and lower frequency), the pump lasers are typically off-chip and bulky. Integrating microcombs and lasers on the same chip can enable high-volume production of soliton microcombs using well-established CMOS techniques developed for silicon photonics, however this has been an outstanding challenge for the past decade.
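A minimal sketch of the energy bookkeeping behind the parametric process described above is shown below: two pump photons at f_pump are converted into a signal/idler pair satisfying 2*f_pump = f_signal + f_idler. The frequencies are illustrative placeholders, not device parameters from the paper.

```python
# Tiny sketch of the energy conservation behind parametric frequency conversion:
# two pump photons become a signal/idler pair so that 2*f_pump = f_signal + f_idler.
# Frequencies below are illustrative placeholders.

f_pump = 193.40    # THz
offset = 0.10      # THz, e.g. one resonator free spectral range away (illustrative)

f_signal = f_pump + offset   # new line at higher frequency
f_idler = f_pump - offset    # new line at lower frequency

assert abs(2 * f_pump - (f_signal + f_idler)) < 1e-9  # energy is conserved
print(f"pump: {f_pump} THz, signal: {f_signal} THz, idler: {f_idler} THz")
```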

For the nonlinear optical microresonators where soliton microcombs are formed, silicon nitride (Si3N4) has emerged as the leading platform due to its ultralow loss, wide transparency window from the visible to the mid-infrared, absence of two-photon absorption, and high power-handling capability. But achieving ultralow-loss Si3N4 microresonators is still insufficient for high-volume production of chip-scale soliton microcombs, as co-integration of chip-scale driving lasers is required.

Fifteen years ago, Professor John Bowers's lab at UCSB pioneered a method for integrating semiconductor lasers onto a silicon wafer. Since silicon has an indirect bandgap and cannot emit light, scientists bond indium phosphide semiconductors on silicon wafers to form laser gain sections. This heterogeneous integration laser technology has now been widely deployed for optical interconnects to replace the copper-wire ones that linked servers at data centers. This transformative laser technology has been already commercialized, and Intel ships millions of transceiver products per year.

In an article published in Science, the two labs at EPFL and UCSB now demonstrate the first heterogenous integration of ultralow-loss Si3N4 photonic integrated circuits (fabricated at EPFL) and semiconductor lasers (fabricated at UCSB) through wafer-scale CMOS techniques.

The method is mainly based on multiple wafer bonding of silicon and indium phosphide onto the Si3N4 substrate. Distributed feedback (DFB) lasers are fabricated on the silicon and indium phosphide layers. The single-frequency output from one DFB laser is delivered to a Si3N4 microresonator underneath, where the DFB laser seeds soliton microcomb formation and creates tens of new frequency lines.

This wafer-scale heterogeneous process can produce more than a thousand chip-scale soliton microcomb devices from a single 100-mm-diameter wafer, lending itself to commercial-level manufacturing. Each device is entirely electrically controlled. Importantly, the production level can be further scaled up to the industry standard 200- or 300-mm-diameter substrates.

"Our heterogenous fabrication technology combines the three mainstream integrated photonics platforms, namely silicon, inidium phosphate and Si3N4, and can pave the way for large-volume, low-cost manufacturing of chip-based frequency combs for next-generation high-capacity transceivers, data centers, sensing and metrology," says Dr Junqiu Liu who leads the Si3N4 fabrication at EPFL's Center of MicroNanoTechnology (CMi).

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Antidiabetic drug causes double the weight loss of competitor in Type 2 diabetes patients

BUFFALO, N.Y. -- Patients with Type 2 diabetes who were prescribed SGLT2 inhibitors lost more weight than patients who received GLP-1 receptor agonists, according to a University at Buffalo-led study.

The research, which sought to evaluate the difference in weight loss caused by the antidiabetic medications -- both of which work to control blood sugar levels -- found that among 72 patients, people using SGLT2 inhibitors experienced a median weight loss of more than 6 pounds, while those on GLP-1 receptor agonists lost a median of 2.5 pounds.

The findings, published last month in the Journal of the American Pharmacists Association, represent one of the first attempts to compare the two drugs.

"Weight loss is an advantageous quality for diabetic medications as being overweight is a common characteristic of the disease, and can eventually lead to reduced insulin sensitivity," said lead author Nicole Paolini Albanese, PharmD, clinical associate professor of pharmacy practice in the UB School of Pharmacy and Pharmaceutical Sciences. "With weight loss, it is possible to regain insulin sensitivity, improve glucose control, and reduce heart risk factors and comorbidities."

Both SGLT2 inhibitors and GLP-1 receptor agonists are recommended as second-line therapies for Type 2 diabetes after use of metformin, a drug also prescribed to control blood sugar, says Albanese.

The study examined records for patients with Type 2 diabetes who received either SGLT2 inhibitors or GLP-1 receptor agonists, in addition to other diabetes medications, from 2012-17. The researchers measured weight loss after six months of consecutive therapy, and differences in blood pressure, blood sugar levels and kidney function.

Canagliflozin, sold under the brand name Invokana, was the most commonly prescribed SGLT2 inhibitor. Liraglutide, sold under the brand name Victoza, was the most commonly prescribed GLP-1 receptor agonist.

No significant differences were found in blood pressure, blood sugar levels and kidney function after use of the medications. The data suggest that SGLT2 inhibitors may be more protective against weight gain caused by other antidiabetic drugs than GLP-1 receptor agonists, says Albanese. The results also counter previous research that has found GLP-1 receptor agonists to be the superior antidiabetic drug for weight loss, she says.

Although the weight loss caused by the drugs is small, the findings warrant larger investigations that examine the medications' effect on weight, she says.

"These medications at doses approved for treating Type 2 diabetes are not intended for weight loss," says Albanese. "However, this should not discourage the discussion of this potential benefit, as even a small amount of weight loss is a unique advantage of these drugs, especially when compared to potential weight gain caused from other treatment options."

Credit: 
University at Buffalo

Using computation to improve words: Novel tool could improve serious illness conversations

image: In this visualization of a serious illness conversation, each vertical bar represents a speaker turn, and the height of each bar is proportional to the length of the turn, with patient turns in red and clinician turns in blue. The alternating short-long pattern in turn length that dominates these conversations is apparent, and CODYM analysis helps us understand this and other patterns, revealing insights about information flow in conversation.

Image: 
Courtesy of Laurence Clarfeld, University of Vermont

Conversations between seriously ill people, their families and palliative care specialists lead to better quality of life. Understanding what happens during these conversations - and particularly how they vary by cultural, clinical, and situational contexts - is essential to guide healthcare communication improvement efforts. To gain true understanding, new methods to study conversations in large, inclusive, and multi-site epidemiological studies are required. A new computer model offers an automated and valid tool for such large-scale scientific analyses.

Research results on this model were published today in PLOS ONE.

Developed by a team of computer scientists, clinicians and engineers at the University of Vermont, the approach - called CODYM (COnversational DYnamics Model) analysis - uses simple behavioral state-based models (Markov Models) to capture the flow of information during different conversations, based on patterns in the lengths of alternating speaker turns.

To date, the conversation analysis process has typically relied on time-consuming manual transcription, detailed annotations, and required access to the highly private content of conversations.

"CODYMs are the first Markov Model to use speaker turn length as the fundamental unit of information and the first model of any type to provide concise, high-level, quantitative summaries of overall dependencies in sequences of speaker turn lengths," says Laurence Clarfeld, Ph.D., lead author on the study and a University of Vermont postdoctoral associate whose doctoral dissertation in computer science focused on this research topic.

Using a time-based definition of speaker turn length means that real-time automation and analysis of conversational dynamics can occur without transcription or stored audio, thus protecting the privacy of the conversation content, add the authors.
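As a minimal illustration of the general idea -- a Markov model over categories of speaker-turn length -- the sketch below bins turns into short/medium/long states and estimates transition probabilities from an invented sequence of turn durations. The thresholds and data are hypothetical, and this is not the published CODYM implementation.

```python
# Minimal sketch of a Markov model over speaker-turn lengths (not the published
# CODYM implementation): turns are binned into short/medium/long states and
# transition probabilities are estimated from the resulting state sequence.
from collections import Counter, defaultdict

def bin_turn(seconds):
    """Map a turn duration (seconds) to a coarse length category (thresholds made up)."""
    if seconds < 2:
        return "short"
    if seconds < 10:
        return "medium"
    return "long"

turn_durations = [1.2, 14.0, 0.8, 6.5, 22.0, 1.0, 9.0, 30.0, 1.5]  # alternating speakers (made up)
states = [bin_turn(t) for t in turn_durations]

# Count transitions state -> next state, then normalize into probabilities.
transitions = defaultdict(Counter)
for current, nxt in zip(states, states[1:]):
    transitions[current][nxt] += 1

for current, counts in transitions.items():
    total = sum(counts.values())
    probs = {nxt: round(n / total, 2) for nxt, n in counts.items()}
    print(current, "->", probs)
```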

"We developed a computational model of information flow in serious illness that could become a fundamental tool in conversational epidemiology," says coauthor Robert Gramling, M.D., D.Sc., professor of family medicine, Miller Chair in Palliative Medicine, and director of the Vermont Conversation Lab at the University of Vermont's Larner College of Medicine. "It predicts important and complex conversational processes, like emotion expression and future patterns of speaker turns."

For the study, the researchers performed analyses to validate the CODYM model, "identify normative patterns of information flow in serious illness conversations and show how these patterns vary across narrative time and differ under expressions of anger, fear and sadness," the authors write.

In addition to serving as a means for assessing and training healthcare providers, CODYMs could also be used to compare "conversational dynamics across language and culture, with the prospect of identifying universal similarities and unique 'fingerprints' of information flow," the study authors state.

This publication represents the latest of several serious illness conversation dynamics studies conducted collaboratively by members of the University of Vermont's Larner College of Medicine (Gramling) and College of Engineering and Mathematical Sciences (Margaret Eppstein, Ph.D., Laurence Clarfeld, Ph.D., and Donna Rizzo, Ph.D.) over the past several years.

Credit: 
Larner College of Medicine at the University of Vermont

UMaine-led study: Imaging spectroscopy can predict water stress in wild blueberry fields

Imaging spectroscopy can help predict water stress in wild blueberry barrens, according to a University of Maine-led study.

The technology involves measuring the light reflected off of objects depicted in images captured by drones, satellites and other remote sensing technology to classify and gather pertinent information about the objects. According to researchers, it can precisely measure light across dozens, if not hundreds, of bands of colors, and the resulting reflectance spectra can indicate nutrient levels, chlorophyll content and other markers of health for various crops.

Scientists from UMaine, the Schoodic Institute and Wyman's, one of the world's largest purveyors of wild blueberries and the number one brand of frozen fruit in the country, found that when incorporated into models, imaging spectroscopy can help predict whether wild blueberry fields will lack sufficient water for growing. The technology can also help inform growers as they evaluate irrigation routines and manage their water resources in a way that avoids damaging the crop, researchers say.

The team collected imaging spectroscopy data by deploying a drone equipped with a spectrometer for capturing visible and near-infrared light to photograph wild blueberry fields owned by Wyman's in Deblois, Maine. Researchers then processed the images to measure reflected light spectra from the plants for indications of chlorophyll levels and other properties that would help estimate their water potential, which, they say, is the primary force driving water flow and an indicator of water stress. At the same time, the group collected small branches with leaves from wild blueberry plants in the plots to assess their water potential and validate the spectra-based estimation. Pictures and samples were collected in the spring and summer of 2019, when the plants experienced peak bloom, green fruit and color break.

The data from both the drone images and the ground samples were incorporated into models, developed using machine learning and statistical analysis, to estimate the water potential, and thereby predict the water stress, of the plants in the barrens. Models built from the ground sample data were used to guide the development of, and validate, the model created with data from the images. The results of both sets of models were comparable, demonstrating that imaging spectroscopy can accurately predict water stress in wild blueberry barrens at different times of the growing season. With the efficacy of the technology confirmed, researchers say scientists can capitalize on its benefits, such as easily making repeated measurements of small objects like blueberry leaves.
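The following sketch shows, under stated assumptions, what such a spectra-to-water-potential regression could look like: synthetic reflectance spectra and water potential values stand in for the field data, and a random forest regressor stands in for the study's actual machine learning and statistical models, which are not specified here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 150                 # sampled plots, spectral bands (assumed)
spectra = rng.random((n_samples, n_bands))    # stand-in reflectance spectra
# Synthetic water potential (MPa) loosely tied to two arbitrary bands.
water_potential = (-1.5 + spectra[:, 40] - spectra[:, 120]
                   + 0.1 * rng.standard_normal(n_samples))

# Hold out some plots, mimicking validation against ground measurements.
X_train, X_test, y_train, y_test = train_test_split(
    spectra, water_potential, test_size=0.3, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```

In practice the target values would come from the ground-sampled branches, and the held-out score would indicate how well the image-derived model generalizes across a field.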

Graduate student Catherine Chan led the study, joined by UMaine faculty Daniel Hayes and Yongjiang Zhang, Schoodic Institute forest ecologist Peter Nelson and Wyman's agronomist Bruce Hall. The journal Remote Sensing published a report of their findings.

"We couple spectral data and areas of known water potential in wild blueberry fields through machine learning, creating a model to further predict areas that may be water stressed," Chan says.

Understanding how to sustainably manage water resources to mitigate risk associated with current and increasing drought frequency is crucial to wild blueberry growers, researchers say.

"This research provides key learnings to ensure the continued viability of wild blueberry crops for generations to come," Hall says.

Warming and drought exacerbated by climate change have compounded growers' struggles in recent years, alongside freezing and pathogens. As a result, researchers say, there has been an increased need for predictive tools for land conditions, such as imaging spectroscopy and the models that rely on it, to inform mitigation strategies.

Nelson says the study was conducted in cooperation with his laboratory of ecological spectroscopy (lecospec) at the Schoodic Institute, which was financed by the Maine Economic Improvement Fund, Maine Space Grant Consortium, the National Aeronautics and Space Administration (NASA) and other University of Maine System funds. The research team used software he developed with Chan and other students that allows drones and spectrometers to measure light across dozens or hundreds more bands of color than an average camera can, Nelson says.

"We envisioned and continue to promote this as a research and application tool to produce data and algorithms applied to questions and problems in forest, agricultural and marine sectors of Maine's economy," he says.

Credit: 
University of Maine

Earth's cryosphere shrinking by 87,000 square kilometers per year

image: The percentage of each area that experiences ice, snow or frozen ground at some point during the year (1981-2010).

Image: 
Peng et al. (2021) Earth's Future https://doi.org/10.1029/2020EF001969

WASHINGTON--The global cryosphere--all of the areas with frozen water on Earth--shrank by about 87,000 square kilometers (about 33,000 square miles), an area about the size of Lake Superior, per year on average between 1979 and 2016 as a result of climate change, according to a new study. This research is the first to make a global estimate of the surface area of the Earth covered by sea ice, snow cover and frozen ground.

The extent of land covered by frozen water is just as important as its mass because the bright white surface reflects sunlight so effectively, cooling the planet. Changes in the size or location of ice and snow can alter air temperatures, change the sea level and even affect ocean currents worldwide.

The new study is published in Earth's Future, AGU's journal for interdisciplinary research on the past, present and future of our planet and its inhabitants.

"The cryosphere is one of the most sensitive climate indicators and the first one to demonstrate a changing world," said first author Xiaoqing Peng, a physical geographer at Lanzhou University. "Its change in size represents a major global change, rather than a regional or local issue."

The cryosphere holds almost three-quarters of Earth's fresh water, and in some mountainous regions, dwindling glaciers threaten drinking water supplies. Many scientists have documented shrinking ice sheets, dwindling snow cover and loss of Arctic sea ice individually due to climate change. But no previous study has considered the entire extent of the cryosphere over Earth's surface and its response to warming temperatures.

Contraction in space and time

Peng and his co-authors from Lanzhou University calculated the daily extent of the cryosphere and averaged those values to come up with yearly estimates. While the extent of the cryosphere grows and shrinks with the seasons, they found that the average area covered by Earth's cryosphere has contracted overall since 1979, correlating with rising air temperatures.

The shrinkage primarily occurred in the Northern Hemisphere, with a loss of about 102,000 square kilometers (about 39,300 square miles), or about half the size of Kansas, each year. Those losses are offset slightly by growth in the Southern Hemisphere, where the cryosphere expanded by about 14,000 square kilometers (5,400 square miles) annually. This growth mainly occurred in the sea ice in the Ross Sea around Antarctica, likely due to patterns of wind and ocean currents and the addition of cold meltwater from Antarctic ice sheets.

The estimates showed that not only was the global cryosphere shrinking but that many regions remained frozen for less time. The average first day of freezing now occurs about 3.6 days later than in 1979, and the ice thaws about 5.7 days earlier.

"This kind of analysis is a nice idea for a global index or indicator of climate change," said Shawn Marshall, a glaciologist at the University of Calgary, who was not involved in the study. He thinks that a natural next step would be to use these data to examine when ice and snow cover give Earth its peak brightness, to see how changes in albedo impact the climate on a seasonal or monthly basis and how this is changing over time.

To compile their global estimate of the extent of the cryosphere, the authors divided up the planet's surface into a grid system. They used existing data sets of global sea ice extent, snow cover and frozen soil to classify each cell in the grid as part of the cryosphere if it contained at least one of the three components. Then they estimated the extent of the cryosphere on a daily, monthly and yearly basis and examined how it changed over the 37 years of their study.
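A simplified sketch of that bookkeeping, assuming a toy grid and made-up daily masks, might look like the following: a cell counts toward the cryosphere on a given day if it contains sea ice, snow cover or frozen ground, and the daily extents are then averaged.

```python
import numpy as np

# Area of each cell in a toy 2x2 grid (square kilometers, made-up values).
cell_area_km2 = np.array([[9.0, 9.0],
                          [8.5, 8.5]])

def daily_extent(sea_ice, snow, frozen_ground):
    """Total area of cells containing at least one cryosphere component."""
    is_cryosphere = sea_ice | snow | frozen_ground   # per-cell boolean masks
    return float((cell_area_km2 * is_cryosphere).sum())

# Two made-up days of component masks (True = component present in cell).
day1 = daily_extent(np.array([[True, False], [False, False]]),
                    np.array([[False, True], [False, False]]),
                    np.array([[False, False], [True, False]]))
day2 = daily_extent(np.array([[True, False], [False, False]]),
                    np.array([[False, False], [False, False]]),
                    np.array([[False, False], [False, False]]))

# The yearly estimate (here just two days) is the average of daily extents.
print(np.mean([day1, day2]))
```

The study's actual calculation uses global data sets on a much finer grid, but the logic is the same: classify each cell each day, sum the area, then average over months and years.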

The authors say that the global dataset can now be used to further probe the impact of climate change on the cryosphere, and how these changes impact ecosystems, carbon exchange and the timing of plant and animal life cycles.

Credit: 
American Geophysical Union

Vaccines grown in eggs induce antibody response against an egg-associated glycan

Over years of studying antibody responses against the flu in the Wilson lab at the University of Chicago, researchers kept coming up with a strange finding: antibodies that seemed to bind not only to the flu virus, but to every virus the lab could throw at them. Because antibodies are usually highly specific to individual pathogens, a specificity that maximizes their targeted protective response, this pattern was extremely unusual.

Until finally, they realized: The antibodies weren't responding to the viruses, but rather to something in the biological material in which the viruses had been grown. In every case, the virus had been propagated in chicken eggs -- more specifically, in a part of the egg called the allantois. The findings were published on June 15 in mBio.

"Growing vaccines in eggs is the old school way of doing things because it's cheap and you can grow a lot of virus in eggs," said first author Jenna Guthmiller, PhD, a postdoctoral fellow at UChicago. "Now we're finding that these antibodies bind to this glycan - a sugar molecule - found in eggs, which means that people who are getting vaccinated are producing an antibody response against this egg component that's not related to the virus at all."

The fact that vaccines grown in eggs can lead to this off-target antibody response is unexpected, but the implications aren't yet known. It could mean that the immune system diverts resources away from developing protective antiviral antibodies to produce these egg sugar antibodies instead, which could have implications for vaccine effectiveness.

It's important to note that these antibodies do not bind to known egg allergens, indicating that they likely are not the culprits behind egg allergies, Guthmiller said. "It doesn't seem to be harmful, but it may not be beneficial, and it may be affecting immunity, and that's the important next step."

It took the team years to determine that the antibodies were linked, not to the viruses they were studying, but rather to the eggs in which they were grown. "No joke, we spent years thinking about this," said Guthmiller. "But once we figured it out, it was straightforward. And we found that it's very specific to the flu vaccine grown in this one compartment, in the allantois. This isn't seen with vaccines grown in other chicken cells."

The antibodies target a sugar molecule, known as a glycan, called N-acetyllactosamine (LacNAc), with a sulfur modification. LacNAcs are a common glycan in humans, but the specific sulfur modification of LacNAc found in eggs is not known to be expressed in humans. Because of this, humans can produce antibodies against this sulfur-modified glycan.

When the researchers dug into past studies on flu antibody responses, they found that this antibody response against LacNAc appears to be fairly common following flu vaccination. However, some people do not seem to develop the anti-egg antibodies, and it doesn't appear that producing the anti-egg antibodies reduces the immune system's ability to produce anti-flu antibodies -- though it's not clear whether or not there is an impact on vaccine effectiveness.

"There's a little bit of evidence so far that suggests vaccines prepared by other methods are more effective than those grown in eggs, but the precise reasons aren't known," said Guthmiller. "This could be a potential mechanism, but we weren't able to address that in this study."

So far, there is no evidence that the presence of these antibodies has any negative impact on an individual's health. "We just really don't know what function these antibodies have," said Guthmiller. "So many people get the flu vaccine every year, and adverse events are extremely uncommon, so there's no reason to suspect that this might cause any problems."

More research is needed to determine what, if anything, these anti-egg antibodies mean for the effectiveness of the flu vaccine. "We don't know how these antibodies impact our flu-specific response. There may be competition between B cells against the flu and these egg glycans, which could be impacting immunity. And if there is an association between egg antibodies and reduced immunity, we need to look at alternative methods for flu vaccine production. Anything that can improve vaccine production is something that we should be considering seriously."

Credit: 
University of Chicago Medical Center