
Scientists shed new light on viral protein shell assembly

New insight into the conditions that control self-assembly of the protective protein shell of viruses has been published today in eLife.

The study also highlights the factors that can cause incorrect self-assembly in the viral protein shell, otherwise known as the capsid, preventing viruses from being able to replicate. The findings suggest that manipulating these factors to induce misassembly in viral capsids could be a promising new approach to hindering viral infections.

Viruses are formed from a chain of nucleic acid (DNA or RNA) encased in a protein shell made, in the simplest cases, from multiple copies of a single protein. This capsid protects, carries and delivers viruses to their host. Despite this apparent simplicity in their make-up, viruses are able to perform many complex functions that are essential to their replication cycle - one of these being the ability of the viral capsid to assemble itself. The resulting structure of a correctly self-assembled capsid has a very precise architecture, which in most cases is spherical and similar to an icosahedron, with 20 identical triangular faces.

"During self-assembly, a favourable binding energy competes with the energetic cost of the growing edge and the elastic stresses generated by the curvature of the capsid," explains lead author Carlos Mendoza, a researcher at Universidad Nacional Autónoma de México (the National Autonomous University of Mexico). "As a result, incomplete structures such as open capsids and cylindrical or ribbon-shaped shells may emerge during assembly, preventing the successful replication of viruses."

Mendoza says that previous studies of self-assembly in capsids have mostly focused on templated growth on the surface of a sphere, or on analysing the optimal shape of the resulting capsid. They have not considered the potential influence of other ingredients on capsid stability and formation, such as the line tension (the energy penalty per unit length at the rim of a growing capsid), the chemical potential difference (the free-energy gain of the proteins upon assembly) or the preferred curvature.
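
Schematically, these ingredients enter a continuum free energy for a partially formed shell. The following is a generic sketch consistent with the description above, with notation chosen here for illustration rather than taken from the paper:

$$\Delta G \;=\; -\,\Delta\mu\,\frac{A}{a_0} \;+\; \lambda\,\ell \;+\; \frac{\kappa}{2}\int \left(2H - 2H_0\right)^2 \mathrm{d}A,$$

where $\Delta\mu$ is the free-energy gain per assembled protein (with $a_0$ the area each protein occupies), $\lambda$ is the line tension acting along the open rim of length $\ell$, $\kappa$ is the bending rigidity, and $H$ and $H_0$ are the local and preferred mean curvatures. Growth is favourable when the first term outweighs the rim and elastic penalties; the balance among the three terms determines whether closed spherical shells, open capsids or cylindrical structures emerge.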

To address this gap, Mendoza and co-author David Reguera, Professor at the University of Barcelona and the UB Institute of Complex Systems, Spain, analysed the conditions and mechanisms leading to the misassembly of empty viral capsids, taking into account all these 'ingredients'. Their analyses revealed that capsid self-assembly depends on three factors that can be manipulated to cause the formation of non-spherical and open shells.

"We found that the outcome of self-assembly can be recast into a universal phase diagram, a type of chart that highlights the conditions for successful viral assembly and the key factors that prevent it," Reguera explains. "Our findings advance our understanding of the physics controlling the assembly of curved shells, and explain why viruses with high mechanical resistance cannot be assembled directly and need a maturation process to stiffen the capsid and become infective."

The authors add that their results can only be applied directly to icosahedral viruses, which include papillomavirus, polyomavirus and poliovirus, and not to viruses with helical nucleocapsids, such as SARS-CoV-2, the virus that causes COVID-19. However, their work lays the foundation for future studies into the conditions and chemical agents needed to hinder different types of viral infections by preventing capsid assembly or by inducing misassembly.

Credit: 
eLife

Nearly half of US breathing unhealthy air; record-breaking air pollution in nine cities

This year marks the 50th anniversary of the Clean Air Act, which is responsible for dramatic improvements in air quality. Despite this, a new report from the American Lung Association finds nearly half of the nation's population - 150 million people - lived with and breathed polluted air, placing their health and lives at risk. The 21st annual "State of the Air" report finds that climate change continues to make air pollution worse, with many western communities again experiencing record-breaking spikes in particle pollution due to wildfires. Amid the COVID-19 pandemic, the impact of air pollution on lung health is of heightened concern.

The 2020 "State of the Air" report analyzed data from 2016, 2017 and 2018, the three years with the most recent quality-assured air pollution data. Notably, those three years were among the five hottest recorded in global history. When it comes to air quality, changing climate patterns fuel wildfires and their dangerous smoke, and lead to worsened particle and ozone pollution. This degraded air quality threatens everyone, especially children, older adults and people living with a lung disease.

"The report finds the air quality in some communities has improved, but the 'State of the Air' finds that far too many people are still breathing unhealthy air," said American Lung Association President and CEO Harold Wimmer. "This year's report shows that climate change continues to degrade air quality and increase the risk of air pollution harming health. To protect the advances in air quality we fought for 50 years ago through the Clean Air Act, we must again act today, implementing effective policies to protect our air quality and lung health against the threat of climate change."

"Air pollution is linked to greater risk of lung infection," Wimmer added. "Protecting everyone from COVID-19 and other lung infections is an urgent reminder of the importance of clean air."

Each year, "State of the Air" reports on the two most widespread outdoor air pollutants, ozone pollution and particle pollution. Each is dangerous to public health and can be lethal. The 2020 "State of the Air" report found that more than 20.8 million people lived in counties that had unhealthy levels of air pollution in all categories from 2016 to 2018. Below are the report findings for each category graded.

Particle Pollution

Unhealthy particles in the air come from wildfires, wood-burning stoves, coal-fired power plants, diesel engines and other sources. Particle pollution can be deadly. Technically known as PM2.5, these microscopic particles lodge deep in the lungs and can even enter the bloodstream. Particle pollution can trigger asthma attacks, heart attacks and strokes and cause lung cancer. New research also links air pollution to the development of serious diseases, such as asthma and dementia.

The report has two grades for particle pollution: one for "short-term" particle pollution, or daily spikes, and one for the annual average, "year-round" level that represents the concentration of particles day-in and day-out in each location.

Short-Term Particle Pollution

More cities experienced more days with spikes in particle pollution in this year's report. In fact, nine western cities recorded their highest number of polluted days ever reported. These deadly spikes were driven in large part by smoke from major wildfires in 2018, especially in California, and some locations also saw spikes from woodsmoke from home heating. Of note, 24 of the 25 most polluted cities were located in the western region of the U.S. Nationwide, more than 53.3 million people experienced these unhealthy spikes in particle pollution.

Top 10 U.S. Cities Most Polluted by Short-term Particle Pollution (24-hour PM2.5):

1. Fresno-Madera-Hanford, California

2. Bakersfield, California

3. San Jose-San Francisco-Oakland, California

4. Fairbanks, Alaska

5. Yakima, Washington

6. Los Angeles-Long Beach, California

7. Missoula, Montana

8. Redding-Red Bluff, California

9. Salt Lake City-Provo-Orem, Utah

10. Phoenix-Mesa, Arizona

Year-Round Particle Pollution

More than 21.2 million people lived in counties with unhealthy levels of year-round particle pollution, which is more than in the last three "State of the Air" reports. Progress toward healthy air continued in many places thanks to steps taken to clean up emissions that lead to particle pollution, but 13 of the 26 most polluted cities faced worse levels of year-round particle pollution. Some cities had so many days of short-term particle pollution spikes that the sheer number led to them having higher annual averages as well.

Many cities experienced their cleanest-ever annual average yet remained on the nation's most polluted list. Despite making the top 10 most polluted list, both Fresno-Madera-Hanford, California, and the Pittsburgh metro area tied their previous records for cleanest air in the 21-year history of the report. And while Chicago, Cincinnati and Indianapolis made the top 25 most polluted list, each hit its cleanest-ever annual average.

Top 10 U.S. Cities Most Polluted by Year-Round Particle Pollution (Annual PM2.5):

1. Bakersfield, California

2. Fresno-Madera-Hanford, California

3. Visalia, California

4. Los Angeles-Long Beach, California

5. San Jose-San Francisco-Oakland, California

6. Fairbanks, Alaska

7. Phoenix-Mesa, Arizona

8. El Centro, California

8. Pittsburgh-New Castle-Weirton, Pennsylvania-Ohio-West Virginia

10. Detroit-Warren-Ann Arbor, Michigan

Ozone Pollution

Ozone pollution, often referred to as smog, is a powerful respiratory irritant whose effects have been likened to a sunburn of the lung. Inhaling ozone can cause shortness of breath, trigger coughing and asthma attacks and may shorten life. Warmer temperatures driven by climate change make ozone more likely to form and harder to clean up.

Significantly more people suffered unhealthy ozone pollution in the 2020 report than in the last three "State of the Air" reports. More than 137 million people lived in counties earning a failing grade for ozone pollution. This shows the changing climate's impact on air quality, as ozone pollution worsened during the global record-breaking heat years tracked in the 2020 report. However, despite making the top ten list of most ozone-polluted cities, San Jose-San Francisco-Oakland, California experienced its best-ever air quality for ozone.

Top 10 Most Ozone-Polluted Cities:

1. Los Angeles-Long Beach, California

2. Visalia, California

3. Bakersfield, California

4. Fresno-Madera-Hanford, California

5. Sacramento-Roseville, California

6. San Diego-Chula Vista-Carlsbad, California

7. Phoenix-Mesa, Arizona

8. San Jose-San Francisco-Oakland, California

9. Las Vegas-Henderson, Nevada

10. Denver-Aurora, Colorado

Cleanest Cities

The "State of the Air" also recognizes the nation's four cleanest cities. To make the list, a city must experience no high ozone or high particle pollution days and must rank among the 25 cities with the lowest year-round particle pollution levels.

Cleanest U.S. Cities (listed in alphabetical order)

Bangor, Maine

Burlington-South Burlington-Barre, Vermont

Honolulu, Hawaii

Wilmington, North Carolina

"The science is clear: the nation needs stronger limits on ozone and particle pollution to safeguard health, especially for children and people with lung disease," Wimmer said. "Every family has the right to breathe healthy air - and the right to know when air pollution levels are unhealthy. The Clean Air Act is a powerful protector of public health and Americans breathe healthier air today because of this landmark law. But climate change poses increasingly dire threats to air quality and lung health, and our leaders must take immediate, significant action to safeguard the air we all breathe."

Credit: 
American Lung Association

Novel class of specific RNAs may explain increased depression susceptibility in females

Researchers at Mount Sinai have found that a novel class of genes known as long non-coding RNAs (lncRNAs) expressed in the brain may play a pivotal role in regulating mood and driving sex-specific susceptibility versus resilience to depression. In a study published online in the journal Neuron on April 17, the team highlighted a specific gene, LINC00473, that is downregulated in the cerebral cortex of women only, shedding light on why depression affects women at twice the rate of men.

"Our study provides evidence of an important new family of molecular targets that could help scientists better understand the complex mechanisms leading to depression, particularly in women," says Orna Issler, PhD, a postdoctoral researcher in the Nash Family Department of Neuroscience and The Friedman Brain institute, Icahn School of Medicine at Mount Sinai, and lead author of the study. "These findings into the biological basis of depression could promote the development of more effective pharmacotherapies to address a disease that's the leading cause of disability worldwide."

Past research has shown that about 35 percent of the risk for depression in both sexes can be traced to genetic factors, and the remainder to environmental factors, primarily stress exposure. Long non-coding RNAs fall into a third category: epigenetic factors, which are biological processes that lead to changes in gene expression not caused by changes in the genes themselves. While research focusing on the role of lncRNAs in mood and depression is in its infancy, Mount Sinai has pushed the boundaries of the science by showing the robust regulation of this class of molecules linked to depression in a brain-region and sex-specific manner.

"Our work suggests that the complex primate brain especially uses lncRNAs to facilitate regulation of higher brain function, including mood," explains Dr. Issler, "and that malfunction of these processes can contribute to pathologies like depression and anxiety in a sex-specific manner." Researchers found, for example, that the LINC00473 gene is a female-specific driver of stress resilience that is impaired in female depression. They also learned it is a key regulator of mood in females, in whom it acts on the prefrontal cortex of the brain by regulating gene expression, neurophysiology, and behavior.

To evaluate the contribution of lncRNAs to depression, the Mount Sinai team screened thousands of candidate molecules, and using advanced bioinformatics narrowed the field to LINC00473. Through viral-mediated gene transfer, researchers expressed LINC00473 in adult mouse neurons, and showed that it induced stress resilience solely in female mice. They found that this sex-specific phenotype was accompanied by changes in synaptic function and gene expression selectively in female mice. That discovery, along with studies of human neuron-like cells in culture, led to selection of LINC00473 as the lead candidate. Other genes considered strong candidates are also being actively investigated.

"Our study opens the window to a whole new class of molecular targets that could help explain the mechanisms governing depression susceptibility and resilience, particularly in females," says corresponding author Eric J. Nestler, MD, PhD, Nash Family Professor of Neuroscience at Icahn School of Medicine, Director of The Friedman Brain Institute, and Dean for Academic and Scientific Affairs. "Long non-coding RNAs could guide us toward better, more effective ways to treat depression and, just as importantly, to diagnose this debilitating condition. Much work remains, but we've provided a very promising roadmap to follow moving forward."

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

A biological mechanism for depression

image: This is Mark Rasenick, UIC professor of physiology and psychiatry at the College of Medicine.

Image: 
UIC

University of Illinois at Chicago researchers report that in depressed individuals there are increased amounts of an unmodified structural protein, called tubulin, in lipid rafts -- fatty sections of a cell membrane -- compared with non-depressed individuals.

Their findings are published in the Journal of Neuroscience.

Tubulin is part of a protein complex that provides structure to cells. This complex also is involved in binding a specific protein called Gs alpha, or Gsa, which is a signaling molecule that conveys the action of neurotransmitters like serotonin.

Previous work established that high levels of Gsa were in the lipid rafts of depressed individuals and that Gsa lost effectiveness when in those lipid rafts. Gsa moved out of the lipid rafts after treatment with antidepressants or after using a drug inhibitor for histone deacetylase 6 (HDAC-6) -- an enzyme that removes modifications from tubulin -- suggesting a role of tubulin in depression as well.

Mark Rasenick, UIC professor of physiology and psychiatry at the College of Medicine, and colleagues analyzed post-mortem brain tissue from individuals with and without a diagnosis of depression to determine the location and degree of modified tubulin. A subgroup for depression also included individuals who died by suicide.

"While some studies suggest that there are differences between depressed individuals who died by suicide and those that did not, we observed a profound decrease in the extent of acetylated tubulin in lipid raft membranes from brains of all depressed subjects," said Rasenick, who is also a research career scientist at the Jesse Brown Veteran Affairs Medical Center.

In all groups, there were no significant changes in the amounts of modified alpha tubulin, HDAC-6, or alpha tubulin acetyltransferase 1 -- a protein that adds modifications to tubulin. However, there was less modified tubulin found in the lipid rafts of both depression groups versus the non-depressed group. According to Rasenick, the data suggest that diminished tubulin acetylation may be essential in harnessing Gsa to lipid rafts during depression. Antidepressants reverse this process and allow fuller function of Gsa and the neurotransmitters that activate it.

PAX Neuroscience Inc., a company created by Rasenick, researches how Gsa localization could be a diagnostic biomarker in blood cells to identify individuals with depression. Now the company will explore tubulin-related depression therapeutics and diagnostics. This may add new approaches to therapy and create diagnostic platforms to determine, in just a few days, whether an antidepressant is working.

"This work is important because half of the people who appear to be suffering from depression don't seek treatment," Rasenick said. "This is likely due to the social stigma surrounding people with psychiatric problems. But having an identified biological basis for depression could help people to realize that depression is not 'their failure' and allow them to treat it like any other illness."

Credit: 
University of Illinois Chicago

Scientists uncover major cause of resistance in solid electrolytes

image: Seeing the invisible: An electron hologram of a grain boundary in a lightly doped solid electrolyte sample from which electric potential at the grain boundary can be recovered.

Image: 
Argonne National Laboratory

Solid electrolyte materials consist of hundreds of thousands of small crystalline regions, called grains, with various orientations. The materials, used in fuel cells and batteries, transport ions, or charged atoms, from one electrode to the other. Boundaries between the grains in the materials are known to impede the flow of ions through the electrolyte, but the exact properties that cause this resistance have remained elusive.

Scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory contributed to a recent study led by Northwestern University to investigate grain boundaries in a solid electrolyte material. The study involved two powerful techniques — electron holography and atom probe tomography — that allowed scientists to observe the boundaries at an unprecedentedly small scale. The resulting insights provide new avenues for tuning chemical properties in the material to improve performance.

“When scientists study the conductivity of these electrolytes, they typically measure the average performance of all of the grains and grain boundaries together,” said Charudatta Phatak, a scientist in Argonne’s Materials Science Division (MSD), “but strategically manipulating the material properties requires deep knowledge of the origins of the resistance at the level of individual grain boundaries.”

To explore the grain boundaries, the scientists performed electron holography of a common solid electrolyte at Argonne’s Center for Nanoscale Materials (CNM), a DOE Office of Science User Facility. In this process, a beam of electrons hits a thin sample of the material and experiences a phase shift due to the presence of a local electric field in and around it. An external electric field then causes a portion of the electrons passing through the sample to be deflected, creating an interference pattern.
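
The quantitative basis of this measurement is the standard phase-object relation of electron holography (a textbook result rather than anything specific to this study): for a non-magnetic sample, the phase shift accumulated by the electron wave is proportional to the electrostatic potential projected along the beam direction,

$$\varphi(x, y) \;=\; C_E \int V(x, y, z)\,\mathrm{d}z,$$

where $C_E$ is an interaction constant fixed by the beam energy. Reconstructing $\varphi$ from the interference pattern therefore recovers the projected potential, and from it the electric field, at each grain boundary.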

The scientists analyzed these interference patterns, created on the same principles as holograms in optical physics, to determine the electric field inside the material at the grain boundaries. They measured the local electric fields at ten types of grain boundaries with different degrees of misorientation.

Before this study, scientists thought that resistance at grain boundaries arose due to internal thermodynamic effects alone, such as the limit on the buildup of charge in an area. However, the large and varied electric fields they observed indicated the existence of previously undetected impurities in the material that explain the resistance.

“If the resistance was only due to thermodynamic limits, we should have seen the same fields across different boundary types,” said Phatak, “but since we saw differences of almost an order of magnitude, there had to be another explanation.”

To further study the trace impurities, the scientists used the Northwestern University Center for Atom Probe Tomography (NUCAPT) to determine the chemical identity of individual atoms at the grain boundaries. The electrolyte material in the study, made of ceria and often used in solid oxide fuel cells, was thought to be almost completely pure, but the tomography revealed the existence of impurities including silicon and aluminum — produced during material synthesis.

“On the one hand, it shows that if you make your materials cleaner, you can lessen these interfacial problems with electrolytes,” said Sossina Haile, Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. “Realistically though, you can’t make a sample at an industrial scale cleaner than what we had prepared.”

These inherent impurities are configured at the grain boundaries in a way that causes the electric fields across the boundaries to resist the flow of ions. The footprints that the impurities leave on the overall resistance of the electrolyte closely resemble what scientists would expect from thermodynamic effects alone. Understanding the true cause of the resistance — the impurities — can help the scientists to correct for it.

“Based on our findings, we can intentionally insert elements into the material that negate the effects of the impurities, lowering the resistance at the grain boundaries,” said Phatak.

Funding for the study, in part, came from a Northwestern-Argonne Early Career Investigator Award for Energy Research awarded to Phatak. The program, which was matched by funds from the Institute of Sustainable Energy at Northwestern, fostered a collaboration between Phatak and Haile and supported Northwestern graduate student Xin Xu, first author on the study.

The use of these two techniques enabled scientists to visualize the systems in 3D and to resolve confusion surrounding the properties of grain boundaries and how they affect resistance in this electrolyte. The new information could help scientists to increase the efficiency of solid electrolytes in general, which could help to improve the performance of many types of sustainable and renewable energy sources.

“If ions can move across the interfaces of these solid-state electrolytes more effectively, batteries will become much more efficient,” Haile said. “The same is true of fuel cells, which is closer to the material system we studied. There’s a potential to really impact fuel efficiency by making it easier to operate at temperatures that aren’t extremely high.”

The study, titled “Variability and origins of grain boundary electric potential detected by electron holography and atom-probe tomography,” was published on April 13 in Nature Materials.

Credit: 
DOE/Argonne National Laboratory

The best things come in small packages

image: Diagram of CO2 emissions and transport between city and suburban areas and vertical observations of CO2 using the low-cost sensor and tethered balloon.

Image: 
Pengfei Han

Carbon dioxide is the most important greenhouse gas in the atmosphere and its vertical concentration gradient is important for an accurate understanding and interpretation of global warming, the inversion of carbon sources and sinks, the calibration and validation of atmospheric transport models, and remote sensing measurements.

Because of their large size, the heavy mass of their supporting systems and their high price, conventional high-precision carbon dioxide measurement instruments are difficult to apply to vertical observation, so in situ measurements of carbon dioxide vertical profiles within the boundary layer are rare.

After years of effort, the Carbon Cycle and Climate Change Group from the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences, and Prof. Ning Zeng from the University of Maryland, USA, have successfully developed a low-cost miniaturized carbon dioxide monitoring instrument based on non-dispersive infrared technology. The small size and light weight of the instrument make it easy to mount on a sounding device for vertical observation of carbon dioxide concentrations (Figure 1).
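
Non-dispersive infrared sensing generally infers the CO2 amount from the attenuation of infrared light in a gas cell via the Beer-Lambert law; this is the generic operating principle, not the calibration details of this particular instrument:

$$I \;=\; I_0\, e^{-\varepsilon c L},$$

where $I_0$ and $I$ are the intensities entering and leaving a cell of optical path length $L$, $\varepsilon$ is the absorption coefficient of CO2 at the filtered wavelength (near the strong 4.26 μm absorption band) and $c$ is the CO2 concentration, recovered from the measured ratio $I/I_0$.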

During 8-14 January 2019, the research team loaded the instrument onto a vertical detection system of the tethered balloon from the Sub-Center of Atmospheric Sciences, Chinese Ecosystem Research Network, IAP and successfully conducted in situ observations of the carbon dioxide vertical distribution in the boundary layer (0-1000 m) in the southwestern part of Shijiazhuang City, Hebei Province, China. Dr. Yinghong Wang, a researcher from Prof. Yuesi Wang's group, collected gas samples of different heights and measured them precisely with a gas chromatograph in the laboratory.

From the data analysis, the miniaturized instrument produced carbon dioxide vertical profiles highly consistent with those obtained by conventional gas chromatography analyzers. "After one week of continuous observations, we found that the carbon dioxide concentration basically tends to decrease with increasing altitude and that the vertical distribution of carbon dioxide concentration in the boundary layer is mainly influenced by the stability of the boundary layer and emission sources, and these results were further confirmed by atmospheric model simulations", explains the corresponding author, Dr. Pengfei Han, from the IAP. All these results have recently been published in Atmospheric and Oceanic Science Letters.

The miniaturized instrument provides a simple and efficient method for vertical carbon dioxide observation, which is promising for applications and has high replication value. "We set up a high-density observation network in the Beijing, Tianjin, and Hebei regions by using the advantages of such a low-cost and high-precision instrument, and observed the temporal and spatial characteristics of urban carbon dioxide changes at the kilometer scale, which will provide scientific data for policymaking with respect to carbon emissions reduction and low carbon development in China", concludes Dr. Han.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Yale-NUS College scientists find bisulphates that curb efficacy of diesel engine catalysts

A team of researchers from Yale-NUS College, in collaboration with scientists in Sweden, has found that bisulphate species in the exhaust stream are strongly connected to decreasing the effectiveness of exhaust remediation catalysts in diesel engines. Their findings pave the way for synthesising more sulphur-tolerant catalysts and developing regeneration strategies for catalyst systems on diesel-powered freight vehicles. This could lead to lower emission of highly toxic nitrogen oxides from diesel engines, hence reducing pollution.

Yale-NUS College postdoctoral fellows Susanna Liljegren Bergman and Vitaly Mesilov, undergraduate researcher Xiao Yang (Class of 2021), and Professor of Science (Chemistry) Steven Bernasek, carried out this research. They worked with collaborators Sandra Dahlin and Professor Lars Pettersson in Sweden, and Dr Xi Shibo at the Singapore Synchrotron Light Source of the National University of Singapore. They utilised in-situ temperature-dependent Cu K-edge X-ray absorption spectroscopy to analyse exactly how sulphur oxides affect copper-exchanged chabazite framework (Cu-CHA) catalysts.

Catalysts composed of copper-exchanged zeolites with a chabazite framework (Cu-CHA) are currently the most efficient means to lower the emission of highly toxic nitrogen oxides from diesel engines. However, earlier studies showed that Cu-CHA catalysts' efficacy is reduced by sulphur oxides that are also present in diesel exhaust, which poses a problem as the catalysts become less effective at preventing nitrogen oxides from escaping into the atmosphere. In this study, the researchers found that the effectiveness of catalysts in diesel engines is most impacted by the presence or formation of bisulphates in the exhaust stream. Understanding the chemical mechanism of how catalysts in diesel engines are affected by sulphur oxides present in diesel exhaust would enable the development of more effective catalysts that could reduce the emission of nitrogen oxides from diesel engines.

With greater insight into the way sulphates affect catalysts, future work can be done to investigate how the negative effects can be mitigated. Additionally, the findings regarding sulphates may also be applied to other studies on the impact of phosphorus and phosphorus oxides, present in biodiesel fuel, on catalyst performance. This could lead to the creation of more effective catalysts for biodiesel-powered engines.

Prof Bernasek said, "The results of this fundamental research into the mechanisms of catalyst deactivation provide the basis for developing new catalysts and new catalyst regeneration protocols. More efficient and robust exhaust remediation catalysts benefit the environment by reducing the emission of nitrogen oxides and enabling the use of more efficient engines, cutting overall carbon emission. This helps to reduce the impact of the continued short-term use of fossil fuels, and speed our transition to carbon neutral biofuels."

Credit: 
Yale-NUS College

Early screening based on family history may have dramatic effects on colorectal cancer detection

In an analysis that included information on adults diagnosed with colorectal cancer between 40 and 49 years of age, almost all patients could have been diagnosed earlier if they had been screened according to current family history-based screening guidelines. The findings are published early online in CANCER, a peer-reviewed journal of the American Cancer Society (ACS).

In many countries, colorectal cancer rates are rising in adults under 50 years of age. To identify those at risk, current guidelines recommend early screening for colorectal cancer among individuals with a family history of the disease. For example, for individuals with a first-degree relative with colorectal cancer, several medical societies recommend initiating screening at 40 years of age or 10 years prior to the age at diagnosis of the youngest relative diagnosed with colorectal cancer.
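
As a concrete illustration of that rule, here is a minimal sketch in Python (a hypothetical helper for exposition, not a clinical tool; it interprets "or" as whichever age comes first, and individual society guidelines may differ):

```python
def screening_start_age(relatives_ages_at_diagnosis):
    """Recommended age to begin colorectal cancer screening for a person
    with one or more first-degree relatives diagnosed with the disease.

    Implements the rule described above: begin at age 40, or 10 years
    before the youngest affected relative's age at diagnosis,
    whichever comes first.
    """
    if not relatives_ages_at_diagnosis:
        raise ValueError("requires at least one affected first-degree relative")
    return min(40, min(relatives_ages_at_diagnosis) - 10)

# Example: youngest affected relative diagnosed at 45 -> screen from age 35.
print(screening_start_age([45, 62]))  # 35
```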

To estimate the potential impact of family history-based guidelines for screening, Samir Gupta, MD, of the VA San Diego Healthcare System and the University of California San Diego, and his colleagues examined information on individuals 40 to 49 years of age--2,473 with colorectal cancer and 772 without--in the Colon Cancer Family Registry from 1998 to 2007. (The Colon Cancer Family Registry contains information and specimens contributed by more than 15,000 families around the world and across the spectrum of risk for colorectal cancer).

The investigators found that 25 percent of individuals with colorectal cancer and 10 percent of those without cancer met the criteria for family history-based early screening. Almost all (98.4 percent) patients with colorectal cancer who met these criteria should have been screened at a younger age than when their cancer was diagnosed. Therefore, they could have had their cancer diagnosed earlier, or possibly even prevented, if earlier screening had been implemented based on family history-based guidelines.

"Our findings suggest that using family history-based criteria to identify individuals for earlier screening is justified and has promise for helping to identify individuals at risk for young-onset colorectal cancer," said Dr. Gupta. "We have an opportunity to improve early detection and prevention of colorectal cancer under age 50 if patients more consistently collect and share their family history of colorectal cancer, and healthcare providers more consistently elicit and act on family history."

Credit: 
Wiley

Diamonds shine in energy storage solution

image: QUT researchers have proposed the design of a new carbon nanostructure made from diamond nanothreads that could one day be used for mechanical energy storage, wearable technologies, and biomedical applications.

Image: 
QUT

QUT researchers have proposed the design of a new carbon nanostructure made from diamond nanothreads that could one day be used for mechanical energy storage, wearable technologies, and biomedical applications.

Dr Haifei Zhan, from the QUT Centre for Materials Science, and his colleagues successfully modelled the mechanical energy storage and release capabilities of a diamond nanothread (DNT) bundle - a collection of ultrathin one-dimensional carbon threads that store energy when twisted or stretched.

"Similar to a compressed coil or children's wind-up toy, energy can be released as the twisted bundle unravels," Dr Zhan said.

"If you can make a system to control the power supplied by the nanothread bundle it would be a safer and more stable energy storage solution for many applications."

The new carbon structure could be a potential micro-scale power supply for anything from implanted biomedical sensing systems monitoring heart and brain functions, to small robotics and electronics.

"Unlike chemical storage such as lithium ion batteries, which use electro-chemical reactions to store and release energy, a mechanical energy system itself would carry much lower risk by comparison," Dr Zhan said.

"At high temperatures chemical storage systems can explode or can become non-responsive at low temperatures. These can also leak upon failure, causing chemical pollution.

"Mechanical energy storage systems don't have these risks so make them more suited to potential applications within the human body.

"Carbon nanothread bundles could be made into twist-spun yarn-based artificial muscles that respond to electrical, chemical or photonic excitations.

"Previous research has shown such a structure made with carbon nanotubes could lift 50,000 times its own weight."

Dr Zhan's team found the nanothread bundle's energy density - how much energy it could store for its mass - was 1.76 MJ per kilogram, four to five orders of magnitude higher than that of a conventional steel spring and up to three times that of lithium-ion batteries.
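
As a rough check on those comparisons, using typical literature values rather than figures from the study (a steel spring stores on the order of $10^{-4}$ MJ/kg; lithium-ion batteries roughly 0.5-0.9 MJ/kg):

$$\frac{1.76\ \mathrm{MJ/kg}}{\sim 10^{-4}\ \mathrm{MJ/kg}} \approx 10^{4} \quad \text{(steel spring)}, \qquad \frac{1.76\ \mathrm{MJ/kg}}{0.5\text{-}0.9\ \mathrm{MJ/kg}} \approx 2\text{-}3.5 \quad \text{(Li-ion)}.$$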

"Energy dense materials are very important to many applications, which is why we are always looking for lightweight materials that still perform well.

"The benefits for aerospace applications are obvious. If we can reduce the weight of a system, we can significantly reduce its fuel requirements and costs."

The applications of carbon nanothread bundles as an energy source could be endless, according to Dr Zhan.

"The nanothread bundles could be used in next-generation power transmission lines, aerospace electronics, and field emission, batteries, intelligent textiles and structural composites such as building materials.

Research findings were published by Nature Communications in the paper: 'Ultra-high Density Mechanical Energy Storage with Carbon Nanothread Bundle', and form the basis of Dr Zhan's ARC Discovery project - 'A Novel Multilevel Modelling Framework to Design Diamond Nanothread Bundles'.

Dr Zhan and his team are now planning production of an experimental nanoscale mechanical energy system as proof of concept.

Dr Zhan said the research team would spend the next two to three years building the control mechanism for the system to store energy - the system which controls twisting and stretching of the nanothread bundle.

Credit: 
Queensland University of Technology

Researchers delay onset of amyotrophic lateral sclerosis (ALS) in laboratory models

image: Chemogenetics rescues the health of the molecular network (pink) that surrounds neurons (green) in laboratory models. Scientists used the technique to delay the onset of ALS symptoms.

Image: 
Courtesy of University of Toronto

TORONTO, ON - A team of researchers led by scientists at the University of Toronto (U of T) has delayed the onset of amyotrophic lateral sclerosis (ALS) in mice. They are cautiously optimistic that the result, combined with other clinical advances, points to a potential treatment for ALS in humans.

Commonly known as Lou Gehrig's disease, ALS is caused by the degeneration and loss of neurons that control muscles. There is no cure for ALS, which currently affects between 2,500 and 3,000 Canadians.

"Our experiment profoundly delayed the disease by preventing the degeneration of neurons in the cortex of the brain," says Melanie Woodin, a professor in the Department of Cell & Systems Biology (CSB) at U of T and a co-author of a study published recently in Brain.

"It delayed typical symptoms of ALS like the deterioration of motor skills and weight loss. It also increased the survival rate."

The result was achieved in mice that possessed the same gene mutation (SOD1) found in some human ALS patients. The researchers targeted neurons in the motor cortex -- the region of the brain that controls muscles -- with an engineered protein designed to correct an imbalance in neurons referred to as hyperexcitability.

"Neurons communicate with each other through synaptic transmission, which involves both the release of chemical neurotransmitters and electrical activity" explains Woodin. "This communication can be either excitatory or inhibitory. Excitation is like the gas pedal in your car and inhibition is the brake pedal. Too much gas and you'll speed off the road; too much brake and you don't go anywhere. So, to drive properly, you need a balance between the two."

In a healthy brain, a balance between excitation and inhibition ensures proper brain function -- enabling us to solve math problems, retrieve memories and feel emotion. But too much excitation in the brain's neurons can lead to neurological disorders like seizures, epilepsy, neuropathic pain, autism spectrum disorders, schizophrenia and ALS.

While human SOD1 gene mutation carriers display pronounced cortical hyperexcitability in the decade prior to the onset of ALS, it wasn't clear whether this hyperexcitability was a cause of neuronal degeneration. "We knew before that there was a very profound imbalance between excitation and inhibition in the region of the brain that controls movement," says Woodin. "But that didn't tell us whether this hyperexcitability caused the onset of symptoms."

"Now we know," says Woodin. "That in ALS mice with the SOD1 mutation, hyperexcitability in the motor cortex is causal to the onset of the disease."

A path to a potential treatment in humans

"The result is important because it points down a path for a potential treatment in humans," says Woodin, who is also the dean of U of T's Faculty of Arts & Science.

The optimism that the result could eventually lead to a treatment in humans is bolstered by the fact that it comprises advances which have yet to be used together but that are proven on their own.

Woodin and her colleagues are combining advances in viral technology with a revolutionary technique in neuroscience called chemogenetics. Proteins that had their structure altered were introduced into mice via a virus and delivered to neurons in the primary motor cortex.

Once there, they were activated with a pharmaceutical drug -- but one which isn't approved for use in humans. However, other scientists demonstrated that a drug called clozapine, which is approved for use in humans for the treatment of certain psychiatric disorders, could also activate the protein.

"The clozapine discovery was a game-changer for our work," says Woodin. "It revealed a clear path for clinical translation which just wasn't there when we first developed our hypothesis."

And while chemogenetics was employed in the current study, it isn't currently used in human patients in part because of the challenge in delivering the chemogenetic "tool" to the right neurons. But an innovation being pioneered for human use by Dr. Lorne Zinman and Dr. Agessandro Abrahao offers a promising alternative.

Zinman and Abrahao are testing a non-invasive procedure to deliver therapeutic agents to the motor cortex of ALS patients. The brain is protected by a natural barrier that keeps out pathogens like bacteria and viruses -- but that also keeps out therapeutics like drugs and proteins. With the new technique, the blood brain barrier can be temporarily and safely opened to deliver a protein to targeted regions of the brain.

Zinman, a co-author on the paper, runs the ALS clinic at Sunnybrook Health Sciences Centre and is an associate professor at the University of Toronto. Abrahao is an assistant professor in the Department of Medicine at U of T and an associate scientist at Sunnybrook.

"This advancement in decreasing cortical hyperexcitability has the potential to have a major impact on treating ALS in humans," says Zinman. "Much more work is needed but this advance shows great promise toward a path to stopping this disease."

According to Dr. David Taylor, vice president of research at ALS Canada, "Despite the fact that both upper motor neurons in the cortex and lower motor neurons in the body are degenerating in ALS, much of the research to date has ignored the role of upper motor neurons."

"Excessive activity of the upper motor neurons could be an important contributor to the disease and Professor Woodin's work focused on a novel way to stimulate neighbouring neurons that can put the brakes on this abnormal biology," says Taylor. "Her results in ALS model mice are exciting and hopefully this can someday be a treatment strategy tested in human clinical trials."

Credit: 
University of Toronto

First official ATS practice guidelines for Sarcoidosis cover diagnosis and detection

image: First ATS practice guidelines on sarcoidosis.

Image: 
ATS

April 20, 2020--New guidance is available for physicians who must go through a number of steps to provide a probable diagnosis of sarcoidosis - an inflammatory disease that affects the lungs, lymph glands, and other organs. The American Thoracic Society has published an official clinical practice guideline in which a panel of experts strongly recommended a baseline serum test to screen for hypercalcemia, a potentially serious disease manifestation, along with 13 conditional recommendations and a best practice statement to improve diagnosis and detection of sarcoidosis in vital organs. The complete guideline detailing these recommendations was posted online ahead of print in the American Journal of Respiratory and Critical Care Medicine.

"There are no universally accepted measures to determine whether each diagnostic criterion has been satisfied," said Elliott D. Crouser, MD, co-chair of the guideline committee and professor of pulmonary, critical care & sleep medicine, The Ohio State University Wexner Medical Center.

"Therefore, the diagnosis of sarcoidosis is never fully certain."

The diagnosis of sarcoidosis is not standardized, but based on three major criteria: a compatible clinical presentation, finding non-necrotizing granulomatous inflammation in one or more tissue samples, and the exclusion of alternative causes of granulomatous disease. In this new clinical practice guideline, an expert panel conducted systematic reviews and meta-analyses to summarize the best available evidence.

The multidisciplinary panel appraised this evidence using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach, and discussed their findings. They then formulated and graded recommendations for or against various diagnostic tests after weighing desirable and undesirable consequences, certainty of estimates, feasibility and acceptability. The following recommendations were agreed upon for each of the most significant clinical considerations:

Lymph node sampling

1. In patients for whom there is a high clinical suspicion for sarcoidosis (e.g., Löfgren's syndrome, lupus pernio, or Heerfordt's syndrome), we suggest NOT sampling lymph nodes (conditional recommendation, very low-quality evidence).
2. For patients presenting with asymptomatic bilateral hilar lymphadenopathy, we make no recommendation for or against obtaining a lymph node sample.
3. For patients with suspected sarcoidosis and mediastinal and/or hilar lymphadenopathy for whom it has been determined that tissue sampling is necessary, we suggest EBUS-guided lymph node sampling, rather than mediastinoscopy, as the initial mediastinal and/or hilar lymph node sampling procedure (conditional recommendation, very low-quality evidence).

Screening for extra-pulmonary disease

1. For patients with sarcoidosis who do not have ocular symptoms, we suggest a baseline eye examination to screen for ocular sarcoidosis (conditional recommendation, very low-quality evidence).
2. For patients with sarcoidosis who have neither renal symptoms nor established renal sarcoidosis, we suggest baseline serum creatinine testing to screen for renal sarcoidosis (conditional recommendation, very low-quality evidence).
3. For sarcoidosis patients who have neither hepatic symptoms nor established hepatic sarcoidosis, we suggest baseline serum alkaline phosphatase testing to screen for hepatic sarcoidosis (conditional recommendation, very low-quality evidence).

*(This is a partial list. You may read the full recommendations online.)

Diagnostic evaluation of suspected extra-pulmonary disease

1. For patients with extra-cardiac sarcoidosis and suspected cardiac involvement, we suggest cardiac magnetic resonance imaging, rather than positron emission tomography or transthoracic echocardiography, to obtain both diagnostic and prognostic information (conditional recommendation, very low-quality evidence).
2. For patients with extra-cardiac sarcoidosis and suspected cardiac involvement who are being managed in a setting in which cardiac magnetic resonance imaging is not available, we suggest dedicated positron emission tomography, rather than transthoracic echocardiography, to obtain diagnostic and prognostic information (conditional recommendation, very low-quality evidence).
3. For patients with sarcoidosis in whom pulmonary hypertension is suspected, we suggest initial testing with transthoracic echocardiography (conditional recommendation, very low-quality evidence).

*(This is a partial list. You may read the full recommendations online.)

This guideline was developed by an ad hoc committee of experts from the American Thoracic Society with guidance from experienced methodologists to objectively identify and summarize the best available evidence on the diagnosis of sarcoidosis.

Dr. Crouser noted, "The quality of evidence was poor in most cases, reflecting the need for additional high quality research to guide clinical practice."

Credit: 
American Thoracic Society

Cholera studies reveal mechanisms of biofilm formation and hyperinfectivity

image: This image shows Vibrio cholerae cells (yellow) colonizing the intestinal villi (blue) in mouse intestine. Patterns of colonization are different for cholera cells in biofilms versus free-floating cells.

Image: 
Jin Hwan Park

Free-swimming cholera bacteria are much less infectious than bacteria in biofilms, aggregates of bacterial cells embedded in a sticky matrix that form on surfaces. This accounts for the surprising effectiveness of filtering water through cloth, such as a folded sari, which can reduce infections dramatically in places where the disease is endemic, despite the fact that individual cholera bacteria easily pass through such a filter.

A new study led by researchers at UC Santa Cruz goes a long way toward explaining the hyperinfectivity of cholera biofilms. The study, published the week of April 20 in the Proceedings of the National Academy of Sciences (PNAS), is one of several new papers on cholera biofilms from the laboratory of UCSC microbiologist Fitnat Yildiz.

"We've been working on this for so long, it is a significant body of work that is now being published, focusing on the mechanisms of biofilm formation and what makes the biofilm more infectious," said Yildiz, a professor of microbiology and environmental toxicology.

Biofilms are important not only in causing infections, but also in the survival of cholera bacteria (Vibrio cholerae) in the environment. In regions where cholera is endemic, the bacteria live in aquatic environments, typically in brackish water, causing periodic, seasonal outbreaks when sources of drinking water become contaminated.

A surprising finding in the PNAS paper is that bacteria growing in biofilms have already activated the genes for virulence factors such as toxin production, before they have even infected a host.

"Two of the main virulence factors are the toxin co-regulated pilus, which allows the bacteria to adhere to the intestine, and the cholera toxin which enters intestinal cells and makes people really sick," said Jennifer Teschler, a postdoctoral researcher in the Yildiz lab and a co-first author of the paper. "These virulence factors are more highly expressed in biofilm cells, so they are already primed for causing infections."

The study also showed differences in the colonization patterns of free-swimming ("planktonic") and biofilm-grown cholera cells in the intestines of infected mice. The researchers used a new imaging technique to make intestinal tissue transparent while preserving the spatial integrity of the infected intestines. This enabled them to see where the cholera bacteria had adhered to the villi, the finger-like projections that line the small intestine.

"Being able to see where the infections are in three dimensions is an important tool for studying intestinal pathogens," Teschler said. "In mice infected with planktonic cells, the cells were typically at the bottom of the villi, whereas biofilm cells attached at the top of the villi, closer to the lumen. We speculate that biofilm cells adhere more strongly to the villi, so they are better able to resist being swept away by the flow in the lumen of the intestine."

Two other papers, published March 25 in Nature Communications and March 16 in PLOS Genetics, focus on how free-swimming cholera bacteria attach to surfaces and initiate biofilm formation.

"The bacterium has to attach to a surface, stop swimming, and start building a matrix," Yildiz said. "Understanding the mechanisms involved in biofilm formation, as well as the role of biofilms in the overall biology of Vibrio cholerae, will pave the way for developing strategies to predict and control cholera epidemics. It may also help in identification of novel drug targets for inhibiting biofilm formation during infection."

The Nature Communications paper explores the cellular signaling pathways that control the attachment process through the regulation of hair-like appendages called pili that grow out from the cell surface.

"Attachment is the initiating step of biofilm formation," explained first author Kyle Floyd, a postdoctoral researcher in the Yildiz lab. "As a swimming cell nears a surface, the pilus will bind to the surface, and retraction of the pilus helps pull the cell closer to the surface. The cell then makes more pili to anchor it down to the surface."

There are different classes and subclasses of bacterial pili, and the one required for biofilm formation in many Vibrio cholerae strains (the type IV MSHA pilus) is regulated by a signaling molecule called c-di-GMP. The new study showed that the MSHA pilus is a dynamic system that extends and retracts and is directly controlled by c-di-GMP. The study showed how pilus activity is modulated by the interactions of c-di-GMP with other components of the pilus system.

The PLOS Genetics paper further elucidates the c-di-GMP signaling pathways that promote biofilm formation. In particular, the study looked at the role of the flagellum, a whip-like appendage the bacteria use to swim, in c-di-GMP signaling. The researchers found that loss of the flagellum leads to elevated levels of c-di-GMP in the cell and increased expression of biofilm genes.

"It required powerful and elegant genetics to work out the connections between flagellum assembly, production of pili on the cell surface, biofilm matrix production, and c-di-GMP signaling," Yildiz said. "There are different steps where this signaling molecule can control the transition to biofilm formation."

Credit: 
University of California - Santa Cruz

Chocolate 'fingerprints' could confirm label claims

WASHINGTON, April 20, 2020 -- The flavor and aroma of a fine chocolate emerge from its ecology, in addition to its processing. But can you be certain that the bar you bought is really from the exotic locale stated on the wrapper? Now, researchers are presenting a method for determining where a particular chocolate was produced -- and someday, which farm its beans came from -- by looking at its chemical "fingerprint."

The researchers are presenting their results through the American Chemical Society (ACS) SciMeetings online platform.

"The project originated out of an idea I had for a lab in one of the courses I teach," says Shannon Stitzel, Ph.D., the project's principal investigator. "The method we used to analyze chocolate bars from a grocery store worked well in the class, and the exercise piqued the students' curiosity. So, I started reaching out for more interesting samples and tweaking the technique."

Many factors can affect a chocolate's flavor and contribute to its unique chemical make-up. The process of making chocolate begins with the pods of the cacao tree. The genes of the tree the pods are harvested from, as well as the environment the tree is grown in, can affect the composition of the final product. Processing steps can also change a chocolate's complex chemistry. Generally, after cocoa beans obtained from the pods have been fermented, dried and roasted, they are ground into a paste, called cocoa liquor, which contains cocoa solids and cocoa butter. Sugar and other ingredients are added to the liquor to make chocolate. Any of these steps could be varied slightly by the company performing them, leading to differences in chocolate composition. Even more variation between chocolates from different regions can come from naturally occurring yeast in the pods that surround the beans, which can affect the fermentation process, thereby influencing the flavor compounds in chocolate.

Authenticating a chocolate's country of origin is important. Drilling down to the specific farm where a chocolate's beans were grown could help verify that the product is "fair trade" or "organic," as its label might suggest, or that it has not been adulterated along the way with inferior ingredients.

Early on, Stitzel's experiments at Towson University involved a well-known method for geographic determination. She used elemental analysis, which has been used to identify the source of a myriad of unknown materials. However, Stitzel wanted to go further and analyze the organic compounds in cocoa liquor to see if any of them remained after various processing steps. If so, they could be used as markers for more precise authentication testing.

Through a friend in the industry, Stitzel acquired single-source samples of cocoa liquor from all over the world. Her undergraduate student, Gabrielle Lembo, used liquid chromatography (LC) to separate the cocoa liquor compounds from various samples and mass spectrometry (MS) to identify their chemical signatures. Lembo's results showed that LC-MS is a robust analysis technique. Compounds such as caffeine, theobromine and catechins are detected in different patterns that make up a signature fingerprint. This fingerprint indicates provenance and cannot easily be finagled by nefarious producers.
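
A minimal sketch of how such fingerprints could be compared computationally, assuming hypothetical peak intensities for a few marker compounds (the release does not describe the actual classification pipeline, and these reference values and origins are invented for illustration):

```python
import numpy as np

# Hypothetical LC-MS peak intensities for [caffeine, theobromine, catechin],
# averaged over reference cocoa liquors of known origin (made-up numbers).
reference_fingerprints = {
    "Ghana":   np.array([0.8, 3.1, 1.2]),
    "Ecuador": np.array([0.5, 2.4, 2.0]),
    "Vietnam": np.array([1.1, 2.9, 0.7]),
}

def unit(v):
    """Scale an intensity vector to unit length so only the pattern matters."""
    return v / np.linalg.norm(v)

def classify(sample):
    """Assign a sample to the origin whose normalized reference fingerprint
    is nearest (a simple nearest-centroid rule)."""
    return min(
        reference_fingerprints,
        key=lambda origin: np.linalg.norm(unit(sample) - unit(reference_fingerprints[origin])),
    )

print(classify(np.array([0.6, 2.5, 1.9])))  # -> "Ecuador" for these toy values
```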

Stitzel says that eventually their method could be used to help map out the expected flavor profiles of a chocolate, given its chemical signature. And she says it would be interesting to first determine the fingerprint of a cocoa bean, then gather fingerprints with each consecutive processing step to see how they change. For now, her students are expanding the application of the analysis method by looking at the chemical signatures of various forms of fair-trade and organic coffee.

Credit: 
American Chemical Society

Faster-degrading plastic could promise cleaner seas

ITHACA, N.Y. -- To address the plastic pollution plaguing the world's seas and waterways, Cornell University chemists have developed a new polymer that degrades when exposed to ultraviolet radiation, according to research published in the Journal of the American Chemical Society.

"We have created a new plastic that has the mechanical properties required by commercial fishing gear. If it eventually gets lost in the aquatic environment, this material can degrade on a realistic time scale," said lead researcher Bryce Lipinski, a doctoral candidate in the laboratory of Geoff Coates, professor of chemistry and chemical biology at Cornell University. "This material could reduce persistent plastic accumulation in the environment."

Commercial fishing contributes to about half of all floating plastic waste that ends up in the oceans, Lipinski said. Fishing nets and ropes are primarily made from three kinds of polymers: isotactic polypropylene, high-density polyethylene, and nylon-6,6, none of which readily degrade.

"While research of degradable plastics has received much attention in recent years," he said, "obtaining a material with the mechanical strength comparable to commercial plastic remains a difficult challenge."

Coates and his research team have spent the past 15 years developing this plastic, called isotactic polypropylene oxide, or iPPO. Although the material was first discovered in 1949, its mechanical strength and photodegradation were unknown before this recent work. The high isotacticity (enchainment regularity) and polymer chain length of their material make it distinct from its historic predecessor and provide its mechanical strength.

Lipinski noted that while iPPO is stable in ordinary use, it eventually breaks down when exposed to UV light. The change in the plastic's composition is evident in the laboratory, but "visually, it may not appear to have changed much during the process," he said.

The rate of degradation depends on light intensity, but under their laboratory conditions, he said, the polymer chains degraded to a quarter of their original length after 30 days of exposure.
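The release gives only that single data point, but a back-of-the-envelope sketch with a textbook random-scission model, in which the reciprocal of the average chain length grows linearly with exposure time, shows what it could imply. The initial chain length and the model itself are assumptions for illustration, not results from the paper.

```python
# Hypothetical random-chain-scission model fitted to the one reported
# data point: chain length falls to 1/4 of its original value after
# 30 days of UV exposure. Under random scission, reciprocal chain
# length grows linearly with time:  1/DP(t) = 1/DP0 + k*t
DP0 = 1000            # assumed initial degree of polymerization (illustrative)
t_obs, frac = 30.0, 0.25   # reported: 1/4 of original length at 30 days

# Solve for the scission rate constant from the observed point.
k = (1.0 / (frac * DP0) - 1.0 / DP0) / t_obs

def chain_length(t_days):
    return 1.0 / (1.0 / DP0 + k * t_days)

for t in (0, 30, 60, 90):
    print(f"day {t:3d}: DP ~ {chain_length(t):.0f}")
# -> day 0: 1000, day 30: 250, day 60: ~143, day 90: 100
```

One consequence of this kind of kinetics is that the steepest drop in chain length comes early, with further shortening arriving more slowly.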

Ultimately, Lipinski and other scientists want to leave no trace of the polymer in the environment. He notes there is literature precedent for the biodegradation of short iPPO chains, which could effectively make the material disappear, but ongoing efforts aim to confirm this.

Credit: 
Cornell University

Arctic research expedition likely faces extreme conditions in fast-changing Arctic

In October 2019, scientists trapped a ship filled with equipment in Arctic sea ice with the intention of drifting around the Arctic Ocean for a full year, gathering data on the polar region and its sea ice floes. However, a new study indicates there is a chance the ice carrying the ship may melt out months before the year-end goal.

The MOSAiC (Multidisciplinary drifting Observatory for the Study of Arctic Climate) research team went through extensive preparation and training for the expedition, including analyzing historic conditions. The new research shows, however, that Arctic conditions have been changing so rapidly that the past may no longer be a guide to today.

Scientists at the National Center for Atmospheric Research (NCAR) have used an ensemble of multiple climate model runs to simulate conditions along potential routes for the polar expedition, using today's conditions in the "new Arctic." The results suggest that thinner sea ice may carry the ship farther than would be expected compared to historical conditions and the sea ice around the ship may melt earlier than the 12-month goal. Of the 30 model runs analyzed in the new study, five (17%) showed melt-out in less than a year.

The research, published in the journal The Cryosphere, was funded by the National Science Foundation, which is NCAR's sponsor. The study's co-authors are from the University of Colorado Boulder and the school's Cooperative Institute for Research in Environmental Sciences, as well as Dartmouth College and the University of Alaska Fairbanks.

The ensemble of 30 model runs used current climate conditions and reflected the breadth of ways sea ice could form, drift, and melt in a 2020 climate. The study did not incorporate 2019 ice conditions and is not a forecast of the track the ship will take over its year-long expedition.

"The whole point of MOSAiC is to understand the new Arctic and how things have changed over the last 10 years," said Alice DuVivier, an NCAR climate scientist and lead author of the new study. "This model gives us an understanding of the range of drifting possibilities the expedition could face in the new ice regime."

Scientists have been gathering data on Arctic sea ice extent, which can cover millions of square miles in winter, since 1979 when satellites began capturing annual changes in ice cover. "The changes in the Arctic system are so incredibly rapid that even our satellite observations from 15 years ago are unlike the Arctic today," said Marika Holland, NCAR scientist and co-author of the study. "Now there is thinner ice, which moves more quickly, and there is less snow cover. It is a totally different ice regime."

Going with the floe

To compare the differences between the "old Arctic" and "new Arctic," the scientists created ice floe tracks for the expedition's ship, a German research icebreaker called the Polarstern, using the sea ice model in the NCAR-based Community Earth System Model (CESM). First, the scientists ran 28 model tracks based on historic satellite data of sea ice conditions. Then they compared the results to 30 model tracks run under the young, thin, and more seasonal ice conditions of the recent Arctic.
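The study's CESM runs cannot be reproduced from a press release, but the underlying idea, stepping a floe's position forward through a gridded ice-velocity field, can be sketched. Everything below is a toy stand-in: the velocity function, the starting position and the simple lat-lon stepping are illustrative assumptions, not the study's method.

```python
# Schematic Lagrangian drift track: integrate a floe's position from a
# daily ice-velocity field. The velocity function is a toy stand-in;
# a real analysis would interpolate CESM ice-motion output instead.
import math

def ice_velocity(lon, lat, day):
    # Invented transpolar-drift-like field, in km/day (purely illustrative).
    u = 8.0 + 2.0 * math.sin(2 * math.pi * day / 365.0)   # eastward
    v = 3.0 * math.cos(math.radians(lat))                  # northward
    return u, v

KM_PER_DEG_LAT = 111.0

def integrate_track(lon, lat, n_days):
    track = [(lon, lat)]
    for day in range(n_days):
        u, v = ice_velocity(lon, lat, day)
        lat += v / KM_PER_DEG_LAT
        lon += u / (KM_PER_DEG_LAT * math.cos(math.radians(lat)))
        lon = (lon + 180.0) % 360.0 - 180.0   # keep longitude in [-180, 180)
        track.append((lon, lat))
    return track

# Assumed starting point, loosely in the vicinity of the initial MOSAiC floe.
track = integrate_track(120.0, 85.0, 365)
print(f"end of drift: lon {track[-1][0]:.1f}, lat {track[-1][1]:.1f}")
```

A real analysis would also interpolate the velocities in space and time and handle the map projection carefully near the pole.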

In these "new Arctic" conditions, the ice floes moved more quickly, so their paths extended farther and varied more from each other than the "old Arctic" paths, which reflect thicker sea ice and shorter, slower tracks.

Most notably, the study finds that in the seasonal Arctic scenario, 17% of the simulated tracks show the Polarstern melting out of the ice altogether, months before the October 2020 finish line. The model runs estimate July 29, 2020, as the earliest potential melt date, highlighting that the present-day Arctic has an increased risk of extreme events, such as melt-out. In contrast, none of the tracks run under the historic satellite data showed a melt-out scenario.
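For concreteness, the ensemble statistic can be tallied as in the sketch below, where each member's melt-out date (or survival) is recorded and the early-melt fraction computed. Apart from the July 29, 2020 earliest melt date quoted above, every date, including the assumed end-of-expedition day within October 2020, is invented.

```python
# Illustrative tally of the ensemble melt-out statistic: given a
# hypothetical melt-out date (or None) for each of 30 model runs, count
# how many members lose their ice before the planned end of the drift.
from datetime import date

EXPEDITION_END = date(2020, 10, 4)   # assumed exact day, for illustration

# One entry per ensemble member: date the floe melted out, or None if
# the ice survived the full simulated year (dates invented except the first).
melt_dates = [None] * 25 + [
    date(2020, 7, 29),   # earliest potential melt date reported in the study
    date(2020, 8, 14),
    date(2020, 8, 30),
    date(2020, 9, 12),
    date(2020, 9, 25),
]

early = [d for d in melt_dates if d is not None and d < EXPEDITION_END]
print(f"{len(early)} of {len(melt_dates)} runs melt out early "
      f"({100 * len(early) / len(melt_dates):.0f}%)")   # -> 5 of 30 (17%)
```

With five of 30 members melting out before the finish line, the fraction rounds to the 17% reported in the study.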

While the results provide additional insight into the potential outcome of MOSAiC's path, the model runs are not a forecast of the expedition's track, said DuVivier, who presented the study's results to the MOSAiC science team as they prepared for the campaign. Rather, the results are a way to explore the many scenarios a ship could potentially face over the course of the journey in the current climate. "Modeling is a way to explore many worlds," said DuVivier. "Previous experience isn't always indicative of what is going to happen."

During the first phase of the expedition, in fall 2019, the researchers had already encountered thin ice conditions: they initially struggled to find an ice floe thick enough to anchor the Polarstern, and storms challenged the expedition. According to the National Snow & Ice Data Center, Arctic sea ice in 2019 reached the second-lowest minimum in the satellite record, meaning the expedition began under extremely low ice conditions.

Since then, the ship's current track has been drifting farther than expected. "The model experiments from this study have comparable tracks to what the Polarstern has been experiencing in the last few months," said DuVivier. "We are not making a prediction for the expedition, but those types of tracks melt out early in our climate model."

Time will tell what the Polarstern's ultimate track and destination will be, but the expedition will still provide scientists with a wealth of data. The information will ultimately be hugely beneficial for improving climate models like CESM and helping scientists understand the changes in the Arctic that future polar expeditions may go on to observe.

"This is why we need MOSAiC. Models can inform these kinds of campaigns and these campaigns are going to inform our models," said Holland, who has her own project on the MOSAiC expedition, collecting data about snow on sea ice and upper-ocean heating that affects sea ice thickness.

"We don't have a lot of new observations taken in this new regime, and this will be fundamental to our future understanding of the Arctic," said Holland.

Credit: 
National Center for Atmospheric Research/University Corporation for Atmospheric Research