Earth

Towards climate resilient urban energy systems

image: A general framework for assessing the climate resilience of urban energy systems, illustrating different components that should be considered. Assessing the climate resilience of energy systems in the urban context requires considering a wide range of future climate projections and accounting for different components/levels (e.g. building, urban, generation, demand and supply, centralized and decentralized energy systems) and their complex interactions.

Image: 
©Science China Press

Climate change and growing urban populations are two major concerns for society. Moving towards more sustainable energy solutions in the urban context by integrating renewable energy technologies supports decarbonization of the energy sector and climate change mitigation. However, a successful energy transition is not possible without proper climate change adaptation that accounts for climate uncertainties and extremes. Failure to adapt can lead to irreversible environmental damage and heavy economic losses, so transition pathways must be thoroughly assessed and their risks and uncertainties quantified. In this regard, climate resilience can be considered a critical part of climate change adaptation, mostly addressing extreme events and climate disasters. Climate resilience is an emerging concept that is increasingly used to represent the durability and stable performance of energy systems under extreme climate events. However, it has not yet been adequately explored or widely used, as its definition has not been clearly articulated and its assessment remains largely qualitative.

Nik and colleagues provide an overview of, and insight into, the progress achieved in the energy sector in adapting to climate change, focusing on the climate resilience of urban energy systems. They describe and discuss state-of-the-art methodologies for assessing the impacts of climate change, including extreme events and uncertainties, on the design and performance of energy systems. The study reveals that a major limitation of the state of the art is the inadequacy of current adaptation approaches in designing and preparing urban energy systems to cope with plausible extreme climate events. Impacts of climate change, including extreme conditions and the consequent uncertainties, are mostly neglected in the energy system design phase and only rarely discussed for the operation phase. Most available resilience studies focus on energy supply from traditional, fossil fuel-based sources, and most assessment approaches target spatial scales larger than the urban scale and address only a single aspect. The complexity of climate and energy models, and the mismatch between their temporal and spatial resolutions, are two major reasons why the linkage between them remains weak. Other major limitations include an inadequate representation of probable future conditions, owing to the limited number of future scenarios considered and the neglect of extreme events; methodological shortcomings in energy system design and optimization when addressing urban complexity, climate uncertainty and extreme weather; and the lack of a clear definition and quantification of resilience and of the performance gap. Together, these limitations make it difficult to understand the changes required in the superstructure of urban energy infrastructure, and may lead either to overly conservative limits on renewable energy integration or to cascading failures and blackouts. The consequences are not limited to the energy sector but can propagate to other interconnected infrastructures.

Assessing the climate resilience of urban energy systems is possible by considering a wide range of future climate projections and adopting a suitable framework that accounts for the different components affecting energy flow at the urban scale, from generation to demand. The multivariate and multiscale variations, complex interactions and uncertainties that affect urban energy flows should be considered in the design and operation of energy systems and grids, especially those linked to extreme climate and high-impact events. This requires further development of energy modelling techniques and frameworks to represent climate variations and to quantify energy demand and renewable generation potential at the building, neighborhood, district and urban scales. Moreover, weather data sets that account for plausible extreme events need to be synthesized. This work suggests a general framework for assessing the climate resilience of urban energy systems, while the details and methods adopted can vary depending on the specific needs of a case. In practice, the framework can be connected to weather forecasts and used as an early warning system. Before decisions are made, relevant data from multiple sources need to be gathered and analyzed to create a valid representation of upcoming conditions. Such an analysis is possible through data analytics and an ever-updating model of the urban energy system, or its digital twin.
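
The article describes the framework only conceptually. As a rough illustration of how such a multi-scenario resilience screening might be organized, the sketch below loops over a handful of hypothetical weather scenarios, feeds them through toy demand and generation models, and reports an unmet-demand fraction as a stand-in resilience metric. Every scenario name, model and parameter here is an illustrative assumption, not part of the authors' framework.

```python
# Illustrative sketch only: a toy multi-scenario screening of an urban energy
# system, in the spirit of the framework described above. The scenario set,
# demand/generation models and resilience metric are hypothetical placeholders.
import numpy as np

HOURS = 8760  # one year at hourly resolution

def synthetic_weather(scenario, rng):
    """Hourly temperature (degC) for a named scenario (placeholder model)."""
    base = 12 + 10 * np.sin(2 * np.pi * np.arange(HOURS) / HOURS - np.pi / 2)
    shift = {"typical": 0.0, "extreme_warm": 4.0, "extreme_cold": -4.0}[scenario]
    return base + shift + rng.normal(0, 3, HOURS)

def demand_mw(temp):
    """Very simple heating/cooling demand proxy (placeholder)."""
    return 50 + 2.5 * np.maximum(18 - temp, 0) + 3.0 * np.maximum(temp - 24, 0)

def pv_generation_mw(temp, rng):
    """Crude PV availability proxy; real studies would use irradiance data."""
    clearness = rng.uniform(0.2, 1.0, temp.size)
    diurnal = np.clip(np.sin(2 * np.pi * (np.arange(temp.size) % 24) / 24), 0, None)
    return 80 * clearness * diurnal

def unmet_demand_fraction(demand, supply):
    """Share of annual demand that cannot be covered: one possible resilience proxy."""
    deficit = np.maximum(demand - supply, 0)
    return deficit.sum() / demand.sum()

rng = np.random.default_rng(0)
for scenario in ("typical", "extreme_warm", "extreme_cold"):
    t = synthetic_weather(scenario, rng)
    d = demand_mw(t)
    s = pv_generation_mw(t, rng) + 60  # 60 MW of firm capacity, placeholder
    print(f"{scenario:>13}: unmet demand fraction = {unmet_demand_fraction(d, s):.3f}")
```

In an operational setting, as the paragraph above suggests, the scenario loop would be replaced by a feed of weather forecasts or synthesized extreme weather data sets, and the toy models by a calibrated model (or digital twin) of the actual urban energy system.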

Credit: 
Science China Press

Breakthrough machine learning approach quickly produces higher-resolution climate data

Researchers at the U.S. Department of Energy's (DOE's) National Renewable Energy Laboratory (NREL) have developed a novel machine learning approach to quickly enhance the resolution of wind velocity data by 50 times and solar irradiance data by 25 times--an enhancement that has never been achieved before with climate data.

Instead of the traditional numerical approach, the researchers used adversarial training, in which the model produces physically realistic details by observing entire fields at a time, providing high-resolution climate data at a much faster rate. This approach will enable scientists to complete renewable energy studies in future climate scenarios faster and with more accuracy.

"To be able to enhance the spatial and temporal resolution of climate forecasts hugely impacts not only energy planning, but agriculture, transportation, and so much more," said Ryan King, a senior computational scientist at NREL who specializes in physics-informed deep learning.

King and NREL colleagues Karen Stengel, Andrew Glaws, and Dylan Hettinger authored a new article detailing their approach, titled "Adversarial super-resolution of climatological wind and solar data," which appears in the journal Proceedings of the National Academy of Sciences of the United States of America.

Accurate, high-resolution climate forecasts are important for predicting variations in wind, clouds, rain, and sea currents that fuel renewable energies. Short-term forecasts drive operational decision-making; medium-term weather forecasts guide scheduling and resource allocations; and long-term climate forecasts inform infrastructure planning and policymaking.

However, it is very difficult to preserve temporal and spatial quality in climate forecasts, according to King. The lack of high-resolution data for different scenarios has been a major challenge in energy resilience planning. Various machine learning techniques have emerged to enhance the coarse data through super resolution--the classic imaging process of sharpening a fuzzy image by adding pixels. But until now, no one had used adversarial training to super-resolve climate data.

"Adversarial training is the key to this breakthrough," said Glaws, an NREL postdoc who specializes in machine learning.

Adversarial training is a way of improving the performance of neural networks by having them compete with one another to generate new, more realistic data. The NREL researchers trained two types of neural networks in the model--one to recognize physical characteristics of high-resolution solar irradiance and wind velocity data and another to insert those characteristics into the coarse data. Over time, the networks produce more realistic data and improve at distinguishing between real and fake inputs. The NREL researchers were able to add 2,500 pixels for every original pixel.
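
The article does not include code, but the generator-versus-discriminator setup it describes can be sketched in a few dozen lines. The PyTorch example below uses random tensors in place of the NREL wind and solar fields, a 5x upscaling factor and toy network sizes; none of this reproduces the published architecture, it only illustrates adversarial super-resolution in principle.

```python
# Minimal adversarial super-resolution sketch (PyTorch). Placeholder random
# "coarse" fields stand in for real wind/solar data; the architecture and the
# 5x upscaling factor are illustrative, not NREL's published model.
import torch
import torch.nn as nn

SCALE = 5  # toy upscaling factor (the paper reports up to 50x for wind)

class Generator(nn.Module):
    """Maps a coarse field to a (SCALE x) higher-resolution field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=SCALE, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a high-resolution field looks realistic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):  # toy training loop
    hi = torch.randn(8, 1, 40, 40)            # placeholder "true" fine-scale fields
    lo = nn.functional.avg_pool2d(hi, SCALE)  # coarsened 8x8 inputs

    # Discriminator step: real fields versus generator output
    fake = G(lo).detach()
    d_loss = bce(D(hi), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator while staying close to the truth
    sr = G(lo)
    g_loss = bce(D(sr), torch.ones(8, 1)) + nn.functional.mse_loss(sr, hi)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two optimization steps mirror the description above: one network learns to recognize realistic fine-scale structure, the other learns to insert it into the coarse fields, and each improves by competing with the other.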

"By using adversarial training--as opposed to the traditional numerical approach to climate forecasts, which can involve solving many physics equations--it saves computing time, data storage costs, and makes high-resolution climate data more accessible," said Stengel, an NREL graduate intern who specializes in machine learning.

This approach can be applied to a wide range of climate scenarios from regional to global scales, changing the paradigm for climate model forecasting.

NREL is the U.S. Department of Energy's primary national laboratory for renewable energy and energy efficiency research and development. NREL is operated for the Energy Department by the Alliance for Sustainable Energy, LLC.

Credit: 
DOE/National Renewable Energy Laboratory

Oncotarget: Clonality and antigen-specific responses shape prognostic effects

image: Prognostic effects of NY-ESO-1 expression and serology with TCR repertoire features. (A) Kaplan-Meier plot evaluating PFS based on NY-ESO-1 expression and NY-ESO-1 serology, unadjusted for TCR features; p = 0.013. (B) Expected restricted mean PFS as a function of TIL/PBMC overlap. (C) Sorted bar chart of overlap across patients colored by expression/serology.

Image: 
Correspondence to - Kunle Odunsi - kunle.odunsi@roswellpark.org

Oncotarget Volume 11, Issue 27 published "Clonality and antigen-specific responses shape the prognostic effects of tumor-infiltrating T cells in ovarian cancer" by Tsuji et al., which reported that, to delineate the complexity of anti-tumor T-cell responses, the authors utilized a computational method for de novo assembly of sequences from CDR3 regions of 369 high-grade serous ovarian cancers from TCGA, and then applied deep TCR sequencing to paired tumor and peripheral blood specimens from an independent cohort of 99 ovarian cancer patients.

In the validation cohort, the authors discovered that patients with low T-cell infiltration but low-diversity, focused repertoires had clinical outcomes almost indistinguishable from those of patients with highly infiltrated tumors.

They also found that the degree of divergence of the peripheral repertoire from the TIL repertoire, and the presence of detectable spontaneous anti-tumor immune responses are important determinants of clinical outcome.

The prognostic significance of TILs in ovarian cancer is thus dictated by T-cell clonality, the degree of overlap with the peripheral repertoire, and the presence of a detectable spontaneous anti-tumor immune response in the patients.

These immunological phenotypes defined by the TCR repertoire may provide useful insights for identifying “TIL-low” ovarian cancer patients that may respond to immunotherapy.

Dr. Kunle Odunsi from The Center for Immunotherapy as well as The Department of Gynecologic Oncology at Roswell Park Comprehensive Cancer Center said, "The presence of tumor-infiltrating lymphocytes (TILs) is a key determinant of clinical outcome in a wide range of solid tumors including ovarian cancer."

High-throughput next-generation sequencing has made it possible to read the entire CDR3 to uniquely identify specific T cell clones and to estimate the absolute frequency of T cell clones in tumor tissue from the copy number of TCR sequences.

The importance of TCR repertoire in shaping anti-tumor immunity in ovarian cancer was recently demonstrated using unbiased functional analysis of TCR repertoires from TILs derived from two patients.

Tumor reactivity was revealed in 0–5% of tested TCRs indicating that the vast majority of T cells infiltrating ovarian tumors were irrelevant for tumor recognition.

To determine how the TCR repertoire of TILs shapes the prognosis of ovarian cancer patients, the authors utilized a new computational method for de novo assembly of sequences from CDR3 regions using paired-end RNA-seq data from The Cancer Genome Atlas study of high-grade serous ovarian cancers.

The authors examined the TCR repertoire in the context of the degree of tumor infiltration by T cells, spontaneous immune responses against bona fide TAAs, and clinical outcome.

The Odunsi research team concluded in their Oncotarget paper that, despite its limitations, the study highlights the extraordinary diversity of the T-cell repertoire in ovarian cancer patients and demonstrates that pre-existing immunity against cancer antigens is a critical prerequisite for correctly understanding the prognostic significance of the T-cell repertoire in the tumor and peripheral blood of patients with ovarian cancer.

They have distilled TCR repertoire information into candidate biomarkers that may critically influence the prognosis of ovarian cancer patients.

Conceptually, ovarian cancers may not fit into the classic paradigm of 'cold' and 'hot' tumors based solely on the number of T cells they contain; they can also be classified by TCR repertoire information, which serves as a surrogate for tumor recognition.

The latest technologies put these prognostic features in clinical reach not only for predicting prognosis but potentially for determining the best immunotherapeutic strategy for each patient.

Credit: 
Impact Journals LLC

The complex relationship between deforestation and diet diversity in the Amazon

image: Commercial agriculture, such as palm oil and cocoa plantations, is expanding along the Amazonian forest frontier.

Image: 
© CIAT

Ten years ago, non-indigenous households from three communities in the Ucayali region of Peru regularly ate fish, wild fruits and other products collected from the Amazon forest. Combined with whatever they grew and harvested on their lands, this contributed to a relatively diverse diet. Today, the same households have changed their production strategy and how they put food on the table. Agricultural production complemented by hunter-gatherer activities, which once satisfied both household consumption and income generation, has been largely replaced by commercial crops such as oil palm and cocoa. This shift in agricultural production objectives has affected the sources of food for local communities and appears to be associated with less diverse diets, according to a new study co-authored by scientists at CIAT (now the Alliance of Bioversity International and CIAT).

"Our objective was to test the hypothesis that the economic transformations linked to the expansion of cash crops in mestizo communities, especially oil palm, were associated with deforestation and reduced agricultural biodiversity and that this was likely to be associated with changes in food access," says Genowefa Blundo Canto, co-author and Post Doc researcher at CIAT at the time of the study.

The study represents one of the few attempts to trace changes in food access, livelihood strategies, deforestation and agricultural biodiversity over time. The scientists collected data on livelihood strategies and nutritional health among 53 families in the Ucayali region of Peru and compared the results with data gathered from the same families in the early 2000s. Despite the small sample, caused by significant outmigration from these communities, the results were remarkable.

"We found that in the 15-year study period, farming households shifted from diets based on limited consumption of meat and dairy items and high consumption of plant-based foods from their own production, towards diets with high protein and fat content, with food items increasingly purchased in the market. In parallel, production systems became less diversified, more market-orientated and specialised toward commercial crops, oil palm and cacao in particular," says Blundo Canto.

The scientific team concluded that the expansion of commercial agriculture, such as palm oil and cocoa plantations at the Amazonian forest frontier, appears to be associated with simplified food production systems, reduced agricultural diversity and less access to food, measured in terms of the household dietary diversity score.

"This study is crucial to understand how deforestation not only affects the climate, but also has profound socio-economic and nutritional impacts on the communities living on the forest frontier. Even though Peru and other Latin American countries have progressed in economic terms, there are high malnutrition percentages especially among children. Something tells us that even though farmers might now make more money from, for example, oil palm farming, this might not improve other life quality aspects such as nutrition for children," explains another co-author Marcela Quintero, Multifunctional Landscapes Research Area Director at the Alliance.

The marked rise in obesity in rural areas of Peru reflects a worldwide trend. While the study only looked at the diversity of household diets and not the nutritional value, the increased consumption of foods high in saturated fats and ultra-processed foods demands the attention of local policy makers.

"These results, which are consistent with emerging evidence for a dietary transition in the Amazon, have major implications for land use and food policies in the region as well as for health policies, since it has recently been highlighted that unhealthy diets are the main cause of disease worldwide. We therefore recommend that future development actions at the Amazonian forest-agriculture interface should address deforestation and promote agrobiodiversity for more diverse diets and local markets over the expansion of cash crops, in order to ensure long-term food and nutritional security among farmers and the rural communities that they supply," concludes Blundo Canto.

The research team wants to complement this work with a specific study on how the nutritional quality of the diets might have changed, to further argue for focused research and policy development that will benefit communities living on the borders of forests around the world. Likewise, the team is seeking opportunities to replicate the study with indigenous communities. Meanwhile, the Alliance is working with oil palm producers and the regional government of Ucayali to re-design their business models in a way that is deforestation-free.

Credit: 
The Alliance of Bioversity International and the International Center for Tropical Agriculture

Study reveals science behind traditional mezcal-making technique

video: Artisans who make mezcal, a traditional Mexican spirit, have known for centuries that long-lasting bubbles on the surface of the beverage are an indicator that it has been distilled to the right alcohol level. A new study reveals the science behind why that is. In part, it has to do with what's known as the Marangoni effect. Certain chemicals in mezcal cause liquid to convect upwards into the membranes of bubbles, which helps sustain them for a longer period.

Image: 
Zenit Lab / Brown University

PROVIDENCE, R.I. [Brown University] -- Artisanal makers of mezcal have a tried and true way to tell when the drink has been distilled to the right alcohol level. They squirt some into a small container and look for little bubbles, known as pearls. If the alcohol content is too high or too low, the bubbles burst quickly. But if they linger for 30 seconds or so, the alcohol level is perfect and the mezcal is ready to drink.

Now, a new study by a team of fluid dynamics researchers reveals the physics behind the trick. Using laboratory experiments and computer models, the researchers show that a phenomenon known as the Marangoni effect helps mezcal bubbles linger a little longer when the alcohol content is around the sweet spot of 50%. In addition to showing the scientific underpinnings of something artisans have known for centuries, the researchers say the findings reveal new fundamental details about the lifetimes of bubbles on liquid surfaces.

The study, a collaboration between researchers at Brown University, Universidad Nacional Autónoma de México, Université de Toulouse and elsewhere, was published on July 3 in the journal Scientific Reports.

When Roberto Zenit, a professor in Brown's School of Engineering and the study's senior author, first heard about the bubble trick, he said he was instantly intrigued.

"One of my main research interests is bubbles and how they behave," Zenit said. "So when one of my students told me that bubbles were important in making mezcal, which is a drink that I really enjoy with my friends, it was impossible for me not to investigate how it works."

The researchers started by doing experiments to see how changing the alcohol level of mezcal changed bubble lifetimes. They watered down some samples of mezcal and added pure ethyl alcohol to others. They then reproduced the squirting trick in the lab while carefully timing the bubbles. They found that, sure enough, alcohol level dramatically affected bubble lifetimes. In unaltered samples, bubbles lasted from 10 to 30 seconds. In both the fortified and watered-down samples, the bubbles burst instantly.

Having shown that bubbles really can be a gauge of alcohol content, the next step was to figure out why.

To do that, Zenit and his students started by simplifying the fluid -- performing experiments with mixtures of just pure water and alcohol. Those experiments showed that, as with mezcal, bubbles tended to last longer when the mixture was near 50% water and 50% alcohol. The researchers determined that the extra bubble life was due largely to viscosity. Bubbles tend to last longer in more viscous fluids, and the viscosity of alcohol-water mixtures peaks right around 50%.
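
As a back-of-the-envelope illustration of that viscosity argument, the toy calculation below scales a hypothetical bubble lifetime linearly with rough handbook viscosities for water-ethanol mixtures near 20 °C. The numbers and the linear scaling are simplifying assumptions for illustration, not values or a model from the study.

```python
# Toy illustration of the viscosity argument above. Viscosities are rough
# handbook values for water-ethanol mixtures near 20 degC, and the assumption
# that film-drainage-limited bubble lifetime scales linearly with bulk
# viscosity is a deliberate simplification, not the study's model.
ABV = [0, 20, 40, 50, 60, 80, 100]                 # percent alcohol by volume
VISC_MPA_S = [1.0, 1.9, 2.8, 2.9, 2.8, 2.2, 1.2]   # approximate mixture viscosity

REF_LIFETIME_S = 2.0   # hypothetical lifetime of a bubble in pure water

for abv, mu in zip(ABV, VISC_MPA_S):
    lifetime = REF_LIFETIME_S * mu / VISC_MPA_S[0]   # lifetime ~ viscosity (toy)
    print(f"{abv:3d}% ABV: viscosity ~{mu:.1f} mPa*s, toy lifetime ~{lifetime:.1f} s")
```

On this crude model the lifetime simply tracks the viscosity maximum near 50% alcohol, which is the effect the water-alcohol experiments isolated.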

However, the bubbles in the 50-50 water and alcohol mixtures still didn't last as long as those in mezcal. Zenit and his students realized there must be something about mezcal that amplifies the viscosity effect. To figure out what it was, they used high-speed video cameras to carefully watch the bubbles through their lifetimes.

The video revealed something surprising, Zenit said. It showed an upward convection of liquid from the surface of mezcal into the bubble membranes.

"Normally, gravity is causing the liquid in a bubble film to drain away, which eventually causes the bubble to burst," Zenit said. "But in the mezcal bubbles, there's this upward convection that's replenishing the fluid and extending the life of the bubble."

With the help of some computer modeling, the researchers determined that a phenomenon known as Marangoni convection was responsible for this upward motion. The Marangoni effect occurs when fluids flow between areas of differing surface tension, the attractive force between molecules that forms a film at the surface of a fluid. Mezcal contains a variety of chemicals that act as surfactants -- molecules that change the surface tension. As a result, bubbles that form on the surface of mezcal tend to have higher surface tension than the surfactant-filled fluid below. That difference in surface tension draws fluid up into the bubble, increasing its lifespan.

By amplifying the existing tendency for longer-lasting bubbles in 50% alcohol mixtures, the surfactant-driven Marangoni effect makes bubbles a reliable gauge of alcohol content in mezcal.

Zenit, who hails from Mexico, said it was gratifying to shed new light on this artisanal technique.

"It's fun to work on something that has both scientific value and cultural value that's part of my background," he said. "These artisans are experts in what they do. It's great to be able to corroborate what they already know and to demonstrate that it has scientific value beyond just mezcal making."

The insights generated from the work could be useful in a variety of industrial processes that involve bubbles, the researchers said. It could also be useful in environmental research.

"For example," the researchers write, "the lifetime of surface bubbles could be used as a diagnostic tool to infer the presence of surfactants in a liquid: If the lifetime is larger than that expected of a pure/clean liquid, then the liquid is most likely contaminated."

Credit: 
Brown University

Iodine exposure in the NICU may lead to decrease in thyroid function, NIH study suggests

Exposure to iodine used for medical procedures in a neonatal intensive care unit (NICU) may increase an infant's risk for congenital hypothyroidism (loss of thyroid function), suggests a study by researchers at the National Institutes of Health and other institutions. The authors found that infants diagnosed with congenital hypothyroidism following a NICU stay had higher blood iodine levels on average than infants who had a NICU stay but had normal thyroid function. Their study appears in the Journal of Nutrition.

"Limiting iodine exposure among this group of infants whenever possible may help lower the risk of losing thyroid function," said the study's first author, James L. Mills, M.D., of the Epidemiology Branch at NIH's Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD).

Congenital hypothyroidism is a partial or complete loss of thyroid function. The thyroid, located in the front of the neck, makes iodine-containing hormones that regulate growth, brain development and the rate of chemical reactions in the body. Treatment consists of thyroid hormone therapy and must begin within four weeks after birth, or permanent intellectual disability may result.

In the United States, all infants are routinely screened for the condition by collecting a small sample of blood from an infant's heel and analyzing it for thyroid stimulating hormone. Infants with a high level of thyroid stimulating hormone are referred for further testing.

To conduct the study, the researchers analyzed blood spots for their iodine content. They compared blood iodine levels from 907 children diagnosed with congenital hypothyroidism to those of 909 similar children who did not have the condition. This included 183 infants cared for in the NICU--114 of whom had congenital hypothyroidism and 69 who did not.

Overall, the researchers found no significant difference between blood iodine concentrations in those who had congenital hypothyroidism and those in the control group. Because very high or very low iodine levels increase the risk for congenital hypothyroidism, they also looked at those infants having the highest and lowest iodine levels.

Children with congenital hypothyroidism were more likely to have been admitted to a NICU than those without congenital hypothyroidism. When the researchers considered only those infants with a NICU stay, they found that the group with congenital hypothyroidism had significantly higher iodine levels than those without the condition who also had a NICU stay. Similarly, those with congenital hypothyroidism and a NICU stay tended to have higher blood iodine than children with the condition who did not have a NICU stay.

The researchers were unable to obtain information on the medical procedures the infants may have undergone during their time in the NICU. Iodine solutions are commonly used as disinfectants to prepare the skin for surgical or other procedures. Preterm infants absorb iodine more readily through their skin than older infants. Iodine also is given internally for imaging procedures used in infants.

The researchers said that the higher iodine levels seen among infants with congenital hypothyroidism and a NICU stay may have resulted from exposure to iodine during a medical procedure. Because of this possibility, they cautioned NICU staff to use disinfectants that do not contain iodine whenever possible and to avoid exposing infants to iodine unless absolutely necessary.

Credit: 
NIH/Eunice Kennedy Shriver National Institute of Child Health and Human Development

Princeton chemists resolve origin of perovskite instability

image: (Upper left) Room temperature Cs electron density from single crystal X-ray diffraction measurements showing significant elongation, a signature of rattling. (Bottom) Cs-I distances for the dominant Cs site CsA and secondary site CsB with (upper right) histogram of distances.

Image: 
Figure by Daniel Straus

Researchers in the Cava Group at the Princeton University Department of Chemistry have demystified the reasons for instability in an inorganic perovskite that has attracted wide attention for its potential in creating highly efficient solar cells.

Using single crystal X-ray diffraction performed at Princeton University and X-ray pair distribution function measurements performed at the Brookhaven National Laboratory, Princeton Department of Chemistry researchers detected that the source of thermodynamic instability in the halide perovskite cesium lead iodide (CsPbI3) is the inorganic cesium atom and its "rattling" behavior within the crystal structure.

X-ray diffraction yields a clear experimental signature of this movement.

The research, "Understanding the Instability of the Halide Perovskite CsPbI3 through Temperature-Dependent Structural Analysis," will be published next week in the journal Advanced Materials.

Daniel Straus, a postdoctoral research associate in the Cava Group and lead author on the paper, explained that while cesium occupies a single site within the structure at temperatures below 150 K, it "splits" into two sites above 175 K. Along with other structural parameters, this suggests evidence of the rattling behavior of cesium within its iodine coordination polyhedron.

In addition, the low number of cesium-iodine contacts within the structure and the high degree of local octahedral distortion also contribute to the instability.

In the research, the single-crystal measurements characterized the average structure of the material. At Brookhaven, the X-ray pair distribution function allowed researchers to determine the behavior of the structure on the length scale of the unit cell. (A unit cell is the smallest repeating unit in a crystal.) It is on this local level that the high degree of octahedral distortion became obvious, said Straus.

The room-temperature metastability of CsPbI3 has long been known, but it had not previously been explained.

"Finding an explanation for a problem that so many people in the research community are interested in is great, and our collaboration with Brookhaven has been beyond fantastic," said Robert Cava, the Russell Wellman Moore Professor of Chemistry, an expert in synthesis and structure-property characterization.

"Remarkable" efficiencies

Currently, the dominant halide perovskite in solar energy conversion applications is based on methylammonium lead iodide, an organic-inorganic hybrid material that has been incorporated into solar cells with certified efficiencies of 25.2%; this rivals the efficiency of commercial silicon solar cells. While this "remarkable" efficiency drives interest, methylammonium lead iodide suffers from instability problems thought to originate from the volatile nature of the organic cation. To correct this problem, researchers have attempted to replace the organic cation with inorganic cesium, which is significantly less volatile.

However, unlike methylammonium lead iodide, the perovskite phase of cesium lead iodide is metastable at room temperature.

"If you want to make a solar cell with unmodified cesium lead iodide, it's going to be very hard to work around this and stabilize this material," said Straus. "You have to find a way to stabilize it that works around the fact that this cesium atom is a little bit too small. There are a couple ways people have tried to chemically modify CsPbI3 and they work okay. But there's no point in just trying to make solar cells out of this bulk material without doing fancy things to it."

Detailed structural information in the paper suggests methods to stabilize the perovskite phase of CsPbI3 and thus improve the stability of halide perovskite solar cells. The paper also reveals the limitations of tolerance factor models in predicting stability for halide perovskites. Most of these models currently predict that CsPbI3 should be stable.

At Brookhaven Lab

A technique known as a pair distribution function measurement, which describes the distribution of distances between atoms, helped the Princeton researchers to further understand the instability. Using Brookhaven's Pair Distribution Function (PDF) beamline at the National Synchrotron Light Source II, lead beamline scientist Milinda Abeykoon worked with samples of thermodynamically unstable CsPbI3, which he received from the Cava Lab in several sealed glass capillaries inside a container filled with dry ice.
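
As a minimal illustration of what a pair distribution function encodes, the sketch below histograms all interatomic distances in a random toy "structure". Real PDF analysis also weights pairs by scattering power and normalizes against the average density, all of which is omitted here, and the coordinates are placeholders rather than the CsPbI3 structure.

```python
# Minimal sketch of the quantity a pair distribution function captures:
# a histogram of all interatomic distances. The random "atoms" below are
# placeholders, not CsPbI3; scattering weights and normalization used in
# real PDF analysis are omitted.
import numpy as np

rng = np.random.default_rng(1)
atoms = rng.uniform(0, 20.0, size=(200, 3))   # toy atomic positions in a 20 A box

# All pairwise distances (upper triangle, excluding self-pairs)
diff = atoms[:, None, :] - atoms[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
pairs = dist[np.triu_indices(len(atoms), k=1)]

hist, edges = np.histogram(pairs, bins=60, range=(0, 12.0))
for lo, hi, n in zip(edges[:-1], edges[1:], hist):
    if n:
        print(f"{lo:4.1f}-{hi:4.1f} A: {n} pairs")
```

Peaks in such a histogram correspond to characteristic bond and neighbor distances, which is how the measurement exposes local distortions, such as the Cs-I distances shown in the figure, that an average crystal structure can hide.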

Measuring these samples was challenging, said Abeykoon, because they would decompose quickly once removed from the dry ice.

"Thanks to the extremely bright X-ray beam and large area detectors available at the PDF beamline, I was able to measure the samples at multiple temperatures below 300 K before they degraded," said Abeykoon. "When the X-ray beam bounces off the sample, it produces a pattern characteristic of the atomic arrangement of the material. This gives us the possibility to see not only what is happening at the atomic scale, but also how the material behaves in general in one measurement."

Cava lauded the 45-year relationship he has had with Brookhaven, which began with experiments he completed there for his Ph.D. thesis in the 1970s. "We have had several great collaborations with Brookhaven," he said.

Credit: 
Princeton University

'Growing' active sites on quantum dots for robust H2 photogeneration

image: Schematic diagram of site- and spatial- selective integration of metal ions into QDs for robust H2 photogeneration

Image: 
Prof. WU's Group

Chinese researchers have recently achieved site- and spatially selective integration of earth-abundant metal ions (e.g., Fe2+, Co2+, Ni2+) into semiconductor quantum dots (QDs) for efficient and robust photocatalytic H2 evolution from water.

This research, published online in Matter, was conducted by a research team led by Prof. WU Lizhu and Dr. LI Xubing from the Technical Institute of Physics and Chemistry (TIPC) of the Chinese Academy of Sciences.

Photosynthesis in nature provides a paradigm for the large-scale conversion of sunlight into chemical fuels. For example, hydrogenases in certain bacteria and algae catalyze the reversible reduction of protons to H2 with remarkable activity.

Inspired by natural photosynthesis, solar-driven H2 evolution from water is regarded as an ideal pathway for storing solar energy in chemical bonds. In the pursuit of highly efficient chemical transformations, QDs combined with non-noble metal ions have emerged as a cutting-edge technology for H2 photogeneration.

This research realized the cooperative and well-controlled loading of non-noble metal ions into QDs, thereby integrating the light absorber, protecting layer and active site in a single ultra-small nanocrystal, which shows great potential for fabricating artificial photosynthetic devices for scaled-up solar-to-fuel conversion.

Time-resolved spectroscopic techniques and density functional theory calculations reveal the kinetics of interfacial charge transfer and the mechanism of H2 evolution at active species, which provides new guidance on the design of multifunctional photocatalysts for practical applications.

This work was supported by the Ministry of Science and Technology of China, the National Science Foundation of China, and the Strategic Priority Research Program of the Chinese Academy of Science.

Credit: 
Chinese Academy of Sciences Headquarters

Atmospheric turbulence affects new particle formation: Common finding on three continents

image: In the nucleation stage, the condensation of precursor gas molecules forms a cluster, enhanced by turbulence-induced supersaturation. In the growth stage, the turbulence-dilution effect reduces the concentration of preexisting particles, leading to a lower coagulation sink and prolonging the fast growth of newly formed particles. Presented on the right-hand side are molecular dynamics model results illustrating the evolution of the largest particles in simulations under different turbulence conditions.

Image: 
©Science China Press

New particle formation (NPF) is a key process for haze formation, leading to the deterioration of air quality. Chemical and photochemical processes have been intensively studied over the past decades to understand their roles in NPF, but the physical process has drawn much less attention.

Observational Evidence

A ubiquitous relationship is found between the intensity of atmospheric stability in the surface layer and NPF features, based on a large number of observations made at three sites in three countries (China, Finland, and the USA). Numerous factors impacting NPF are identified and quantified in the observational analyses. Besides facilitating NPF, increasing turbulence intensity depresses the condensation sink, preventing small particles from being scavenged on the surfaces of preexisting particles. The growth rate is also faster under strengthening turbulence conditions (> 3.2 nm h-1) than under weakening turbulence conditions.

Proposed Mechanism

In general, enhanced turbulence generates a higher local supersaturation that facilitates the clustering of condensable vapor to form new particles, favoring the nucleation process. Enhanced turbulence also dilutes the pre-existing particle concentration, causing the condensation sink to decrease. This favors the growth of newly formed particles, which also prolongs the duration of NPF events. These findings suggest a physical mechanism that may act on top of the traditional mechanisms of NPF that are solely based on chemical and photochemical processes. This may help elucidate the NPF process from a physical perspective, leading to improvements in predicting the occurrence and duration of haze events.
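
To make the "condensation sink" invoked in this mechanism concrete, the sketch below evaluates its continuum-limit form, CS ~ 2*pi*D*sum(d_i*N_i), for a hypothetical particle size distribution and shows how diluting the preexisting particles lowers it. The diffusivity, the size bins and the omission of the transition-regime correction used in real aerosol studies are all simplifying assumptions, not values from this work.

```python
# Illustrative estimate of a condensation sink (CS): the rate at which
# condensable vapour is lost to preexisting particles. Continuum-limit form
# only; the size distribution and diffusivity are hypothetical placeholders.
import math

D_VAPOUR = 1.0e-5          # m^2 s^-1, rough diffusivity of a condensable vapour

# (diameter in m, number concentration in m^-3) -- placeholder distribution
size_bins = [(50e-9, 2.0e9), (100e-9, 1.0e9), (200e-9, 3.0e8)]

def condensation_sink(bins, diffusivity=D_VAPOUR):
    return 2 * math.pi * diffusivity * sum(d * n for d, n in bins)

cs_baseline = condensation_sink(size_bins)
# Turbulent dilution thins out preexisting particles, lowering the sink:
cs_diluted = condensation_sink([(d, 0.5 * n) for d, n in size_bins])
print(f"CS baseline: {cs_baseline:.3e} s^-1, after 2x dilution: {cs_diluted:.3e} s^-1")
```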

Model Simulations

The hypothesis of the new physical mechanism is tested using a molecular dynamics model. Model results suggest that molecule clusters are compressed and overcome the kinetic energy barrier to form a particle. Due to turbulent diffusion, strong coherent structures of dilution effectively segregate preexisting particles, which also exerts an influence on the particle size distribution, thus favoring the growth of nucleated particles.

Credit: 
Science China Press

The study of lysosomal function during cell division and chromosomal instability

image: Dra. Caroline Mauvezin group.

Image: 
Bellvitge Biomedical Research Institute (IDIBELL)

A team from the Bellvitge Biomedical Research Institute (IDIBELL) and the University of Barcelona (UB), in collaboration with a researcher from the Mayo Clinic and the University of Minnesota, have described that lysosomes and autophagy processes are active during mitosis and are necessary for correct cell division. Lysosomes and autophagy eliminate and recycle damaged cellular components; thus, lysosomal activity sustains correct cell function, and its dysregulation is associated with several diseases, including cancer, neurodegeneration and disorders associated with aging.

In the study published in the Autophagy journal, the team led by Dr. Caroline Mauvezin demonstrated that lysosomes are active and selectively degrade specific proteins during mitosis, and that alterations in chromosome separation occur when lysosomal function is compromised.

Understanding the role of lysosomes and autophagy in the separation of chromosomes during mitosis led to another discovery. When this separation is defective, the daughter cells present a nucleus with a toroidal morphology, with the appearance of a hole. This particular morphology represents a potential new biomarker for the identification of cells with chromosomal instability.

"The study connects two major fields of research in cellular biology: autophagy and cell division. In the context of cancer, drugs that specifically affect each of these processes are studied as therapeutic strategies. Our results provide a new perspective that connects these two key processes involved in tumor proliferation" says Dr. Caroline Mauvezin.

Toroidal nucleus, a new biomarker of chromosomal instability

This study describes, for the first time, that cells that have suffered errors during chromosome separation, whether due to alterations of lysosomal function or to other stresses, present a nucleus with a toroidal morphology after they divide.

Dr. Eugenia Almacellas, first author of the study and a member of IDIBELL and the Faculty of Pharmacy and Food Sciences of the UB, indicates that "Mitosis is a rapid process that takes place in a short time, and because of that, detection of errors in such a small proportion of cells is very challenging." She adds: "That is why the toroidal nucleus is such an interesting biomarker, since it allows the detection of cells that have suffered mitotic errors in a much wider proportion of cells."

Until now, the only biomarker to determine chromosomal instability was the micronucleus. The work led by Dr. Mauvezin presents the toroidal nucleus as a new complementary tool to detect chromosomal instability.

Lysosomes and autophagy in mitosis

The participation of lysosomes and autophagy in mitosis is still a controversial topic: some studies propose that they are inactive during cell division to protect the genetic material from degradation, while others indicate some activity of these organelles. The present investigation examined different tumor cell lines and found that lysosomes and autophagy are active during mitosis and are necessary for the process.

The research team has identified more than 100 new proteins specifically degraded by lysosomes during mitosis. Among them, they found proteins directly involved in chromosome segregation, supporting the essential role of these organelles for the correct cell division. This work serves as a precedent for the study of the mechanisms responsible for the degradation of essential mitotic proteins to prevent chromosomal instability.

Credit: 
IDIBELL-Bellvitge Biomedical Research Institute

Tree rings show unprecedented rise in extreme weather in South America

image: Araucaria araucana trees in northern Patagonia, Argentina, some of which were used in the study. Some trees can live 1,000 years.

Image: 
Ricardo Villalba, Argentine Institute of Snow, Glacier and Environmental Sciences, at the National Research Council for Science and Technology

Scientists have filled a gaping hole in the world's climate records by reconstructing 600 years of soil-moisture swings across southern and central South America. Along with documenting the mechanisms behind natural changes, the new South American Drought Atlas reveals that unprecedented widespread, intense droughts and unusually wet periods have been on the rise since the mid-20th century. It suggests that the increased volatility could be due in part to global warming, along with earlier pollution of the atmosphere by ozone-depleting chemicals. The atlas was published this week in the journal Proceedings of the National Academy of Sciences.

Recent droughts have battered agriculture in wide areas of the continent, trends the study calls "alarming." Lead author Mariano Morales of the Argentine Institute of Snow, Glacier and Environmental Sciences at the National Research Council for Science and Technology, said, "Increasingly extreme hydroclimate events are consistent with the effects of human activities, but the atlas alone does not provide evidence of how much of the observed changes are due to natural climate variability versus human-induced warming." The new long-term record "highlights the acute vulnerability of South America to extreme climate events," he said.

Coauthor Edward Cook, head of the Tree Ring Lab at Columbia University's Lamont-Doherty Earth Observatory, said, "We don't want to jump off the cliff and say this is all climate change. There is a lot of natural variability that could mimic human-induced climate change." However, he said, armed with the new 600-year record, scientists are better equipped to sort things out.

The South American Drought Atlas is the latest in a series of drought atlases assembled by Cook and colleagues, covering many centuries of year-by-year climate conditions in North America; Asia; Europe and the Mediterranean; and New Zealand and eastern Australia. Subsequent studies building on the atlases have yielded new insights into how droughts may have adversely affected past civilizations, and the increasingly apparent role of human-induced warming on modern climate. Most recently, followup analyses of North America have suggested that warming is driving what may be the worst-ever known drought in the U.S. West.

The new atlas covers Argentina, Chile, Uruguay, Paraguay, most of Bolivia, and southern Brazil and Peru. It is the result of years of field collections of thousands of tree-ring records, and subsequent analyses by South American researchers, along with colleagues in Europe, Canada, Russia and the United States. Ring widths generally reflect yearly changes in soil moisture, and the researchers showed that collected rings correlate well with droughts and floods recorded starting in the early Spanish colonial period, as well as with modern instrumental measurements. This gave them confidence to extend the soil-moisture reconstruction back before written records.
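
The calibration step described here amounts to correlating ring-width chronologies with instrumental moisture records over their common period. The sketch below shows the shape of that check using synthetic placeholder series; neither the series nor the correlation value comes from the atlas data.

```python
# Minimal sketch of the calibration check described above: correlating a
# ring-width chronology with an instrumental soil-moisture index over their
# overlap period. Both series are synthetic placeholders, not atlas data.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1901, 2001)
soil_moisture_index = rng.normal(0, 1, years.size)          # e.g. a drought index
ring_width_index = 0.7 * soil_moisture_index + rng.normal(0, 0.5, years.size)

r = np.corrcoef(ring_width_index, soil_moisture_index)[0, 1]
print(f"Calibration correlation over {years[0]}-{years[-1]}: r = {r:.2f}")
```

A strong correlation over the instrumental era is what justifies extending the soil-moisture reconstruction back through the centuries covered only by the trees.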

The authors say that periodic natural shifts in precipitation are driven by complex, interlocking patterns of atmospheric circulation on land and at sea. One key factor: low-level westerly winds that blow moisture onto the continent from the Pacific. These are controlled in part by periodic cyclic changes in sea-surface temperatures over both the Pacific and the Atlantic, which can bring both droughts and wet periods. The authors say greenhouse-gas-driven shifts in these patterns appear linked to a still continuing 10-year drought over central Chile and western Argentina that has caused severe water shortages, along with heavier than normal rains in eastern regions.

Precipitation is also controlled in part by the Southern Annular Mode, a belt of westerly winds that circles Antarctica. This belt periodically contracts southward or expands northward, and when it contracts, it weakens the westerly winds that bring rain to South America. In recent decades, it has been stuck in the south -- largely a result of ozone-depleting chemicals used in 20th-century refrigerants that destroyed atmospheric ozone over Antarctica, scientists believe. The chemicals were banned in the 1980s, but their effects have persisted.

The third major factor is the so-called Hadley cell, a global phenomenon that lofts warm, moist air from near the equator and sends it further north and south, dropping precipitation as it goes. The air settles near the surface at predictable latitudes, by which time the moisture has been largely wrung out; this creates the permanently dry zones of the subtropics, including those in South America. During recent decades, the Hadley cell has expanded towards the poles, likely in response to human-induced climate changes; this has shifted rainfall patterns and broadened the subtropical dry zones, especially in the Southern Hemisphere.

The atlas indicates that there has been a steady increase in the frequency of widespread droughts since 1930, with the highest return times, about 10 years, occurring since the 1960s. Severe water shortages affected central Chile and western Argentina in 1968-1969, 1976-1977 and 1996-1997. Currently, the drylands of central Chile and western Argentina are locked in one of the most severe decade-long droughts in the record. In some areas, up to two-thirds of cereal and vegetable crops have been lost in some years. This threatens "the potential collapse of food systems," says Morales.

At the same time, southeastern parts of the continent are seeing heavier than normal rains. Walter Baethgen, who leads Latin American agricultural research for Columbia University's International Research Institute for Climate and Society, says his own studies show that the La Plata basin of Uruguay has seen more frequent extremely wet summers since 1970, with corresponding increases in crop and livestock production. But the frequency of very dry summers has remained the same, which translates to bigger losses of expected yields when they do come along, he said.

"Everything is consistent with the idea that you'll be intensifying both wet and dry events with global warming," said Jason Smerdon, a climate scientist at Lamont-Doherty and a coauthor of the study.

Using newly developed tree-ring records from Peru, Brazil, Bolivia and Colombia, the group is now working to expand the atlas to cover the entire continent, and extend the climate reconstruction back 1,000 years or more, said Morales.

The authors wish to dedicate the study to the memory of the late María del Rosario Prieto, their coauthor, and active promoter of environmental history studies in South America.

Credit: 
Columbia Climate School

Machine learning helps grow artificial organs

image: Researchers from the Moscow Institute of Physics and Technology, Ivannikov Institute for System Programming, and the Harvard Medical School-affiliated Schepens Eye Research Institute have developed a neural network capable of recognizing retinal tissues during the process of their differentiation in a dish. Unlike humans, the algorithm achieves this without the need to modify cells, making the method suitable for growing retinal tissue for developing cell replacement therapies to treat blindness and conducting research into new drugs.

Image: 
MIPT Press Office

Researchers from the Moscow Institute of Physics and Technology, Ivannikov Institute for System Programming, and the Harvard Medical School-affiliated Schepens Eye Research Institute have developed a neural network capable of recognizing retinal tissues during the process of their differentiation in a dish. Unlike humans, the algorithm achieves this without the need to modify cells, making the method suitable for growing retinal tissue for developing cell replacement therapies to treat blindness and conducting research into new drugs. The study was published in Frontiers in Cellular Neuroscience.

This would allow the applications of the technology to be expanded to multiple fields, including drug discovery and the development of cell replacement therapies to treat blindness.

In multicellular organisms, the cells making up different organs and tissues are not the same. They have distinct functions and properties, acquired in the course of development. They start out the same, as so-called stem cells, which have the potential to become any kind of cell the mature organism incorporates. They then undergo differentiation by producing proteins specific to certain tissues and organs.

The most advanced technique for replicating tissue differentiation in vitro relies on 3D cell aggregates called organoids. The method has already proved effective for studying the development of the retina, the brain, the inner ear, the intestine, the pancreas, and many other tissue types. Since organoid-based differentiation closely mimics natural processes, the resulting tissue is very similar to the one in an actual biological organ.

Some of the stages in cell differentiation toward the retina are stochastic (random) in nature, leading to considerable variation in the number of cells with a particular function, even between artificial organs from the same batch. The discrepancy is even greater when different cell lines are involved. As a result, it is necessary to have a means of determining which cells have already differentiated at a given point in time. Otherwise, experiments will not be truly replicable, making clinical applications less reliable, too.

To spot differentiated cells, tissue engineers use fluorescent proteins. By inserting the gene responsible for the production of such a protein into the DNA of cells, researchers ensure that it is synthesized and produces a signal once a certain stage in cell development has been reached. While this technique is highly sensitive, specific, and convenient for quantitative assessments, it is not suitable for cells intended for transplantation or hereditary disease modeling.

To address that pitfall, the authors of the recent study in Frontiers in Cellular Neuroscience have proposed an alternative approach based on tissue structure. No reliable and objective criteria for predicting the quality of differentiated cells have been formulated so far. The researchers proposed that the best retinal tissues -- those most suitable for transplantation, drug screening, or disease modeling -- should be selected using neural networks and artificial intelligence.

"One of the main focuses of our lab is applying the methods of bioinformatics, machine learning, and AI to practical tasks in genetics and molecular biology. And this solution, too, is at the interface between sciences. In it, neural networks, which are among the things MIPT traditionally excels at, address a problem important for biomedicine: predicting stem cell differentiation into retina," said study co-author Pavel Volchkov, who heads the Genome Engineering Lab at MIPT.

"The human retina has a very limited capacity for regeneration," the geneticist went on. "This means that any progressive loss of neurons -- for example, in glaucoma -- inevitably leads to complete loss of vision. And there is nothing a physician can recommend, short of getting a head start on learning Braille. Our research takes biomedicine a step closer to creating a cellular therapy for retinal diseases that would not only halt the progression but reverse vision loss."

The team trained a neural network -- that is, a computer algorithm that mimics the way neurons work in the human brain -- to identify the tissues in a developing retina based on photographs made by a conventional light microscope. The researchers first had a number of experts identify the differentiated cells in 1,200 images via an accurate technique that involves the use of a fluorescent reporter. The neural network was trained on 750 images, with another 150 used for validation and 250 for testing predictions. At this last stage, the machine was able to spot differentiated cells with an 84% accuracy, compared with 67% achieved by humans.
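
The article reports the data split and accuracies but not the model itself. The sketch below shows one way such a classifier could be set up in PyTorch, with random tensors standing in for the 1,200 labelled micrographs and the same 750/150/250 split; the architecture and training settings are illustrative assumptions, not the published network.

```python
# Minimal sketch of the classification setup described above: a small CNN that
# labels bright-field organoid images as "retinal" vs "non-retinal". Random
# tensors stand in for the labelled micrographs; the 750/150/250 split mirrors
# the article, but the architecture is an illustrative placeholder.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

images = torch.randn(1200, 1, 64, 64)      # placeholder micrographs
labels = torch.randint(0, 2, (1200,))      # placeholder reporter-based labels

perm = torch.randperm(1200)
train_idx, val_idx, test_idx = perm[:750], perm[750:900], perm[900:1150]
# val_idx would be used for model selection in a real workflow

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))
    def forward(self, x):
        return self.head(self.features(x))

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

loader = DataLoader(TensorDataset(images[train_idx], labels[train_idx]),
                    batch_size=32, shuffle=True)
for epoch in range(3):  # toy training loop
    for xb, yb in loader:
        loss = loss_fn(model(xb), yb)
        opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    preds = model(images[test_idx]).argmax(dim=1)
    acc = (preds == labels[test_idx]).float().mean().item()
print(f"Test accuracy on placeholder data: {acc:.2f}")
```

On real micrographs, the fluorescent-reporter labels described above would supply the training targets, and held-out test accuracy is what the 84% versus 67% comparison refers to.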

"Our findings indicate that the current criteria used for early-stage retinal tissue selection may be subjective. They depend on the expert making the decision. However, we hypothesized that the tissue morphology, its structure, contains clues that enable predicting retinal differentiation, even at very early stages. And unlike a human, the computer program can extract that information!" commented Evgenii Kegeles of the MIPT Laboratory for Orphan Disease Therapy and Schepens Eye Research Institute, U.S.

"This approach does not require images of a very high quality, fluorescent reporters, or dyes, making it relatively easy to implement," the scientist added. "It takes us one step closer to developing cellular therapies for the retinal diseases such as glaucoma and macular degeneration, which today invariably lead to blindness. Besides that, the approach can be transferred not just to other cell lines, but also to other human artificial organs."

Credit: 
Moscow Institute of Physics and Technology

Contest between superconductivity and insulating states in Magic Angle Graphene

image: From left to right: Dr. Xiaobo Lu, Ipsita Das, Dr. Petr Stepanov, and Prof. Dmitri Efetov in the lab at ICFO.

Image: 
©ICFO

If you stack two layers of graphene one on top of the other and rotate them by an angle of 1.1º (no more and no less) from each other - the so-called magic angle - experiments have proven that the material can behave like an insulator, in which no electrical current can flow, and at the same time can also behave like a superconductor, in which electrical currents flow without resistance.

This major finding took place in 2018. Last year, in 2019, while ICFO researchers were improving the quality of the device used to replicate that breakthrough, they stumbled upon something even bigger and totally unexpected. They were able to observe a zoo of previously unobserved superconducting and correlated states, in addition to an entirely new set of magnetic and topological states, opening a completely new realm of richer physics.

So far, no theory has been able to explain superconductivity in magic-angle graphene at the microscopic level. However, the finding has triggered many studies that are trying to understand and unveil the physics behind all the phenomena that occur in this material. In particular, scientists have drawn analogies to unconventional high-temperature superconductors - the cuprates, which hold the record for the highest superconducting temperatures, only about a factor of two below room temperature. The microscopic mechanism of their superconducting phase is still not understood, 30 years after its discovery. However, similarly to Magic-Angle Twisted Bi-layer Graphene (MATBG), it is believed that an insulating phase is responsible for the superconducting phase found in proximity to it. Understanding the relationship between the superconducting and insulating phases is at the centre of researchers' interest and could lead to a big breakthrough in superconductivity research.

In this pursuit, in a study recently published in Nature, ICFO researchers Petr Stepanov, Ipsita Das, Xiaobo Lu and Frank H. L. Koppens, led by ICFO Prof. Dmitri Efetov, in collaboration with an interdisciplinary group of scientists from MIT, the National Institute for Materials Science in Japan, and Imperial College London, have delved deeper into the physical behaviour of this system. They report detailed measurements of screening-controlled Magic-Angle Twisted Bi-layer Graphene (MATBG) devices with several near-magic-angle twist angles, in search of a possible explanation for the states mentioned above.

In their experiment, they were able to simultaneously control the speed and the interaction energy of the electrons, and so turn the insulating phases into superconducting ones. Normally, at the magic angle, an insulating state forms because the electrons have very small velocities and, in addition, strongly repel each other through the Coulomb force. In this study, Stepanov and team used devices with twist angles deviating from the magic angle of 1.1° by ± 0.05° and placed them very close to metallic screening layers, separated by only a few nanometres of insulating hexagonal boron nitride. This made it possible to reduce the repulsive force between the electrons and to speed them up, allowing them to move freely and escape the insulating state.
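
As a rough, back-of-the-envelope illustration (not a formula from the paper), the effect of such a metallic gate can be pictured with the standard image-charge form of the screened Coulomb interaction: for two electrons a distance $r$ apart in the graphene sheet, with a gate at distance $d$,

$$V(r) \approx \frac{e^2}{4\pi\varepsilon\varepsilon_0}\left(\frac{1}{r} - \frac{1}{\sqrt{r^2 + (2d)^2}}\right),$$

so for separations much larger than $d$ the repulsion is strongly suppressed. Placing the screening layer only a few nanometres away therefore weakens precisely the interaction energy that stabilizes the insulating state.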

By doing so, Stepanov and colleagues observed something quite unexpected. As they changed the gate voltage (carrier density) in the different device configurations, the superconducting phase remained while the correlated insulating phase disappeared; in fact, the superconducting phase spanned ever larger ranges of carrier density. These observations suggest that, rather than sharing a common origin, the insulating and superconducting phases may actually compete with each other, which calls into question the simple analogy with the cuprates that was previously assumed. However, the scientists soon realized that the superconducting phase could be even more interesting, as it lies in close proximity to topological states, which are activated by the recurring electronic interactions when a magnetic field is applied.

Superconductivity with Magic-Angle Graphene

Room-temperature superconductivity is the key to many technological goals, such as efficient power transmission, frictionless trains, or even quantum computers, among others. When it was discovered more than 100 years ago, superconductivity seemed possible only in materials cooled down to temperatures close to absolute zero. Then, in the late 1980s, scientists discovered high-temperature superconductivity in ceramic materials called cuprates. Despite the difficulty of fabricating these materials and the extreme conditions (very strong magnetic fields) needed to study them, this advance turned the field into something of a holy grail for scientists. Since last year, the excitement around the field has only increased: the twisted carbon bilayers have captivated researchers because, in contrast to the cuprates, their structural simplicity makes them an excellent platform for exploring the complex physics of superconductivity.

Credit: 
ICFO-The Institute of Photonic Sciences

TARA Biosystems demonstrates in vitro cardiac biology model mimics human drug response

NEW YORK, July 7, 2020--TARA Biosystems today reported study results demonstrating the ability of TARA's in vitro human cardiac models to reproduce drug responses similar to those observed in humans. Appearing in the Journal of Pharmacological and Toxicological Methods, these findings further support the use of TARA's in vitro human cardiac models as a robust, translational platform for the evaluation of new medicines. The study was done in collaboration with Amgen, Inc.

The paper describes a validation study in which TARA's in vitro human cardiac tissues were treated, in a blinded fashion, with eight drugs from several classes of therapeutic agents known to increase the strength of cardiac muscle contraction (inotropes). The results demonstrated that the contractile response of the treated tissues was consistent with the response observed in humans. The diverse mechanisms by which the studied agents improve cardiac contractility showcase the broad applicability of TARA's in vitro human cardiac models for cardiac drug discovery and development.

One of the major challenges in cardiac drug development has been the lack of predictive, high-throughput models for compound testing. In vivo animal-based disease models often cannot recapitulate the human phenotype because of substantial species differences. Primary cells from patients are also in limited supply, particularly hard-to-access cells such as those from human cardiac tissue. And while induced pluripotent stem cells (iPSCs) are a promising strategy to address scalability, their utility has been limited by their functional immaturity and a corresponding lack of response to many drugs in clinical use. TARA's human cardiac tissues, engineered using TARA's Biowire™ II platform, faithfully exhibit key aspects of human cardiac physiology.

"This study demonstrates the translational utility of TARA's human-based platform in evaluating the safety and efficacy of new therapies early in discovery," said Michael P. Graziano, PhD, chief scientific officer of TARA Biosystems.

Building on the work published today, TARA continues to extend its capabilities across a range of genetic and drug-induced disease models and to increase the number of integrated endpoint measurements. More than 30 pharmaceutical and biotech companies are working with TARA to assess cardiac risk and investigate novel cardiac disease models for heart failure drug discovery.

Credit: 
CG Life

RNA key in helping stem cells know what to become

Look deep inside our cells, and you'll find that each has an identical genome - a complete set of genes that provides the instructions for our cells' form and function.

But if each blueprint is identical, why does an eye cell look and act differently from a skin cell or brain cell? How does a stem cell - the raw material from which our organ and tissue cells are made - know what to become?

In a study published July 8, University of Colorado Boulder researchers come one step closer to answering that fundamental question, concluding that the molecular messenger RNA (ribonucleic acid) plays an indispensable role in cell differentiation, serving as a bridge between our genes and the so-called "epigenetic" machinery that turns them on and off.

When that bridge is missing or flawed, the researchers report in the journal Nature Genetics, a stem cell on the path to becoming a heart cell never learns how to beat.

The paper comes at a time when pharmaceutical companies are taking unprecedented interest in RNA. And, while the research is young, it could ultimately inform development of new RNA-targeted therapies, from cancer treatments to therapies for cardiac abnormalities.

"All genes are not expressed all the time in all cells. Instead, each tissue type has its own epigenetic program that determines which genes get turned on or off at any moment," said co-senior author Thomas Cech, a Nobel laureate and distinguished professor of biochemistry. "We determined in great detail that RNA is a master regulator of this epigenetic silencing and that in the absence of RNA, this system cannot work. It is critical for life."

Scientists have known for decades that while each cell has identical genes, cells in different organs and tissues express them differently. Epigenetics, or the machinery that switches genes on or off, makes this possible.

But just how that machinery works has remained unclear.

In 2006, John Rinn, now a professor of biochemistry at CU Boulder and co-senior author on the new paper, proposed for the first time that RNA - the oft-overlooked sibling of DNA (deoxyribonucleic acid) - might be key.

In a landmark paper in Cell, Rinn showed that inside the nucleus, RNA attaches itself to a folded cluster of proteins called polycomb repressive complex 2 (PRC2), which is believed to regulate gene expression. Numerous other studies have since found the same and added that different RNAs also bind to different protein complexes.

The hotly debated question: Does this actually matter in determining a cell's fate?

No fewer than 502 papers have been published since. Some determined RNA is key in epigenetics; others dismissed its role as tangential at best.

So, in 2015, Yicheng Long, a biochemist and postdoctoral researcher in Cech's lab, set out to ask the question again using the latest available tools. In a chance encounter in a breakroom at the BioFrontiers Institute, where both labs are housed, Long bumped into Taeyoung Hwang, a computational biologist in Rinn's lab.

A unique partnership was born.

"We were able to use data science approaches and high-powered computing to understand molecular patterns and evaluate RNA's role in a novel, quantitative way," said Hwang, who along with Long is co-first-author on the new paper.

In the lab, the team then used a simple enzyme to remove all RNA in cells to understand whether the epigenetic machinery still found its way to DNA to silence genes. The answer was 'no.'

"RNA seemed to be playing the role of air traffic controller, guiding the plane - or protein complex - to the right spot on the DNA to land and silence genes," said Long.

In a third step, they used the gene-editing technology known as CRISPR to develop a line of stem cells destined to become human heart muscle cells, but in which the protein complex PRC2 was incapable of binding to RNA. In essence, the plane couldn't connect with air traffic control and lost its way, and the process fell apart.

By day 7, the normal stem cells had begun to look and act like heart cells. But the mutant cells didn't beat. Notably, when normal PRC2 was restored, they began to behave more normally.

"We can now say, unequivocally, that RNA is critical in this process of cell differentiation," said Long.

Previous research has already shown that genetic mutations in humans that disrupt RNA's ability to bind to these proteins boost the risk of certain cancers and fetal heart abnormalities.

Ultimately, the researchers envision a day when RNA-targeted therapies could be used to address such problems.

"These findings will set a new scientific stage showing an inextricable link between epigenetics and RNA biology," said Rinn. "They could have broad implications for understanding, and addressing, human disease going forward."

Credit: 
University of Colorado at Boulder