Tech

The wars in the former Yugoslavia continue in the classroom

image: Bosnian pupils

Image: 
Photo: UNICEF BiH/Facebook

According to the Education Act, schools in the ethnically divided Bosnia and Herzegovina must teach students "democratic ideals in a multicultural society." But according to new research from the University of Copenhagen, the opposite happens: Segregated schools perpetuate ethnic divisions between Croats, Serbs and Bosniaks, making reconciliation after the 1992-1995 wars extremely difficult.

Twenty-five years ago, the warring factions in the former Yugoslavia signed a peace agreement. Bosnia and Herzegovina, where 100,000 people lost their lives during the war, is now an independent state comprising two entities: the Federation of Bosnia and Herzegovina and the Republika Srpska. It is a division that reflects the three groups in the country: the Muslim Bosniaks, the Catholic Croats and the Orthodox Serbs.

The ethnic division of the country is also seen in the education system, where no fewer than thirteen ministries of education are responsible for teaching in local Serb, Croat and Bosniak cantons and districts.

"The education system in Bosnia and Herzegovina is an example of how even the best intentions can lead to bad results: In the Education Act, which was drafted on the initiative of the international community, emphasis is placed on promoting students' democratic education in a multicultural society. In principle, this is what all parties have agreed on, says PhD Selma Bukovica Gundersen, who has just defended her PhD dissertation on the history classes in Bosnia and Herzegovina's schools." She continues:

"In practice, this is just not what happens because when the new constitution was written in 1995, the international community also wanted to ensure that children could be taught in their own language. This had the unintended consequence that the previous nationwide education system was replaced with an ethnically segregated system with curricula and textbooks in the now three official languages - which is basically one and the same language. This means, for example, that the pupils are presented with three fundamentally different versions of the war 1992-1995 in their history classes, depending on whether they attend a Croatian, Serbian or Bosniak school. In this way, the schools perpetuate ethnic and religious differences rather than prepare the ground for dialogue about the difficult and sensitive past."

The children are left alone with difficult thoughts

In connection with her dissertation, Selma Bukovica Gundersen interviewed history teachers and other key actors in school governance, observed history classes and read a large number of documents such as curricula, history books and educational legislation. Finally, she collected and analysed 103 essays written by schoolchildren who were trying to come to grips with their identity and their knowledge of the war 1992-1995:

"The structure of the education system and the teaching materials, which are tailored to suit specific ethnic groups, mean that children primarily identify themselves with their own group, because there is no shared identity they can choose, even if they wanted to. The schools thus sustain a 'discourse of impossibility'- that is, the notion that co-existence across ethnic and religious divides is impossible. And it is clear from the essays that many children are very alone with difficult thoughts about war, grief, identity and belonging, and these are either addressed in a very one-sided fashion at school or not at all," says Selma Bukovica Gundersen and elaborates:

"The newly elected mayor in Banja Luka, which is the capital of the Serbian part of Bosnia and Herzegovina, is a young man who is perceived as the man of the future, a man with the potential to create change. He is only 27 years old and belongs to the generation I have written about in my dissertation - the generation that has no personal recollection of the war 1992-1995 but has grown up in a divided country. He does not recognise the genocide in Srebrenica or The Hague trials, and he can therefore be said to be a product of the segregated schools that reproduce the ethnonational narratives of the past. The same separation policy that was practised in the late 1980's and early 1990's, when World War II was the contentious subject."

History teachers are under pressure

According to Selma Bukovica Gundersen, the lack of political will in local school districts to handle the memory of the war 1992-1995 in constructive ways challenges teachers when communicating the controversial topic in their classrooms.

"Many teachers try to avoid dealing with the topic in their classes, but also acknowledge that this is hardly a viable or future-proof solution. Other teachers try to navigate between the local demands for rigorous ethnonational communication of history and the national and international demands for diversity and democratic dialogue. This is obviously not easy, and they feel under a lot of pressure," explains Selma Bukovica Gundersen and concludes:

"In my view, it is absolutely crucial that the education system in Bosnia and Herzegovina is capable of introducing future generations to the causes and consequences of the war 1992-1995, but without becoming a tool for narrow religious and ethnic identities, which, unfortunately, is the case now. If the idea is that future generations should be able to unite the divided country, you need to agree on a common language for the past across ethnic boundaries and establish a narrative that subsequent generations can be taught. We must ask ourselves how long a state can survive on the basis of a purely formal and administrative link between the state and its citizens, but without a common understanding of or interpretation of history?"

According to Selma Bukovica Gundersen, however, the theme of the dissertation is in no way unique to Bosnia and Herzegovina. It matters not only in post-war societies but in all societies that must deal with ethnic and religious diversity - in other words, wherever the challenge is to create a democracy that encompasses several cultures, in which different ethnic groups must be able to coexist peacefully.

Credit: 
University of Copenhagen - Faculty of Humanities

A new modifier increases the efficiency of perovskite solar cells

image: Perovskite module prototype

Image: 
Sergey Gnuskov/NUST MISIS

A research team from NUST MISIS has presented an improved structure for perovskite solar cells. The scientists modified perovskite-based solar cells using MXenes -- thin two-dimensional titanium carbides with high electrical conductivity. The MXene-modified cells showed superior performance, with power conversion efficiency exceeding 19% (the reference demonstrated 17%) and improved stabilized power output with respect to reference devices. The results have been published in the international scientific journal Nano Energy.

Perovskite solar cells are a promising alternative energy technology worldwide. They can be printed on special inkjet or slot-die printers with a minimal number of vacuum processing steps, which reduces the cost of the device compared to traditional silicon solar cell technology.

Their other advantages are flexibility (the solar cell can be made on substrates of PET, a common material for plastic bottles) and compactness. Perovskite solar cells can be mounted on the walls of buildings and on the curved surfaces of panoramic car roofs, providing an independent power supply.

The perovskite module has a sandwich structure: electrons are collected between the layers, and as a result the energy of sunlight is converted into electrical energy. The layers are very thin -- from 10 to 50 nanometers -- and the "sandwich" itself is thinner than a human hair. The charge carriers in the solar cell should be collected with minimal losses during electron transport; reducing such losses increases the power of the solar cell.

A group of physicists from NUST MISIS and the University of Rome Tor Vergata (Italy) has shown experimentally that the addition of a small amount of titanium carbide-based MXenes to the light-absorbing perovskite layer improves electron transport and optimizes the performance of the solar cell. The name MXene comes from the synthesis process: the material is made by etching and exfoliating atomically thin metal carbide layers from aluminum-containing MAX phases (layered hexagonal carbides and nitrides).

"In this work, we demonstrate a useful role of MXenes doping both for the photoactive layer (perovskite) and for the electron transport layer (fullerenes) in the structure of solar cells based on nickel oxide," said the co-author of the paper, a researcher from the NUST MISIS Laboratory for Advanced Solar Energy, post-graduate student Anastasia Yakusheva. "On the one hand, the addition of MXenes helps to align the energy levels at the perovskite/fullerene interface, and, on the other hand, it helps to control the concentration of defects in the thin-film device, and improves the collection of photocurrent."

The solar cells developed with the new approach showed improved characteristics, with a power conversion efficiency exceeding 19%, roughly two percentage points higher than the reference devices.
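Stated as raw numbers, the gain can be read either as an absolute difference in percentage points or as a relative improvement over the reference cell; a quick arithmetic sketch using the rounded efficiencies quoted above:

```python
# Rounded power conversion efficiencies quoted in the text.
reference_pce = 17.0   # % for the reference devices
modified_pce = 19.0    # % for the MXene-modified devices

absolute_gain = modified_pce - reference_pce            # in percentage points
relative_gain = absolute_gain / reference_pce * 100.0   # relative to the reference

print(f"Absolute gain: {absolute_gain:.1f} percentage points")         # 2.0
print(f"Relative gain: {relative_gain:.1f}% over the reference cell")  # about 11.8
```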

The approach proposed by the developers can easily be scaled to the format of modules and large-area panels. Doping with MXenes does not change the fabrication sequence: it is integrated only into the initial stage of ink preparation, with no changes to the architecture of the device.

Credit: 
National University of Science and Technology MISIS

Color is in the eye of the beholder

image: Atala hairstreak (Eumaeus atala) foraging on flower nectar in the Montgomery Botanical Garden, Miami, Florida.

Image: 
Image copyright Nanfang Yu (Columbia University).

The colors in a flower patch appear completely different to a bear, a honeybee, a butterfly and humans. The ability to see these colors is generated by specific properties of opsins - light-sensitive proteins in the retina of our eyes. The number of opsins expressed and the molecular structure of the receptor proteins determine the colors we see.

In a paper published February 9 in Proceedings of the National Academy of Sciences, a team of researchers led by Harvard University develops a novel method to express long-wavelength invertebrate opsin proteins in vitro and details the molecular basis of redshifts (toward longer wavelengths) and blueshifts (toward shorter wavelengths) in the opsins of the iconic tropical lycaenid butterfly Eumaeus atala.

The study, led by Postdoctoral Fellow Marjorie Liénard and Professor Naomi Pierce of the Department of Organismic and Evolutionary Biology at Harvard University, Research Associate Emeritus Gary Bernard of the University of Washington, Seattle, and Professor Feng Zhang of the Broad Institute, discovered previously unknown opsins that produce red-shifted, long-wavelength sensitivity in the visual system of Eumaeus atala. With this method, the researchers could pinpoint the specific base-pair changes responsible for the spectral tuning of these visual proteins and reveal how vision genes evolved.

Aside from primates, relatively few animals living on land can perceive long-wavelength orange and red light. However, researchers have long known that certain butterflies have red light photoreceptors and show a preference to gather nectar at red flowers. The visual range for human eyes is typically 380nm to 700nm, but many insects can perceive shorter wavelengths of ultraviolet light below 400nm and they sometimes use these shorter wavelengths as a "private channel" of communication to signal to each other. Many flowers also take advantage of this ultraviolet reflectance to attract and signal rewards for pollinating insects such as bees, flies, and butterflies.

Given the importance of color-guided behaviors and the remarkable photoreceptor spectral diversity observed in insects, the dynamic diversification of opsin genes across lineages highlights their central role in adaptation. Insects are an ideal system in which to explore the dynamics of photoreceptor evolution: much of their behavior is color-guided (e.g., mate choice), and they vary remarkably in their spectral sensitivities. Until now, however, researchers have lacked a set of tools to probe the molecular details of insect vision.

To do this, researchers needed a tool to isolate and express a particular insect opsin gene in order to explore its structure and function when expressed in the membranes of cells in cell culture, a "heterologous expression system". Previously this method had been used to analyze a few insect opsins sensitive to shorter wavelengths, but opsins coding for longer wavelengths proved intractable, leading researchers to speculate that these opsins were too unstable to be expressed in cell membranes.

"Because of the difficulty of expressing long-wavelength invertebrate opsins in vitro, our understanding of the molecular basis of functional shifts in opsin spectral sensitivities has been biased towards research primarily in vertebrates," said Liénard. The study by Liénard et al. provides the tools needed to understand precisely how a single amino acid change in an opsin protein can alter what an insect sees, and it opens the way to teasing apart the genotype-phenotype relationships underlying spectral tuning and visual adaptations in insects.

"Once we understand how genes making up the light-sensitive opsins in insect eyes function, we can start to retrace the evolutionary transitions involved in adaptive color vision across invertebrates," said Pierce.

Liénard made the critical breakthrough in expressing invertebrate opsin proteins in vitro by optimizing each step: designing engineering cassettes, optimizing gene expression and appropriately scaling up. The new system can be used to investigate opsins whose sensitivities range from the ultraviolet through to long wavelengths in the red, bordering on the near infrared. And it can explore the behavior of these opsins without the interference of other eye components, such as filtering pigments, that often surround photoreceptor cells.

The researchers characterized and purified all visual opsin genes in vitro for multiple species of butterflies. They then analyzed the photoreceptors where the opsin molecules are expressed across the eyes in the Atala hairstreak (Eumaeus atala). They looked for consistent patterns of base pair changes in different opsins, and then experimentally mutated the sequences of those opsins to test their evolutionary spectral tuning trajectories.

They discovered a new type of opsin absorbing red light and identified amino acids that are key to evolving green-shifted blue opsin functions. Lycaenid butterflies are famous for their rich diversity of wing coloration and behavioral ecology. Compared to the ancestral insect eye, which is thought to have been equipped with only one blue and one green receptor, these butterflies are able to maintain reliable color vision both across blue-green and green-red ranges of the light spectrum owing to the adaptive evolution of new opsin functions. Coordinated spectral shifts in green and red opsins underlie the genetic basis of red color vision in these butterflies.

The study also surprisingly showed that insects, which rely on a different subclass of opsin G-protein coupled receptors compared to vertebrates, nevertheless change blue opsin absorption by convergently shifting some of the same key amino acid residues in the protein binding pocket as short wavelength opsins of vertebrates. "But we also identified new tuning sites," said Liénard, "and the question now is whether the chromophore-binding sites are specific to this butterfly species, or whether they occurred recurrently as a signature of convergent adaptive evolution across insects."

"Color vision is driven by neural comparison among photoreceptors that have different spectral sensitivities," said Bernard. "But studying living eyes is a tedious task and obtaining a sufficient number of living individuals to make the measurements can also be a limiting factor."

Liénard, now a researcher at Lund University in Sweden, agreed, "This is why reconciling physiology and functional heterologous expression opens up new avenues in the field, especially since all invertebrate groups share the same opsin subclass. We hope this assay will 'deorphanize' functional studies of invertebrate opsins."

"Ultimately, this opens up the opportunity to better understand the structure-function relationships of light sensitive receptors," said Pierce, "and most importantly, how genotypic variation can translate into functional phenotypes, which is a cornerstone of evolutionary biology."

Credit: 
Harvard University, Department of Organismic and Evolutionary Biology

The pandemic lockdown leads to cleaner city air across Canada, Concordia paper reveals

image: Xuelin Tian: "Traffic congestion levels decreased by 69 per cent in Toronto and by 75 per cent in Montreal, compared to the same week of March in 2019."

Image: 
Concordia University

The COVID-19 pandemic that shuttered cities around the world did not just affect the way we work, study and socialize. It also affected our mobility. With millions of workers no longer commuting, vehicle traffic across Canada has plummeted. This has had a significant impact on the quality of air in major Canadian cities, according to a new study by Concordia researchers.

A paper published in the journal Science of the Total Environment looked at downtown air quality monitoring station data from Vancouver, Edmonton, Saskatoon, Winnipeg, Toronto, Montreal, Halifax and St. John's. It compared the cities' concentration levels of nitrogen dioxide, carbon monoxide and sulfur dioxide measured between February and August 2020 to the figures recorded over the same period in 2018 and 2019. The researchers also used satellite imagery and urban transportation fuel consumption figures to investigate emissions, along with traffic congestion data provided by tracking technology embedded in phones and cars worldwide.
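The core of that comparison is a simple year-over-year calculation; the sketch below illustrates the idea with pandas, using hypothetical station records and column names rather than the study's actual dataset:

```python
import pandas as pd

# Hypothetical monitoring-station records: one row per city, pollutant and date.
# The real study used downtown station data from eight Canadian cities.
df = pd.DataFrame({
    "city": ["Montreal", "Montreal", "Toronto", "Toronto"],
    "pollutant": ["NO2"] * 4,
    "date": pd.to_datetime(["2019-03-20", "2020-03-20", "2019-03-20", "2020-03-20"]),
    "concentration_ppb": [18.0, 9.5, 21.0, 11.0],  # made-up values
})

df["year"] = df["date"].dt.year
means = (df.groupby(["city", "pollutant", "year"])["concentration_ppb"]
           .mean()
           .unstack("year"))
means["pct_change"] = (means[2020] - means[2019]) / means[2019] * 100
print(means)
```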

Not surprisingly, the researchers found that emission levels dropped dramatically over the course of the pandemic. The most noticeable drop-off occurred in week 12 of 2020 -- the one beginning Sunday, March 15, when national lockdown measures were implemented.

"We saw traffic congestion levels decrease by 69 per cent in Toronto and by 75 per cent in Montreal, compared to the same week in 2019," says the paper's lead author, Xuelin Tian, a second-year MSc student at the Gina Cody School of Engineering and Computer Science. Her co-authors include fellow student Zhikun Chen, her supervisor Chunjiang An, assistant professor in the Department of Building, Civil and Environmental Engineering, and Zhiqiang Tian of Xi'an Jiaotong University in China.

Less gasoline means less pollution

The paper notes that motor gasoline consumption fell by almost half during the pandemic's early weeks, with a similar, corresponding drop seen in carbon dioxide emissions. Motor gasoline consumption added 8,253.52 million kilograms of carbon dioxide to the atmosphere in April 2019, according to the authors' data. That number dropped to 4,593.01 million kilograms in April 2020.
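Those two figures imply a drop of roughly 44 per cent, consistent with the "almost half" description; the arithmetic:

```python
# CO2 attributed to motor gasoline consumption, in million kilograms (from the text).
co2_april_2019 = 8253.52
co2_april_2020 = 4593.01

reduction = co2_april_2019 - co2_april_2020
reduction_pct = reduction / co2_april_2019 * 100
print(f"Reduction: {reduction:.2f} million kg ({reduction_pct:.1f}%)")  # about 44.3%
```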

There have also been significant drops in the concentration levels of nitrogen dioxide in Vancouver, Edmonton, Toronto and Montreal since the beginning of the pandemic. Similarly, concentration levels of carbon monoxide, closely linked to the transportation and mobile equipment sectors, dropped. In Edmonton, carbon monoxide concentration levels fell by as much as 50 per cent, from 0.14 parts per million in March 2018 to 0.07 in March 2020.

Emissions began to grow again over the summer, but the researchers have not yet had a chance to examine data from the second lockdown that began in late fall/winter 2020.

Aside from providing a kind of snapshot of a particularly unusual period, the data can also help governments assess the long-term impact of replacing gas-burning vehicles with electric ones on Canadian city streets.

"This pandemic provided an opportunity for scenario analysis, although it wasn't done on purpose," says An, Concordia University Research Chair in Spill Response and Remediation.

"Governments everywhere are trying to reduce their use of carbon-based fuels. Now we have some data that shows what happens when we reduce the number of gasoline-powered vehicles and the effect that has on emissions."

Credit: 
Concordia University

Human eye beats machine in archaeological color identification test

image: Florida Museum of Natural History archaeologists tested how accurately and consistently the X-Rite Capsure, right, could score the color of chips, pieces of fired clay and sediment in the field.

Image: 
Lindsay Bloch/Florida Museum

GAINESVILLE, Fla. --- A ruler and scale can tell archaeologists the size and weight of a fragment of pottery - but identifying its precise color can depend on individual perception. So, when a handheld color-matching gadget came on the market, scientists hoped it offered a consistent way of determining color, free of human bias.

But a new study by archaeologists at the Florida Museum of Natural History found that the tool, known as the X-Rite Capsure, often misread colors readily distinguished by the human eye.

When tested against a book of color chips, the machine failed to produce correct color scores in 37.5% of cases, even though its software system included the same set of chips. In an analysis of fired clay bricks, the Capsure matched archaeologists' color scores only 35% of the time, dropping to about 5% matching scores when reading sediment colors in the field. Researchers also found the machine was prone to reading color chips as more yellow than they were and sediment and clay as too red.

"I think that we were surprised by how much we disagreed with the instrument. We had the expectation that it would kind of act as the moderator and resolve conflicts," said Lindsay Bloch, collection manager of the Florida Museum's Ceramic Technology Lab and lead study author. "Instead, the device would often have an entirely different answer that we all agreed was wrong."

Identifying subtle differences in color can help archaeologists compare the composition of soil and the origins of artifacts, such as pottery and beads, to understand how people lived and interacted in the past. Color can also reveal whether materials have been exposed to fire, indicating how communities used surrounding natural resources.

Today, the Munsell color system, created by Albert Munsell in 1905 and later adopted by the U.S. Department of Agriculture for soil research, is the archaeological standard for identifying colors. Researchers use a binder of 436 unique color chips to determine a Munsell color score for artifacts, sediment and objects such as bones, shell and rocks. These scores enable archaeologists around the world to compare colors across sites and time periods. But the process of assigning scores can vary based on lighting conditions, the quality of a sample and the perspective of the researcher.

This study is the first to test and record the accuracy of the X-Rite Capsure, a device made by the same company that owns the color authority Pantone. Although marketed to archaeologists, the device was originally designed for interior designers and cosmetologists, not research, Bloch said.

"I think the main takeaway was just sort of surprise that it's something that is marketed for our field, specifically for archaeologists, but hasn't been made for us and the kind of data we need to collect," she added. "When you read the manual, it says you should always verify that the color the machine tells you looks right with your eyes, which seems to negate the use of the instrument."

In an experiment designed with the help of University of Florida undergraduate researchers Claudette Lopez and Emily Kracht, the team tested the Capsure's readings of the three elements of Munsell's system: a color's general family, or hue; intensity, also known as chroma; and lightness, also called value.

The team first tested the Capsure on all 436 Munsell soil color chips, rating its reading as correct if it matched the exact score on a chip three out of five times. It correctly scored 274 chips. Of its errant readings, about 75% were misidentifications of hue. The Capsure was consistent, though often wrong, producing the same reading five times for 89% of the chips.
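A small bookkeeping sketch of that first test, using the counts given above (the published 37.5% failure rate reflects the study's own rounding and scoring rules):

```python
total_chips = 436      # Munsell soil color chips tested
correct_chips = 274    # chips the Capsure scored correctly (3 of 5 readings matched)

accuracy = correct_chips / total_chips
print(f"Correct readings:   {accuracy:.1%}")      # about 62.8%
print(f"Incorrect readings: {1 - accuracy:.1%}")  # about 37.2%
```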

To determine how well the machine performed in a typical laboratory setting, the team tested its color readings of 140 pottery briquettes that had been assigned Munsell scores by Lopez. The Capsure matched the archaeologist's scores in 35% of cases, again tending to misread hue. It proved consistent in this second test as well, yielding the same score across all trials of more than 70% of the briquettes.

In the most challenging of color-identification conditions - outdoors, where lighting and texture can vary - the machine only matched archaeologists' scores of sediment samples about 5% of the time, often rating a shade darker or lighter. For one sample, the Capsure reported colors from five different families, even though archaeologists agreed the sediment was a single hue. Bloch said the discrepancy was likely due to moisture, sand and shells, which don't usually interfere with human observations.

Unlike some other methods of identifying color, the Capsure is a remote control-sized device that can provide a reading in seconds. Bloch said the tool's simple design and accessibility lend it to other scientific applications, but that the team's results point to a need for further scrutiny of how archaeologists record color.

"This new tool has really forced us to see that color is subjective and that, even with a supposedly objective instrument, it may be much more complicated than we've been led to believe," she said. "We need to pay really close attention and record how we're describing color in order to make good data. Ultimately, if we're putting bad color data in, we're going to get bad data out."

Bloch said she would give the Capsure three out of five stars for being easy to use and offering helpful ways to store data.

"The ding is for the quality of data because it's still kind of unknown. At this point, I think that our team would say the subjective eye is better."

Credit: 
Florida Museum of Natural History

Quantum computing enables simulations to unravel mysteries of magnetic materials

image: The researchers embedded a programmable model into a D-Wave quantum computer chip.

Image: 
D-Wave

A multi-institutional team became the first to generate accurate results from materials science simulations on a quantum computer that can be verified with neutron scattering experiments and other practical techniques.

Researchers from the Department of Energy's Oak Ridge National Laboratory; the University of Tennessee, Knoxville; Purdue University and D-Wave Systems harnessed the power of quantum annealing, a form of quantum computing, by embedding an existing model into a quantum computer.

Characterizing materials has long been a hallmark of classical supercomputers, which encode information using a binary system of bits that are each assigned a value of either 0 or 1. But quantum computers -- in this case, D-Wave's 2000Q -- rely on qubits, which can be valued at 0, 1 or both simultaneously because of a quantum mechanical capability known as superposition.

"The underlying method behind solving materials science problems on quantum computers had already been developed, but it was all theoretical," said Paul Kairys, a student at UT Knoxville's Bredesen Center for Interdisciplinary Research and Graduate Education who led ORNL's contributions to the project. "We developed new solutions to enable materials simulations on real-world quantum devices."

This unique approach proved that quantum resources are capable of studying the magnetic structure and properties of these materials, which could lead to a better understanding of spin liquids, spin ices and other novel phases of matter useful for data storage and spintronics applications. The researchers published the results of their simulations -- which matched theoretical predictions and strongly resembled experimental data -- in PRX Quantum.

Eventually, the power and robustness of quantum computers could enable these systems to outperform their classical counterparts in terms of both accuracy and complexity, providing precise answers to materials science questions instead of approximations. However, quantum hardware limitations previously made such studies difficult or impossible to complete.

To overcome these limitations, the researchers programmed various parameters into the Shastry-Sutherland Ising model. Because it shares striking similarities with the rare earth tetraborides, a class of magnetic materials, subsequent simulations using this model could provide substantial insights into the behavior of these tangible substances.

"We are encouraged that the novel quantum annealing platform can directly help us understand materials with complicated magnetic phases, even those that have multiple defects," said co-corresponding author Arnab Banerjee, an assistant professor at Purdue. "This capability will help us make sense of real material data from a variety of neutron scattering, magnetic susceptibility and heat capacity experiments, which can be very difficult otherwise."

Magnetic materials can be described in terms of magnetic particles called spins. Each spin has a preferred orientation based on the behavior of its neighboring spins, but rare earth tetraborides are frustrated, meaning these orientations are incompatible with each other. As a result, the spins are forced to compromise on a collective configuration, leading to exotic behavior such as fractional magnetization plateaus. This peculiar behavior occurs when an applied magnetic field, which normally causes all spins to point in one direction, affects only some spins in the usual way while others point in the opposite direction instead.

Using a Monte Carlo simulation technique powered by the quantum evolution of the Ising model, the team evaluated this phenomenon in microscopic detail.
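For readers unfamiliar with the classical side of that comparison, the sketch below shows a generic Metropolis Monte Carlo sweep for a square-lattice Ising model in an applied field. It is a simplified stand-in for illustration only, not the Shastry-Sutherland embedding or the quantum-annealing protocol the team actually used:

```python
import numpy as np

def metropolis_sweep(spins, J=1.0, h=0.0, T=1.0, rng=None):
    """One Metropolis sweep over a square-lattice Ising model.

    spins : 2D array of +1/-1 values
    J     : nearest-neighbour coupling (J > 0 favours alignment)
    h     : applied magnetic field
    T     : temperature (in units where k_B = 1)
    """
    rng = rng or np.random.default_rng()
    n, m = spins.shape
    for _ in range(n * m):
        i, j = rng.integers(n), rng.integers(m)
        # Sum of the four nearest neighbours with periodic boundaries.
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % m] + spins[i, (j - 1) % m])
        dE = 2.0 * spins[i, j] * (J * nb + h)  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

# Example: relax a random 16x16 lattice in a field and report its magnetization.
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(200):
    metropolis_sweep(spins, J=1.0, h=0.5, T=1.5, rng=rng)
print("magnetization per spin:", spins.mean())
```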

"We came up with new ways to represent the boundaries, or edges, of the material to trick the quantum computer into thinking that the material was effectively infinite, and that turned out to be crucial for correctly answering materials science questions," said co-corresponding author Travis Humble. Humble is an ORNL researcher and deputy director of the Quantum Science Center, or QSC, a DOE Quantum Information Science Research Center established at ORNL in 2020. The individuals and institutions involved in this research are QSC members.

Quantum resources have previously simulated small molecules to examine chemical or material systems. Yet, studying magnetic materials that contain thousands of atoms is possible because of the size and versatility of D-Wave's quantum device.

"D-Wave processors are now being used to simulate magnetic systems of practical interest, resembling real compounds. This is a big deal and takes us from the notepad to the lab," said Andrew King, director of performance research at D-Wave. "The ultimate goal is to study phenomena that are intractable for classical computing and outside the reach of known experimental methods."

The researchers anticipate that their novel simulations will serve as a foundation to streamline future efforts on next-generation quantum computers. In the meantime, they plan to conduct related research through the QSC, from testing different models and materials to performing experimental measurements to validate the results.

"We completed the largest simulation possible for this model on the largest quantum computer available at the time, and the results demonstrated the significant promise of using these techniques for materials science studies going forward," Kairys said.

Credit: 
DOE/Oak Ridge National Laboratory

Shining a light on the true value of solar power

image: "Customers with solar distributed generation are making it so utility companies don't have to make as many infrastructure investments, while at the same time solar shaves down peak demands when electricity is the most expensive," says Joshua Pearce, Richard Witte Endowed Professor of Materials Science and Engineering and professor of electrical and computer engineering at Michigan Technological University.

Image: 
Sarah Atkinson/Michigan Tech

Beyond the environmental benefits and lower electric bills, it turns out installing solar panels on your house actually benefits your whole community. Value estimations for grid-tied photovoltaic systems show that solar panels are beneficial for utility companies and consumers alike.

For years some utility companies have worried that solar panels drive up electric costs for people without panels. Joshua Pearce, Richard Witte Endowed Professor of Materials Science and Engineering and professor of electrical and computer engineering at Michigan Technological University, has shown the opposite is true -- grid-tied solar photovoltaic (PV) owners are actually subsidizing their non-PV neighbors.

Most PV systems are grid-tied and convert sunlight directly into electricity that is either used on-site or fed back into the grid. At night or on cloudy days, PV-owning customers use grid-sourced electricity so no batteries are needed.

"Anyone who puts up solar is being a great citizen for their neighbors and for their local utility," Pearce said, noting that when someone puts up grid-tied solar panels, they are essentially investing in the grid itself. "Customers with solar distributed generation are making it so utility companies don't have to make as many infrastructure investments, while at the same time solar shaves down peak demands when electricity is the most expensive."

Pearce and Koami Soulemane Hayibo, graduate student in the Michigan Tech Open Sustainability Technology (MOST) Lab, found that grid-tied PV-owning utility customers are undercompensated in most of the U.S., as the "value of solar" eclipses both the net metering and two-tiered rates that utilities pay for solar electricity. Their results are published online now and will be printed in the March issue of Renewable and Sustainable Energy Reviews.

The value of solar is becoming the preferred method for evaluating the economics of grid-tied PV systems. Yet value of solar calculations are challenging, and there is widespread disagreement in the literature on the methods and data needed. To overcome these limitations, Pearce and Hayibo's paper reviews past studies to develop a generalized model that considers realistic costs and liabilities utility companies can avoid when individual people install grid-tied solar panels. For each component of the value, a sensitivity analysis is run on the core variables, and these sensitivities are applied to the total value of solar.

The overall value of solar equation has numerous components (a simple summation sketch follows the list below):

Avoided operation and maintenance costs (fixed and variable).

Avoided fuel.

Avoided generation capacity.

Avoided reserve capacity (plants on standby that turn on if you have, for example, a large air conditioning load on a hot day).

Avoided transmission capacity (lines).

Environmental and health liability costs associated with forms of electric generation that are polluting.
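A minimal sketch of how such components might be summed into a single value-of-solar figure follows (this is the sketch referenced above); every number is a hypothetical placeholder rather than a result from the paper, and a real valuation would plug in utility-specific data and run the sensitivity analysis on each component:

```python
# Hypothetical per-kWh component values (cents/kWh); placeholders only.
components = {
    "avoided_operation_and_maintenance": 0.5,
    "avoided_fuel": 4.0,
    "avoided_generation_capacity": 2.0,
    "avoided_reserve_capacity": 0.5,
    "avoided_transmission_capacity": 1.5,
    "environmental_and_health_liability": 3.0,
}

value_of_solar = sum(components.values())
print(f"Illustrative value of solar: {value_of_solar:.1f} cents/kWh")

# Crude one-at-a-time sensitivity: vary each component by +/-20%.
for name, base in components.items():
    low = value_of_solar - 0.2 * base
    high = value_of_solar + 0.2 * base
    print(f"{name:38s} total ranges from {low:.2f} to {high:.2f} cents/kWh")
```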

Pearce said one of the paper's goals was to provide the equations to determine the value of solar so individual utility companies can plug in their proprietary data to quickly make a complete valuation.

"It can be concluded that substantial future regulatory reform is needed to ensure that grid-tied solar PV owners are not unjustly subsidizing U.S. electric utilities," Pearce explains. "This study provides greater clarity to decision makers so they see solar PV is truly an economic benefit in the best interest of all utility customers."

Solar PV technology is now a profitable method to decarbonize the grid, but if catastrophic climate change is to be avoided, emissions from transportation and heating must also decarbonize, Pearce argues.

One approach to renewable heating is leveraging improvements in PV with heat pumps (HPs), and it turns out investing in PV+HP tech has a better rate of return than CDs or savings accounts.

To determine the potential for PV+HP systems in Michigan's Upper Peninsula, Pearce performed numerical simulations and economic analysis using the same loads and climate but local electricity and natural gas rates for Sault Ste. Marie, on both the Canadian and the U.S. side of the border. North American residents can profitably install residential PV+HP systems, earning up to a 1.9% return in the U.S. and 2.7% in Canada, to provide for all of their electric and heating needs.
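The quoted returns are internal rates of return on the up-front system cost weighed against the yearly utility bills avoided over the system's 25-year life. A self-contained way to compute an IRR for a cash-flow stream is bisection on the net present value; the cost and savings below are illustrative assumptions, not the paper's inputs:

```python
def npv(rate, cashflows):
    """Net present value of a cash-flow list, with cashflows[0] occurring now."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes NPV changes sign on [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical example: a $20,000 installed PV+HP system that avoids $1,100 per year
# in electricity and heating bills over a 25-year life.
cashflows = [-20000] + [1100] * 25
print(f"IRR: {irr(cashflows):.2%}")
```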

"Our results suggest northern homeowners have a clear and simple method to reduce their greenhouse gas emissions by making an investment that offers a higher internal rate of return than savings accounts, CDs and global investment certificates in both the U.S. and Canada," Pearce said. "Residential PV and solar-powered heat pumps can be considered 25-year investments in financial security and environmental sustainability."

Credit: 
Michigan Technological University

'Defective' carbon simplifies hydrogen peroxide production

image: Scientists at Rice University have introduced plasma-treated carbon black as a simple and highly efficient catalyst for the production of hydrogen peroxide. Defects created in the carbon provide more catalytic sites to reduce oxygen to hydrogen peroxide.

Image: 
Tour Group/Yakobson Research Group/Rice University

HOUSTON - (Feb. 9, 2021) - Rice University researchers have created a "defective" catalyst that simplifies the generation of hydrogen peroxide from oxygen.

Rice scientists treated metal-free carbon black, the inexpensive, powdered product of petroleum production, with oxygen plasma. The process introduces defects and oxygen-containing groups into the structure of the carbon particles, exposing more surface area for interactions.

When used as a catalyst, the defective particles known as CB-Plasma reduce oxygen to hydrogen peroxide with 100% Faradaic efficiency, a measure of charge transfer in electrochemical reactions. The process shows promise to replace the complex anthraquinone-based production method that requires expensive catalysts and generates toxic organic byproducts and large amounts of wastewater, according to the researchers.

The research by Rice chemist James Tour and materials theorist Boris Yakobson appears in the American Chemical Society journal ACS Catalysis.

Hydrogen peroxide is widely used as a disinfectant, as well as in wastewater treatment, in the paper and pulp industries and for chemical oxidation. Tour expects the new process will influence the design of hydrogen peroxide catalysts going forward.

"The electrochemical process outlined here needs no metal catalysts, and this will lower the cost and make the entire process far simpler," Tour said. "Proper engineering of carbon structure could provide suitable active sites that reduce oxygen molecules while maintaining the O-O bond, so that hydrogen peroxide is the only product. Besides that, the metal-free design helps prevent the decomposition of hydrogen peroxide."

Plasma processing creates defects in carbon black particles that appear as five- or seven-member rings in the material's atomic lattice. The process sometimes removes enough atoms to create vacancies in the lattice.

The catalyst works by delivering two electrons to each oxygen molecule, allowing it to combine with two hydrogen ions to create hydrogen peroxide. (Reducing oxygen by four electrons, a process used in fuel cells, produces water instead.)
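For reference, the two pathways described here are the standard oxygen reduction half-reactions, written in their acidic form:

```latex
\begin{align*}
\text{two-electron pathway (peroxide):} \quad & \mathrm{O_2 + 2\,H^+ + 2\,e^- \longrightarrow H_2O_2} \\
\text{four-electron pathway (water):}   \quad & \mathrm{O_2 + 4\,H^+ + 4\,e^- \longrightarrow 2\,H_2O}
\end{align*}
```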

"The selectivity towards peroxide rather than water originates not from carbon black per se but, as (co-lead author and Rice graduate student) Qin-Kun Li's calculations show, from the specific defects created by plasma processing," Yakobson said. "These catalytic defect sites favor the bonding of key intermediates for peroxide formation, lowering the reaction barrier and accelerating the desirable outcome."

Tour's lab also treated carbon black with ultraviolet-ozone (CB-UV) and treated CB-Plasma with high-temperature argon to remove most of the oxygen-containing groups (CB-Argon). CB-UV was no better at catalysis than plain carbon black, but CB-Argon performed just as well as CB-Plasma over an even wider range of electrochemical potential, the lab reported.

Because the exposure of CB-Plasma to argon under high temperature removed most of the oxygen groups, the lab inferred the carbon defects themselves were responsible for the catalytic reduction to hydrogen peroxide.

The simplicity of the process could allow more local generation of the valuable chemical, reducing the need to transport it from centralized plants. Tour noted CB-Plasma matches the efficiency of state-of-the-art materials now used to generate hydrogen peroxide.

"Scaling this process is much easier than present methods, and it is so simple that even small units could be used to generate hydrogen peroxide at the sites of need," Tour said.

The process is the second introduced by Rice in recent months to make the manufacture of hydrogen peroxide more efficient. Rice chemical and biomolecular engineer Haotian Wang and his lab developed an oxidized carbon nanoparticle-based catalyst that produces the chemical from sunlight, air and water.

Credit: 
Rice University

Scientists create flexible biocompatible cilia that can be controlled by a magnet

image: Filaments made of polymer-coated iron oxide nanoparticles are obtained by exposing the material to a magnetic field under controlled temperature.

Image: 
Aline Grein Iankovski

Researchers at the University of Campinas’s Chemistry Institute (IQ-UNICAMP) in the state of São Paulo, Brazil, have developed a template-free technique to fabricate cilia of different sizes that mimic biological functions and have multiple applications, from directing fluids in microchannels to loading material into a cell, for example. The highly flexible cilia are based on polymer-coated iron oxide nanoparticles, and their motion can be controlled by a magnet.

In nature, cilia are microscopic hairlike structures found in large numbers on the surface of certain cells, causing currents in the surrounding fluid or, in some protozoans and other small organisms, providing propulsion.

To fabricate the elongated nanostructures without using a template, Watson Loh and postdoctoral fellow Aline Grein-Iankovski coated particles of iron oxide (γ-Fe2O3, known as maghemite) with a layer of a polymer containing thermoresponsive phosphonic acid groups and custom-synthesized by a specialized company. The technique leverages the binding affinity of phosphonic acid groups to metal oxide surfaces, fabricating the cilia by means of temperature control and use of a magnetic field.

“The materials don’t bind at room temperature or thereabouts, and form a clump without the stimulus of a magnetic field,” Loh explained. “It’s the effect of the magnetic field that gives them the elongated shape of a cilium.”

Grein-Iankovski started with stable particles in solution and had the idea of obtaining the cilia during an attempt to aggregate the material. “I was preparing loose elongated filaments in solution and thought about changing the direction of the field,” she recalled. “Instead of orienting them parallel to the glass slide, I placed them in a perpendicular position and found they then tended to migrate to the surface of the glass. I realized that if I forced them to stick to the glass, I could obtain a different type of material that wouldn’t be loose: its movement would be ordered and collaborative.”

The thermoresponsive polymer binds to the surface of the nanoparticles and organizes them into elongated filaments when the mixture is heated and exposed to a magnetic field. The transition occurs at a biologically compatible temperature (around 37 °C). The resulting magnetic cilia are “remarkably flexible”, she added. By increasing the concentration of the nanoparticles, their length can be varied from 10 to 100 microns. One micron (μm) is a millionth of a meter.

“The advantage of not using a template is not being subject to the limitations of this method, such as size, for example,” Grein-Iankovski explained. “In this case, to produce very small cilia we would have to create templates with microscopic holes, which would be extremely laborious. Adjustments to coat density and cilium size would require new templates. A different template has to be used for each end-product thickness. Furthermore, using a template adds another stage to the production of cilia, which is the fabrication of the template itself.”

Grein-Iankovski is the lead author of an article published in The Journal of Physical Chemistry C on the invention, which was part of a Thematic Project supported by FAPESP, with Loh as principal investigator.

“The Thematic Project involves four groups who are investigating how molecules and particles are organized at the colloidal level, meaning at the level of very small structures. Our approach is to try to find ways of controlling these molecules so that they aggregate in response to an external stimulus, giving rise to different shapes with a range of different uses,” Loh said.

Reversibility

After the magnetic field is removed, the material remains aggregated for at least 24 hours. It then disaggregates at a speed that depends on the temperature at which it was prepared. “The higher the temperature, the more intense the effect and the longer it remains aggregated outside the magnetic field,” Grein-Iankovski said.

According to Loh, the reversibility of the material is a positive point. “In our view, being able to organize and disorganize the material, to ‘switch the system on and off’, is an advantage,” Loh said. “We can adjust the temperature, how long it remains aggregated, cilium length, and coat density. We can customize the material for many different types of use, organize it and shape it for specific purposes. I believe the potential applications are countless, from biological to physical uses, including materials science applications.”

Another major advantage, Grein-Iankovski added, is the possibility of manipulating the material externally, where the tool used to do so is not inside the system. “The filaments can be used to homogenize and move particles in a fluid microsystem, in microchannels, simply by approaching a magnet from the outside. They can be made to direct fluid in this way, for example.”

The cilia can also be used in sensors, in which the particles respond to stimuli from a molecule, or to feed microscopic living organisms. “Ultimately it’s possible to feed a microorganism or cell with loose cilia, which cross the cell membrane under certain conditions. They can be made to enter a cell, and a magnetic field is applied to manipulate their motion inside the cell,” Loh said.

For more than ten years, Loh has collaborated with Jean-François Berret at Paris Diderot University (Paris 7, France) in research on the same family of polymers to obtain elongated materials for use in the biomedical field. “We’re pursuing other partnerships to explore other possible uses of the cilia,” he said.

The scientists now plan to include a chemical additive in the nanostructures that will bind the particles chemically, obtaining cilia with a higher mechanical strength that remain functional for longer when not exposed to a magnetic field, if this is desirable.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Evidence for routine brain tumor imaging is murky, but research can shed light

What is the best way to monitor a brain tumor? This question is at the heart of a new Position Statement published in open-access journal Frontiers in Oncology. The article is the work of a large collaboration of UK experts and stakeholders who met to discuss the value of routinely imaging brain tumor patients to assess their tumor treatment response, which is known as "interval imaging". Their verdict: there is very limited evidence to support the practice at present. However, the article also discusses how future research could determine and maximize the value of interval imaging by assessing its cost effectiveness and how it affects patient quality of life, treatment and survival.

Medical staff use brain scans at predetermined times to assess if a brain tumor patient is responding to treatment, but scanning frequency can range from every few weeks to every few months. Different countries and hospitals use different approaches, but what is the best approach and is any of this based on science?

Getting things right is important. Not scanning someone enough could mean that doctors miss the signs that a patient requires further treatment. Conversely, scanning someone excessively is inconvenient and impractical for patients and medical staff alike, and can cause patient anxiety, especially if the results of the scan are unclear.

Scanning patients is also expensive, and with limited budgets, healthcare facilities need to use their resources as cost-effectively as possible. Most interval imaging aims to find increases in tumor size, but tumors grow differently in different patients, which sometimes makes it difficult to draw concrete conclusions from interval imaging results. Would patients be better off receiving scans only if they experience new symptoms?

A group of experts and other stakeholders met to discuss these issues in London in 2019. The group was diverse and included numerous people with an interest in these issues. "Charity representatives, neuro-oncologists, neuro-surgeons, neuro-radiologists, neuro-psychologists, trialists, health economists, data scientists, and the imaging industry were all represented," said Dr. Thomas Booth of King's College London and the lead author on the article. Their findings are presented in this latest Position Statement.

The group discussed the evidence behind current interval imaging practices in the UK. "We found that there is very little evidence to support the currently used imaging interval schedules and that the status quo is no more than considered opinion," said Prof. Michael Jenkinson of the University of Liverpool, and senior author on the article.

So, how can we determine if interval imaging is valuable? The meeting participants also discussed a variety of potential research approaches that could cast light on the most important factors - patient quality of life, patient survival, and cost effectiveness. However, this is not without its challenges.

"The treatment complexity and relative rarity of brain tumors mean that solutions beyond traditional 'randomized controlled trials' alone are required to obtain the necessary evidence," said Booth. "We propose a range of incremental research solutions."

These include economic and statistical analyses, surveys to measure patient attitudes and quality of life while undergoing interval imaging, and even machine-learning methods to obtain more accurate predictions about the value of interval imaging from large datasets. Future targeted research is key to assessing and maximizing the potential of interval imaging, and this article lights the way.

Credit: 
Frontiers

Program led by health coaches at primary care clinics helped reduce heart risk

DALLAS, Feb. 9, 2021 -- Participants in a two-year, lifestyle intervention/weight-loss program provided through health coaches at their primary care center were able to lower their blood sugar and improve their cholesterol levels, according to new research published today in the American Heart Association's flagship journal Circulation. Researchers with the PROmoting Successful Weight Loss in Primary CarE in Louisiana (PROPEL) Trial reported previously that participants also reduced body weight by an average of 5% and note that patients who lost more weight experienced greater improvements in their heart disease risk factors.

"Our results demonstrate lifestyle intervention and weight-loss programs can be successful for people in underserved, low-income communities if you bring the program to where they are, removing barriers to participation," said PROPEL principal investigator Peter T. Katzmarzyk, Ph.D., FAHA, a professor and the Marie Edana Corcoran Endowed Chair in Pediatric Obesity and Diabetes and associate executive director for population health sciences at the Pennington Biomedical Research Center of Louisiana State University in Baton Rouge, Louisiana.

Obesity is associated with numerous serious chronic health risks, including heart attacks and strokes. Researchers estimated that between 2017 and 2018 the prevalence of obesity among adults in the United States was more than 40%. Food insecurity and lower levels of education and income increase the risk of obesity and its complications. Intensive lifestyle interventions are an effective treatment for obesity; however, access to these programs is often limited, particularly in low-income communities. The PROPEL trial examined the effectiveness of such interventions when incorporated into primary care medical clinics.

The PROPEL Trial was conducted between 2016 and 2019 at 18 clinics across Louisiana that serve low-income patients. Clinics were randomly allocated to either usual care or the intervention program, and the trial enrolled more than 800 participants between the ages of 20 and 75 with obesity, defined as a body mass index (BMI) of 30 kg/m² or higher.
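BMI here is the standard measure of weight in kilograms divided by height in metres squared; a one-line illustration:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# Example: 95 kg at 1.75 m falls just above the 30 kg/m^2 obesity threshold.
print(f"{bmi(95, 1.75):.1f} kg/m^2")  # about 31.0
```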

The usual care group received normal primary care and printed newsletters about healthy lifestyle habits. Those in the study group received 24 months of a high-intensity, lifestyle-based intervention/weight-loss program delivered by health coaches in the clinic. The program consisted of weekly sessions for the first six months and monthly sessions for the following 18 months. The 2013 guidelines for the management of overweight and obesity in adults from the American Heart Association, the American College of Cardiology and The Obesity Society served as the basis for the PROPEL program.

Findings include:

Blood sugar levels decreased by nearly 5 mg/dL among those in the lifestyle intervention group after one year; there were no corresponding changes in the usual care group.

HDL cholesterol (good cholesterol) increased at both 12 and 24 months among participants in the intervention group, yet the usual care group saw no significant changes.

Overall cardiometabolic risk scores improved significantly for participants in the intervention program, while scores for patients in the usual care group were unchanged.

The researchers suggest the collaborative care approach of the PROPEL model likely offers more successful obesity treatment than the existing Centers for Medicare and Medicaid Services model, which relies solely on the primary care practitioner.

"A broader implementation of the PROPEL model could better allow people in under-resourced communities to receive effective treatment and, thus, help to reduce the prevalence of obesity and related health conditions and risks," said Katzmarzyk.

Because the PROPEL trial included a significant proportion of Black participants, the majority of whom were female, the authors suggest more research is needed that specifically addresses men, including Black men. Additionally, they believe more study is needed on the dissemination and implementation of lifestyle intervention programs in other types of clinic settings.

Credit: 
American Heart Association

Coffee lovers, rejoice! Drinking more coffee associated with decreased heart failure risk

DALLAS, Feb. 9, 2021 -- Dietary information from three large, well-known heart disease studies suggests drinking one or more cups of caffeinated coffee may reduce heart failure risk, according to research published today in Circulation: Heart Failure, an American Heart Association journal.

Coronary artery disease, heart failure and stroke are among the top causes of death from heart disease in the U.S. "While smoking, age and high blood pressure are among the most well-known heart disease risk factors, unidentified risk factors for heart disease remain," according to David P. Kao, M.D., senior author of the study, assistant professor of cardiology and medical director at the Colorado Center for Personalized Medicine at the University of Colorado School of Medicine in Aurora, Colorado.

"The risks and benefits of drinking coffee have been topics of ongoing scientific interest due to the popularity and frequency of consumption worldwide," said Linda Van Horn, Ph.D., R.D., professor and Chief of the Department of Preventive Medicine's Nutrition Division at the Northwestern University Feinberg School of Medicine in Chicago, and member of the American Heart Association's Nutrition Committee. "Studies reporting associations with outcomes remain relatively limited due to inconsistencies in diet assessment and analytical methodologies, as well as inherent problems with self-reported dietary intake."

Kao and colleagues used machine learning through the American Heart Association's Precision Medicine Platform to examine data from the original cohort of the Framingham Heart Study and referenced it against data from both the Atherosclerosis Risk in Communities Study and the Cardiovascular Health Study to help confirm their findings. Each study included at least 10 years of follow-up, and, collectively, the studies provided information on more than 21,000 U.S. adult participants.

To analyze the outcomes of drinking caffeinated coffee, researchers categorized consumption as 0 cups per day, 1 cup per day, 2 cups per day and 3 or more cups per day. Across the three studies, coffee consumption was self-reported, and no standard unit of measure was available.

The analysis revealed:

In all three studies, people who reported drinking one or more cups of caffeinated coffee had an associated decreased long-term heart failure risk.

In the Framingham Heart and Cardiovascular Health studies, the risk of heart failure over the course of decades decreased by 5% to 12% per cup of coffee per day, compared with no coffee consumption.

In the Atherosclerosis Risk in Communities Study, the risk of heart failure did not change between 0 and 1 cup of coffee per day; however, it was about 30% lower in people who drank at least 2 cups a day.

Drinking decaffeinated coffee appeared to have the opposite effect on heart failure risk, significantly increasing the risk of heart failure in the Framingham Heart Study. In the Cardiovascular Health Study, however, there was no increase or decrease in the risk of heart failure associated with drinking decaffeinated coffee. When the researchers examined this further, they found that caffeine consumption from any source appeared to be associated with decreased heart failure risk, and that caffeine was at least part of the reason for the apparent benefit of drinking more coffee.

"The association between caffeine and heart failure risk reduction was surprising. Coffee and caffeine are often considered by the general population to be 'bad' for the heart because people associate them with palpitations, high blood pressure, etc. The consistent relationship between increasing caffeine consumption and decreasing heart failure risk turns that assumption on its head," Kao said. "However, there is not yet enough clear evidence to recommend increasing coffee consumption to decrease risk of heart disease with the same strength and certainty as stopping smoking, losing weight or exercising."

According to the federal dietary guidelines, three to five 8-ounce cups of coffee per day can be part of a healthy diet, but that refers only to plain black coffee. The American Heart Association warns that popular coffee-based drinks such as lattes and macchiatos are often high in calories, added sugar and fat. In addition, research has shown that caffeine can be dangerous if consumed in excess, and the American Academy of Pediatrics recommends that children generally avoid caffeinated beverages.

"While unable to prove causality, it is intriguing that these three studies suggest that drinking coffee is associated with a decreased risk of heart failure and that coffee can be part of a healthy dietary pattern if consumed plain, without added sugar and high fat dairy products such as cream," said Penny M. Kris-Etherton, Ph.D., R.D.N., immediate past chairperson of the American Heart Association's Lifestyle and Cardiometabolic Health Council Leadership Committee, Evan Pugh University Professor of Nutritional Sciences and distinguished professor of nutrition at The Pennsylvania State University, College of Health and Human Development in University Park. "The bottom line: enjoy coffee in moderation as part of an overall heart-healthy dietary pattern that meets recommendations for fruits and vegetables, whole grains, low-fat/non-fat dairy products, and that also is low in sodium, saturated fat and added sugars. Also, it is important to be mindful that caffeine is a stimulant and consuming too much may be problematic - causing jitteriness and sleep problems."

Study limitations that may have affected the results of the analysis included differences in the way coffee drinking was recorded and in the type of coffee consumed. For example, the brewing method (drip, percolated, French press or espresso), the origin of the coffee beans, and whether the coffee was filtered or unfiltered were not specified. There may also have been variability in the unit of measurement for one cup of coffee (i.e., how many ounces per cup). These factors could result in different caffeine levels. In addition, the researchers caution that the original studies referred only to caffeinated or decaffeinated coffee, so these findings may not apply to energy drinks, caffeinated teas, soda and other caffeine-containing foods such as chocolate.

Credit: 
American Heart Association

Limiting warming to 2 C requires emissions reductions 80% above Paris Agreement targets

In 2017, a widely cited study used statistical tools to model how likely the world is to meet the Paris Agreement global temperature targets. The analysis found that, on current trends, the planet had only a 5% chance of staying below 2 degrees Celsius of warming this century -- the international climate treaty's stated goal.

Now, the same authors have used their tools to ask: What emissions cuts would actually be required to meet the goal of 2 C warming, considered a threshold for climate stability and climate-related risks such as excessive heat, drought, extreme weather and sea level rise?

The University of Washington study finds that emissions reductions about 80% more ambitious than those pledged under the Paris Agreement, an average drop in emissions of 1.8% per year rather than 1% per year, would be needed to stay within 2 degrees Celsius. The results were published Feb. 9 in Nature's open-access journal Communications Earth & Environment.
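To make the arithmetic behind these figures concrete, the sketch below (a back-of-the-envelope illustration in Python with an arbitrary starting emissions level of 100, not a reproduction of the study's statistical model) shows why 1.8% per year is an 80% boost over 1% per year, and how the two rates compound over roughly 80 years to 2100:

# Back-of-the-envelope illustration of the quoted decline rates.
# The starting level of 100 is arbitrary; this is not the study's model.
paris_rate = 0.010      # ~1% emissions decline per year (Paris Agreement average)
required_rate = 0.018   # ~1.8% decline per year (the study's estimate)

# "80% more ambitious": the relative increase between the two annual rates.
print(round((required_rate - paris_rate) / paris_rate, 2))   # 0.8, i.e. an 80% boost

def emissions_after(years: int, annual_decline: float, start: float = 100.0) -> float:
    """Emissions remaining after compounding a fixed annual percentage decline."""
    return start * (1.0 - annual_decline) ** years

# Emissions remaining (relative to 100 today) after ~80 years, i.e. around 2100.
print(round(emissions_after(80, paris_rate), 1))     # ~44.8
print(round(emissions_after(80, required_rate), 1))  # ~23.4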

"A number of people have been saying, particularly in the past few years, that the emissions targets need to be more ambitious," said lead author Adrian Raftery, a UW professor of statistics. "We went beyond that to ask in a more precise way: How much more ambitious do they need to be?"

The paper uses the same statistical approach to model the three main drivers of human-produced greenhouse gases: national population, gross domestic product per person and the amount of carbon emitted for each dollar of economic activity, known as carbon intensity. It then uses a statistical model to show the range of likely future outcomes based on data and projections so far.
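The three drivers described here combine multiplicatively, in what is commonly written as a Kaya-identity-style product: emissions equal population times GDP per person times carbon emitted per unit of GDP. A minimal sketch of that decomposition, using placeholder numbers rather than the study's data or projections:

# Illustrative decomposition of emissions into the three drivers named above.
# All numbers below are placeholders, not values from the study.
def total_emissions(population: float,
                    gdp_per_capita: float,
                    carbon_intensity: float) -> float:
    """Emissions = population x GDP per person x carbon emitted per dollar of GDP."""
    return population * gdp_per_capita * carbon_intensity

# Hypothetical example: 8 billion people, $12,000 GDP per person,
# 0.35 kg of CO2 emitted per dollar of economic activity.
e = total_emissions(8e9, 12_000, 0.35)
print(f"{e / 1e12:.1f} trillion kg of CO2 per year")   # 33.6 (roughly the right order of magnitude)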

Even with updated methods and five more years of data, now spanning 1960 through 2015, the conclusion remains similar to the previous study: Meeting Paris Agreement targets would give only a 5% probability of staying below 2 degrees Celsius warming.

Assuming that climate policies won't target population growth or economic growth, the authors then ask what change in the "carbon intensity" measure would be needed to meet the 2 degrees warming goal.

Increasing the overall targets to cut carbon emissions by an average of 1.8% annually, and continuing on that path after the Paris Agreement expires in 2030, would give the planet a 50% chance of staying below 2 degrees warming by 2100.

"Achieving the Paris Agreement's temperature goals is something we're not on target to do now, but it wouldn't take that much extra to do it," said first author Peiran Liu, who did the research as part of his doctorate at the UW.

The paper also looks at what this overall plan would mean for individual countries' Paris Agreement commitments. Nations set their own emissions-reduction pledges under the agreement. The United States pledged a 1% drop in carbon emissions per year until 2026, which is slightly more ambitious than the global average. China pledged to reduce its carbon intensity, the carbon emissions per unit of economic activity, by 60% from its 2005 level by 2030.

"Globally, the temperature goal requires an 80% boost in the annual rate of emissions decline compared to the Paris Agreement, but if a country has finished most of its promised mitigation measures, then the extra decline required now will be smaller," Liu said.

Assuming that each country's share of the work remains unchanged, the U.S. would need to increase its goal by 38% to do its part toward actually achieving the 2 degrees goal. China's more ambitious and fairly successful plan would need only a 7% boost, and the United Kingdom, which has already made substantial progress, would need a 17% increase. On the other hand, countries that pledged cuts but whose emissions have risen, like South Korea and Brazil, would need a bigger boost now to make up for lost time.

The authors also suggest that countries increase their accountability by reviewing progress annually, rather than on the five-year, 10-year or longer timescales included in many existing climate plans.

"To some extent, the discourse around climate has been: 'We have to completely change our lifestyles and everything,'" Raftery said. "The idea from our work is that actually, what's required is not easy, but it's quantifiable. Reducing global emissions by 1.8% per year is a goal that's not astronomical."

From 2011 to 2015, Raftery says, the U.S. did see a drop in emissions, due to efficiencies in industries ranging from lighting to transportation as well as regulation. The pandemic-related economic changes will be short-lived, he predicts, but the creativity and flexibility the pandemic has required may usher in a lasting drop in emissions.

"If you say, 'Everything's a disaster and we need to radically overhaul society,' there's a feeling of hopelessness," Raftery said. "But if we say, 'We need to reduce emissions by 1.8% a year,' that's a different mindset."

Credit: 
University of Washington

New factor in the carbon cycle of the Southern Ocean identified

image: The study is based on an expedition by the British research vessel RRS James Clark Ross, shown here before setting off from the Falkland Islands.

Image: 
Thomas Browning/GEOMAR

The term plankton refers to mostly very small organisms that drift with the currents in the seas and oceans. Despite their small size, they play an important role for our planet because of their immense quantity. Photosynthesizing plankton, known as phytoplankton, for example, produce half of the oxygen in the atmosphere while binding huge amounts of carbon dioxide (CO2). Since the Southern Ocean around Antarctica is very rich in nutrients, phytoplankton can thrive there. It is therefore a key region for controlling atmospheric CO2 concentrations.

As other nutrients are abundant there, scientists have so far assumed that the amount of the available "micronutrient" iron determines how well phytoplankton thrive in the Southern Ocean. Researchers from GEOMAR Helmholtz Centre for Ocean Research Kiel and the UK's National Oceanography Centre have now published a study in the international journal Nature Communications showing for the first time that in some areas of the Southern Ocean, manganese, not iron, is the limiting factor for phytoplankton growth.

"This is an important finding for our ability to assess future changes, but also to better understand phytoplankton in the past," says Dr. Thomas J. Browning of GEOMAR, lead author of the study.

Earlier research suggests that greater phytoplankton growth in the Southern Ocean was a key contributor to the onset of the ice ages over the past 2.58 million years. More phytoplankton was able to bind more CO2, which was removed from the atmosphere. As a result, average global temperatures further declined. "So it's critical that we understand exactly what processes regulate phytoplankton growth in the Southern Ocean," Dr. Browning points out.

Indeed, along with iron, manganese is another essential "micronutrient" required by every photosynthetic organism, from algae to oak trees. In most of the ocean, however, enough manganese is available that it does not limit phytoplankton growth.

Measurements in remote regions of the Southern Ocean, on the other hand, have shown much lower manganese concentrations. During an expedition on the British research vessel RRS James Clark Ross through the Drake Passage between Tierra del Fuego and the Antarctic Peninsula in November 2018, Dr. Browning and his team took water samples. While still on board, they used these water samples and the phytoplankton they contained to conduct experiments on which nutrients affect growth and which do not.

"In doing so, we were able to demonstrate for the first time a manganese limitation for phytoplankton growth in the center of Drake Passage. Closer to shore, iron was the limiting factor, as expected," Dr. Browning reports.

After the expedition, the team used additional model calculations to assess the implications of the experimental results. Among other things, they found that manganese limitation may have been even more widespread during the ice ages than it is today. "This would make this previously unaccounted for factor a central part of understanding the ice ages," says Dr. Browning.

However, because this is the first record in a specific region of the Southern Ocean, further research is needed to better understand the geographic extent and timing of manganese limitation in the Southern Ocean. "We also still need to study what factors control manganese concentrations in seawater and how phytoplankton adapt to manganese scarcity. All of this is critical to building more accurate models of how the Earth system works," Thomas Browning concludes.

Credit: 
Helmholtz Centre for Ocean Research Kiel (GEOMAR)

Radiation vulnerability

Exposure to radiation can wreak indiscriminate havoc on cells, tissues, and organs. Curiously, however, some tissues are more vulnerable to radiation damage than others.

Scientists have known that these differences involve the protein p53, a well-studied tumor-suppressor protein that initiates a cell's auto-destruct programs. Yet levels of this sentinel protein are often similar in tissues with vastly different sensitivities to radiation, raising the question: How is p53 involved?

A new study by researchers in the Blavatnik Institute at Harvard Medical School, Massachusetts General Hospital, and the Novartis Institutes for BioMedical Research now sheds light on this mystery.

Reporting in Nature Communications on Feb. 9, they describe how cellular survival after radiation exposure depends on behavior of p53 over time. In vulnerable tissues, p53 levels go up and remain high, leading to cell death. In tissues that tend to survive radiation damage, p53 levels oscillate up and down.

"Dynamics matter. How things change over time matters," said co-corresponding author Galit Lahav, the Novartis Professor of Systems Biology at HMS. "Our ability to understand biology is limited when we only look at snapshots. By seeing how things evolve temporally, we gain much richer information that can be critical for dissecting diseases and creating new therapies."

Notably, the findings suggest new strategies to improve combination therapies for cancer. The team found that certain types of tumors in mice were more vulnerable to radiation after being given a drug that blocks p53 levels from declining and oscillating. Tumors treated this way shrank significantly more than those given either radiation alone or the drug alone.

"We were able to connect differences in temporal p53 expression with radiation response, and these insights allowed us to 'coax' radioresistant tumors into more radiosensitive ones," said co-corresponding author Ralph Weissleder, the Thrall Family Professor of Radiology and HMS professor of systems biology at Mass General. "This is an incredibly exciting study showing that basic science done in rigorous quantitative fashion can lead to new important clinical discoveries."

When cells are exposed to ionizing radiation, high-energy atomic particles haphazardly assault the delicate molecular machinery inside. If this damage cannot be repaired, particularly to DNA, cells will self-destruct to protect the surrounding tissue and organism as a whole.

This act of cellular seppuku is regulated by p53, which acts as a sentinel for genomic damage. The protein is also a famous tumor suppressor--around half of human cancers have p53 mutations that render it defective or suboptimal. Previously, Lahav and colleagues revealed the dynamic behavior of p53 over time and how it affects cancer drug efficacy, cell fate, and more.

Stronger together

In the current study, Lahav, Weissleder, and their team looked at tissues in mice that have very different sensitivities to ionizing radiation yet are known to express comparable levels of p53--the spleen and thymus, which are highly vulnerable, and the large and small intestines, which are more radioresistant.

Under normal conditions, cells express little to no p53. After radiation exposure, all four tissues expressed elevated p53 along with other markers of DNA and cellular damage as expected. But quantitative imaging analyses revealed that p53 in the intestines peaked and then declined a few hours after irradiation. By contrast, p53 in the spleen and thymus remained high over the same time period.

To probe the effects of p53 behavior, the team used an experimental anti-cancer drug to inhibit MDM2, a protein that degrades p53. They found that by blocking MDM2 activity after radiation exposure, p53 could be forced to remain elevated in cells where it would otherwise decline. In the intestine, which is normally more resistant to radiation, the addition of the drug reduced cell viability and survival.

Some cancers can become resistant to radiation therapy. So, the team explored whether manipulating p53 dynamics could increase tumor vulnerability, focusing on human colon cancer cell lines with unmutated, functional p53.

In mice with transplanted human colon cancer tumors, the team observed significant tumor shrinkage after a single dose of an MDM2 inhibitor given shortly after irradiation. After around six weeks, tumors treated with radiation and the drug together were five times smaller than those treated with the drug alone and half the size of those treated with radiation alone.

"By irradiating first, we force the cancer cells to activate p53, and by adding MDM2 inhibitor on top of that, we can keep p53 active longer," Lahav said. "This combination has a much stronger effect than either alone."

The findings support the importance of understanding the dynamics of p53 and how to manipulate it to treat cancer.

Combination therapies using MDM2 inhibitors are currently being evaluated in clinical trials, the authors note, but these efforts are not designed to examine the underlying mechanisms and timing of the treatments. Further studies are needed to better understand p53 dynamics in cancer, which can inform how to better combine and time therapies to treat patients with cancer.

In addition, although the researchers identified differences in p53 dynamics across tissues after radiation exposure, the biological pathways that lead to these differences remain a question for future study.

"For a lab studying p53, cancer is always a major motivation. Our goal is to acquire knowledge to help develop better and more efficient therapies," Lahav said. "Understanding how p53 behaves over time in different conditions is a critical piece of the puzzle."

Credit: 
Harvard Medical School