Tech

Impacts of coronavirus lockdowns: New study collects data on pollutants in the atmosphere

image: Schematic of major emission sectors and primary emissions, meteorological and chemical processes, impacts to air quality and climate, and measurement and analysis tools used to analyze the effects of emissions changes.

Image copyright: Forschungszentrum Jülich

One consequence of the coronavirus pandemic has been global restrictions on mobility. This, in turn, has had an effect on pollution levels in the atmosphere. Researchers from across the world are using this unique opportunity to take measurements, collect data, and publish studies. An international team led by Forschungszentrum Jülich's Institute of Climate and Energy Research - Troposphere has now published a comprehensive review providing an overview of results up to September 2020. The study also has its own dedicated website, where additional measurement data can be added to supplement and refine existing research results. At the same time, this collection of data allows scientifically substantiated predictions to be made about the pollution levels of future mobility scenarios.

The meta-analysis was coordinated by Prof. Astrid Kiendler-Scharr, director at Jülich's Institute of Climate and Energy Research - Troposphere. The analysis covers the measurement data of around 200 studies from the first seven months following the onset of the pandemic. It focuses on the following air pollutants: nitrogen dioxide, particulate matter, ozone, ammonia, sulfur dioxide, black carbon, volatile organic compounds (VOCs), and carbon monoxide. A third of the studies take into account the prevailing meteorological situation when calculating the influence of lockdowns on the air composition. The Government Stringency Index (SI) - summarizing the severity of local shutdown measures in a number that can be compared at international level - acted as a reference value.

A key finding of the analysis is that lockdowns, which have the sole aim of slowing down the infection rate, are also reducing the global pollution of the atmosphere with nitrogen dioxide and particulate matter - the higher the SI, the greater this impact. However, this only applies to pollutants that primarily have an anthropogenic origin, i.e. are directly emitted by humans, especially in the field of mobility. In contrast, ozone levels increased. This increase was a result of atmospheric chemical processes caused by reduced nitrogen oxide levels in the air.

The study also highlights current gaps in the data collection and the need for further research. The authors are therefore of the opinion that the period of analysis should be extended to cover the entire year of 2020. The scientists place a particular emphasis on hydrocarbons, which have so far only been examined sporadically in studies, and on extended analyses looking at the impact of emission changes on the climate.

An important addition to the meta-analysis is a database that can be accessed via a website (COVID-19 Air Quality Data Collection). It contains all data from the study on pollution levels, including data on pollution levels in individual countries. Researchers can also find a list of publications to date and thus obtain a quick overview of previous studies.

The website also invites scientists to contribute data from their new studies and thus become part of the reference system. It therefore acts as a "living version", with the presentation of collected results being constantly refined. There are also plans to extend the data collection to measurement results and analyses of other pollutants that are not part of the current canon, for example hydrocarbons.

These data could also form the basis for better assessments of the impacts of future scenarios on atmospheric chemistry, such as the considerable long-term reduction in pollution levels expected from a comprehensive transition to electromobility.

Credit: 
Forschungszentrum Juelich

Suppression of COVID-19 waves reflects time-dependent social activity, not herd immunity

image: Scientists modeling the spread of COVID-19 showed that a temporary state of immunity arises due to individual differences in social behaviors. This "transient collective immunity"-- referring to when the susceptible or more social groups collectively have been infected--gets destroyed as people modify their social behaviors over time. For example, someone who isolated in the early days of the epidemic may at some point renew their social networks, meeting with small groups or large crowds.

Image: Brookhaven National Laboratory

UPTON, NY--Scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory and the University of Illinois Urbana-Champaign (UIUC) have developed a new mathematical model for predicting how COVID-19 spreads. This model not only accounts for individuals' varying biological susceptibility to infection but also their levels of social activity, which naturally change over time. Using their model, the team showed that a temporary state of collective immunity--what they coined "transient collective immunity"--emerged during early, fast-paced stages of the epidemic. However, subsequent "waves," or surges in the number of cases, continued to appear because of changing social behaviors. Their results are published in the Proceedings of the National Academy of Sciences.

The COVID-19 epidemic reached the United States in early 2020, rapidly spreading across several states by March. To mitigate disease spread, states issued stay-at-home orders, closed schools and businesses, and put in place mask mandates. In major cities like New York City (NYC) and Chicago, the first wave ended in June. In the winter, a second wave broke out in both cities. Understanding why initial waves end and subsequent waves begin is key to being able to predict future epidemic dynamics.

Here's where modeling can help. But classical epidemiological models were developed almost 100 years ago. While these models are mathematically robust, they don't perfectly capture reality. One of their flaws is failing to account for the structure of person-to-person contact networks, which serve as channels for the spread of infectious diseases.

"Classical epidemiological models tend to ignore the fact that a population is heterogenous, or different, on multiple levels, including physiologically and socially," said Alexei Tkachenko, a physicist in the Theory and Computation Group at the Center for Functional Nanomaterials (CFN), a DOE Office of Science User Facility at Brookhaven Lab. "We don't all have the same susceptibility to infection because of factors such as age, preexisting health conditions, and genetics. Similarly, we don't have the same level of activity in our social lives. We differ in the number of close contacts we have and in how often we interact with them throughout different seasons. Population heterogeneity--these individual differences in biological and social susceptibility--is particularly important because it lowers the herd immunity threshold."

The herd immunity threshold is the percentage of the population that must become immune in order for an epidemic to end.

"Herd immunity is a controversial topic," said Sergei Maslov, a CFN user and professor and Bliss Faculty Scholar at UIUC, with faculty appointments in the Departments of Physics and Bioengineering and at the Carl R. Woese Institute for Genomic Biology. "Since early on in the COVID-19 pandemic, there have been suggestions of reaching herd immunity quickly, thereby ending local transmission of the virus. However, our study shows that apparent collective immunity reached in this way would not last."

"What was missing prior to this work was that people's social activity waxes and wanes, especially due to lockdowns or other mitigations," added Nigel Goldenfeld, Swanlund Professor of Physics and director of the NASA Astrobiology Institute for Universal Biology at UIUC. "So, a wave of the epidemic can seem to die away due to mitigation measures when the susceptible or more social groups collectively have been infected--what we call transient collective immunity. But once these measures are relaxed and people's social networks are renewed, another wave can start, as we've seen with states and countries opening up too soon, thinking the worst was behind them."

Ahmed Elbanna, a Donald Biggar Willett Faculty Fellow and professor of civil and environmental engineering at UIUC, noted transient collective immunity has profound implications for public policy.

"Mitigation measures, such as mask wearing and avoiding large gatherings, should continue until the true herd immunity threshold is achieved through vaccination," said Elbanna. "We can't outsmart this virus by forcing our way to herd immunity through widespread infection, because the number of people infected and hospitalized, and the number who would die, would be too high."

The nuts and bolts of predictive modeling

Over the past year, the Brookhaven-UIUC team has been carrying out various projects related to a broader COVID-19 modeling effort. Previously, they modeled how the epidemic would spread through Illinois and the UIUC campus, and how mitigation efforts would impact that spread. Last May, they began this project to calculate the effect of population heterogeneity on the spread of COVID-19.

Several approaches already exist for modeling the effect of heterogeneity on epidemic dynamics, but they typically assume heterogeneity remains constant over time. So, for example, if you're not socially active today, you won't be socially active tomorrow or in the weeks and months ahead.

"Basic epidemiological models only have one characteristic time, called the generation interval or incubation period," said Tkachenko. "It refers to the time when you can infect another person after becoming infected yourself. For COVID-19, it's roughly five days. But that's only one timescale. There are other timescales over which people change their social behavior."

In this work, the team incorporated time variations in individual social activity into existing epidemiological models. While a complicated, multidimensional model is needed to describe each group of people with different susceptibilities to disease, they compressed this model into only three equations, developing a single parameter to capture biological and social sources of heterogeneity.

"We call this parameter the immunity factor, which tells you how much the reproduction number drops as susceptible individuals are removed from the population," explained Maslov.

The reproduction number indicates how transmissible an infectious disease is. Specifically, the quantity refers to how many people one infected person will in turn infect. To estimate the social contribution to the immunity factor, the team leveraged previous studies in which scientists actively monitored people's social behavior. They also looked at actual epidemic dynamics, determining the immunity factor most consistent with data on COVID-19-related hospitalizations, intensive care unit admissions, and daily deaths in NYC and Chicago. For example, when the susceptible number dropped by 10 percent during the early, fast-paced epidemic in NYC and Chicago, the reproduction number fell by 40 to 50 percent--corresponding to an estimated immunity factor of four to five.

"That's a fairly large immunity factor, but it's not representative of lasting herd immunity," said Tkachenko. "On a longer timescale, we estimate a much lower immunity factor of about two. The fact that a single wave stops doesn't mean you're safe. It can come back."
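The quoted numbers can be reproduced with a short calculation. Below is a minimal sketch in Python assuming a power-law suppression of the reproduction number, R_eff = R0 * s^lambda, where s is the susceptible fraction and lambda is the immunity factor; the functional form and the R0 value are illustrative assumptions, not taken directly from the paper:

```python
def r_eff(r0, s_frac, immunity_factor):
    """Effective reproduction number after a fraction (1 - s_frac) of the
    population has left the susceptible pool, under the assumed power-law
    form R_eff = R0 * s**lambda."""
    return r0 * s_frac ** immunity_factor

def herd_immunity_threshold(r0, immunity_factor=1.0):
    """Immune fraction at which R_eff falls to 1. With immunity_factor=1
    this recovers the classical homogeneous threshold 1 - 1/R0."""
    return 1.0 - (1.0 / r0) ** (1.0 / immunity_factor)

r0 = 3.0  # assumed basic reproduction number, for illustration only
for lam in (4.0, 5.0):
    drop = 1.0 - r_eff(r0, 0.9, lam) / r0  # 10% susceptible depletion
    print(f"immunity factor {lam:.0f}: R_eff drops by {drop:.0%}")
# A 10% depletion with an immunity factor of 4-5 cuts R_eff by roughly
# 34-41%, in line with the 40-50% drop inferred for NYC and Chicago.

print(f"classical threshold:        {herd_immunity_threshold(r0):.0%}")
print(f"heterogeneous (lambda = 2): {herd_immunity_threshold(r0, 2.0):.0%}")
# Heterogeneity lowers the threshold, as Tkachenko notes above.
```

With the long-term immunity factor of about two that the team estimates, the threshold under these assumptions falls from 67% to about 42%, though the transient factor of four to five overstates the protection that actually persists.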

This temporary state of immunity arises because population heterogeneity is not permanent; people change their social behavior over time. For instance, individuals who self-isolated during the first wave--staying home, not having visitors over, ordering groceries online--subsequently start relaxing their behaviors. Any increase in social activity means additional exposure risk.

"The epidemic has been with us a year now," said Maslov. "It's important to understand why it has been here for such a long time. The gradual change in social behavior among individuals partially explains why plateaus and subsequent waves are occurring. For example, both cities avoided a summer wave but experienced a winter wave. We attribute the winter wave to two factors: the change in season and the waning of transient collective immunity."

With vaccination becoming more widespread, the team hopes we will be spared from another wave. In their most recent work, they are studying epidemic dynamics in more detail. For example, they are feeding statistics from "superspreader" events--gatherings where a single infected person causes a large outbreak among attendees--into the model. They are also applying their model to different regions across the country to explain overall epidemic dynamics from the end of lockdown to early March 2021.

Credit: 
DOE/Brookhaven National Laboratory

Why do some alloys become stronger at room temperature?

An alloy is typically a metal that has a few per cent of at least one other element added. Some aluminium alloys have a seemingly strange property.

"We've known that aluminium alloys can become stronger by being stored at room temperature - that's not new information," says Adrian Lervik, a physicist at the Norwegian University of Science and Technology (NTNU).

The German metallurgist Alfred Wilm discovered this property way back in 1906. But why does it happen? So far the phenomenon has been poorly understood, but now Lervik and his colleagues from NTNU and SINTEF, the largest independent research institute in Scandinavia, have tackled that question.

Lervik recently completed his doctorate at NTNU's Department of Physics. His work explains an important part of this mystery. But first a little background, because Lervik has dug into some prehistory as well.

"At the end of the 1800s, Wilm worked to try to increase the strength of aluminium, a light metal that had recently become available. He melted and cast a number of different alloys and tested out various cooling rates common in steel production in order to achieve the best possible strength," says Lervik.

One weekend when the weather was good, Wilm decided to take a break from his experiments and go sailing on the Havel River.

"He returned to the lab on Monday and continued to run tensile tests of an alloy consisting of aluminium, copper and magnesium that he had started the week before. He discovered that the alloy's strength had increased considerably over the weekend."

This alloy had simply stayed at room temperature during that time. Time had done the job that all sorts of other cooling methods couldn't do.

Today this phenomenon is called natural ageing.

The American metallurgist Paul Merica suggested in 1919 that the phenomenon must be due to small particles of the various elements that form a kind of precipitation in the alloy. But at that time there were no experimental methods that could prove this.

"Only towards the end of the 1930s could the method of X-ray diffraction prove that the alloying elements accumulated in small clusters on a nanoscale," says Lervik.

Pure aluminium consists of lots of crystals. A crystal can be seen as a block of grid sheets, where an atom sits in each square of the grid. Strength is determined by the sheets' resistance to sliding over each other.

In an alloy, a small per cent of the squares are occupied by other elements, making it a little harder for the sheets to slide across each other and resulting in increased strength.

As Lervik explains it, "An aggregate is like a small drop of paint in the grid block. The alloying elements accumulate and occupy a few dozen neighbouring squares that extend over several sheets. Together with the aluminium, they form a pattern. These drops have a different atomic structure than the aluminium and make dislocation sliding more difficult for the sheets in the grid block."

Aggregates of alloying elements are known as "clusters". In technical language they are called Guinier-Preston (GP) zones, after the two scientists who first described them. In the 1960s it became possible to see GP zones through an electron microscope for the first time, but it has taken until now to view them at the single-atom level.

"In recent years, numerous scientists have explored the composition of aggregates, but little work has been done to understand their atomic structure. Instead, many studies have focused on optimizing alloys by experimenting with age hardening at different temperatures and for different lengths of time," says Lervik.

Age hardening and creating strong metal mixtures are clearly very important in an industrial context. But very few researchers or people in industry have cared much about what the clusters actually consist of. They were simply too small to characterize.

Lervik and his colleagues thought differently.

"With our modern experimental methods, we managed to take atomic-level pictures of the clusters with the transmission electron microscope in Trondheim for the first time in 2018," says Lervik.

The research team also determined the clusters' chemical composition using the instrument for atomic probe tomography that was recently installed at NTNU. The infrastructure programme at the Research Council of Norway made this discovery possible. This investment has already contributed to new fundamental insights into metals.

The researchers studied alloys of aluminium, zinc and magnesium, known as 7xxx series Al alloys. These light metal alloys are becoming increasingly important in the automotive and aerospace industries.

"We found clusters with a radius of 1.9 nanometres buried in the aluminium. Although numerous, they are difficult to observe under a microscope. We only managed to identify the atomic structure under special experimental conditions," says Lervik.

This is part of the reason why no one has done this before. Performing the experiments is tricky and requires advanced modern experimental equipment.

"We experienced just how tricky this was several times. Even though we managed to take a picture of the clusters and could extract some information about their composition, it took several years before we understood enough to be able to describe the atomic structure," says Lervik.

So what exactly makes this work so special? In the past, people have assumed that aggregates consist of the alloying elements, aluminium and perhaps vacancies (empty squares) that are more or less randomly arranged.

"We found that we can describe all the clusters we've observed based on a unique geometric spatial figure called a 'truncated cube octahedron,'" says Lervik.

At this point, anyone without a background in physics or chemistry may want to skim the next sections or jump straight to the heading "Important for understanding heat treatment."

To understand the illustration above, we must first accept that an aluminium crystal (the square block) can be visualized as a stack of cubes, each with atoms on the 8 corners and the 6 faces.

This structure is a face-centred cubic atomic lattice. The geometric figure is like a cube with an outer shell formed from the surrounding cubes. We describe it as three shells around the centre cube: one for the faces, one for the corners, and the outermost shell. These shells consist of 6 zinc, 8 magnesium and 24 zinc atoms, respectively.

The middle of the body (cube) may contain an extra atom - an 'interstitial' - which in this illustration can be described as being located between the spaces (squares) of aluminium.

This single figure further explains all larger cluster units by their ability to connect and expand in three defined directions. The picture also explains observations previously reported by others. These cluster units are what contribute to increased strength during age hardening.
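The cube-stacking picture can be made concrete with a few lines of code. The sketch below simply enumerates the coordination shells of a plain face-centred cubic lattice, the structure of pure aluminium; note that the 6/8/24 occupancies quoted above describe the cluster's own truncated-cube-octahedron geometry, which differs from the unperturbed fcc shells counted here:

```python
from collections import Counter
from itertools import product

def fcc_shells(extent=3, n_shells=3):
    """Count atoms in the first few coordination shells of an fcc lattice.
    In units of half the cube edge, fcc sites are the integer points whose
    coordinate sum is even; shells are grouped by squared distance."""
    counts = Counter()
    for x, y, z in product(range(-extent, extent + 1), repeat=3):
        if (x, y, z) == (0, 0, 0) or (x + y + z) % 2 != 0:
            continue  # skip the centre atom and non-lattice points
        counts[x * x + y * y + z * z] += 1
    return [counts[d2] for d2 in sorted(counts)[:n_shells]]

print(fcc_shells())  # → [12, 6, 24]
```

The familiar fcc shell counts of 12 nearest, 6 second and 24 third neighbours fall out directly; the cluster figure described above redistributes some of these positions among zinc and magnesium.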

Important for understanding heat treatment

"Why is this cool? It's cool because natural ageing isn't usually the last step in processing an alloy before it's ready to be used," says Lervik.

These alloys also go through a final heat treatment at higher temperatures (130-200°C) to form larger precipitates with defined crystal structures. These precipitates bind the atomic planes (sheets) even more tightly together and strengthen the material considerably.

"We believe that understanding the atomic structure of the clusters formed by natural ageing is essential to further understand the process of forming the precipitates that determine so much of the material's properties. Do the precipitates form on the clusters or do the clusters transform into precipitates during heat treatment? How can this be optimized and utilized? Our further work will try to answer these questions," says Lervik.

Credit: 
Norwegian University of Science and Technology

Protein found to control drivers of normal growth and cancer

Researchers have found a long-sought enzyme that prevents cancer by enabling the breakdown of proteins that drive cell growth, and that causes cancer when disabled.

Publishing online in Nature on April 14, the new study revolves around the ability of each human cell to divide in two, with this process repeating itself until a single cell (the fertilized egg) becomes a body with trillions of cells. For each division, a cell must follow certain steps, most of which are promoted by proteins called cyclins.

Led by researchers at NYU Grossman School of Medicine, the work revealed that an enzyme called AMBRA1 labels a key class of cyclins for destruction by cellular machines that break down proteins. The work finds that the enzyme's control of cyclins is essential for proper cell growth during embryonic development, and that its malfunction causes lethal cell overgrowth. Moreover, the study further suggests that an existing drug class may be able to reverse such defects in the future.

As in a developing fetus, restraints on cell division are central to the prevention of abnormal, aggressive growth seen in cancers, and the new study finds that cells have evolved to use AMBRA1 to defend against it.

"Our study clarifies basic features of human cells, provides insights into cancer biology, and opens new research avenues into potential treatments," says corresponding study author Michele Pagano, MD, chair of the Department of Biochemistry and Molecular Pharmacology at NYU Langone Health, and an investigator with the Howard Hughes Medical Institute.

New Tumor Suppressor

The current study addresses the three D-type cyclins, the subset that must link up with enzymes called cyclin-dependent kinases (CDKs), specifically CDK4 and CDK6, if cells are to divide. The authors found that AMBRA1, as a ligase, attaches molecular tags to all three D-type cyclins, labeling them for destruction. Previously proposed mechanisms for how D-type cyclins are eliminated by the cell could not be reproduced by the scientific community. Thus, prior to the new study, a central regulator of D-type cyclins had remained elusive for a quarter of a century, Pagano says.

The new work also revealed the role of AMBRA1 in development. Mice lacking the AMBRA1 gene, which codes for the AMBRA1 enzyme, developed uncontrolled, lethal tissue growth that distorted the developing brain and spinal cord. The researchers also found, for the first time, that treating pregnant mice carrying embryos without the AMBRA1 gene with a CDK4/6 inhibitor reduced these neuronal abnormalities.

In terms of cancer, the authors analyzed patient databases to conclude that those with lower-than-normal expression of AMBRA1 were less likely to survive diffuse large B-cell lymphoma, the most common form of non-Hodgkin lymphoma in the United States. The causes of lower expression of AMBRA1 may include random changes that delete the gene or make its encoded instructions harder to read.

To confirm the role of AMBRA1 as a tumor suppressor, the researchers monitored cancer cell growth in mouse models of diffuse large B-cell lymphoma, in collaboration with study author Luca Busino, PhD, at the University of Pennsylvania. When human B-cell lymphoma cells were transplanted into mice, for instance, tumors without the AMBRA1 gene grew up to three times faster than those with the gene. While the NYU Langone-led study looked at diffuse large B-cell lymphoma, two other studies led by Stanford University and the Danish Cancer Society Research Center, published in the same issue of Nature, found missing or disabled AMBRA1 to be a key factor in lung cancer.

Further, D-type cyclins are known to assemble with CDK4 and CDK6 into enzymes that encourage both normal and abnormal cell growth. Drugs that inhibit CDK4 and CDK6 have been FDA-approved in recent years as cancer therapies, but some patients have a weaker response to the drugs. Providing insight into this problem, the current team found that lymphomas lacking AMBRA1 are less sensitive to CDK4/6 inhibitors. When the AMBRA1 gene is missing, levels of D-type cyclins become high enough to form complexes with another CDK (CDK2), which, due to its structure, cannot be inactivated by CDK4/6 inhibitors.

"This makes AMBRA1 a potential marker for the selection of patients best suited for CDK4/6 inhibitor therapy," says first author Daniele Simoneschi, PhD, a senior research coordinator in the Department of Biochemistry and Molecular Pharmacology at NYU Langone Health. As a next step, he says the team plans to study the effect of combining CDK4/6 inhibitors with CDK2 inhibitors in tumors with low AMBRA1, as well as in those with mutations in D-type cyclins that make them insensitive to AMBRA1.

Credit: 
NYU Langone Health / NYU Grossman School of Medicine

Telling sunbathers what they don't want to hear: Tanning is bad

COLUMBUS, Ohio - Most young women already know that tanning is dangerous and sunbathe anyway, so a campaign informing them of the risk should take into account their potential resistance to the message, according to a new study.

Word choice and targeting a specific audience are part of messaging strategy, but there is also psychology at play, researchers say - especially when the message is telling people something they don't really want to hear.

"A lot of thought goes into the content, but possibly less thought goes into the style," said Hillary Shulman, senior author of the study and an assistant professor of communication at The Ohio State University.

"That's the argument we're trying to put out there for people to consider."

In the study, participants who read a message that combined the most lay-friendly phrasing with references to the specific audience - young women in college - were the most likely to acknowledge the severity of tanning-related risk for skin cancer and say they would curb their own sunbathing behavior.

And that's because the researchers considered what the young women already thought about soaking in the sun.

"While with communication we're fundamentally interested in the messages themselves, we look at it through the lens of what's going on in people's heads. And if we understand what's going on in people's heads, we can design messages more effectively to match beliefs and attitudes," said Olivia Bullock, a graduate student in communication at Ohio State and lead author of the paper.

"How can we get people to do the things that they know they should do but don't always do? Tanning is a perfect example of that. Most women know that tanning and sun exposure is not great for you, but we do it anyway, for a number of reasons."

The research is published online in the journal Communication Studies.

This study was both an application and assessment of the framing theory of persuasive communication, which suggests that the experience of consuming a persuasive message involves a pathway toward being convinced or rejecting the notion of a change in thinking or behavior. The pathway has three components: availability, or a fixed belief about the topic; accessibility, considering the message easy to process because it draws upon existing knowledge; and applicability, or perceiving the message as relevant.

"Framing theory says we can target specific attitudes and make them more likely to be drawn upon. And if they're more likely to be drawn upon, then they're more likely to be used in making a decision or forming a judgment," Bullock said.

In health campaigns, there can be a tendency to use technical terms to lend credibility or "heft" to a message, Shulman said.

But that can backfire.

"Some people might strategically try to use medical terminology in recognition of the nuance and precision that certain words offer, which for people who are experts is quite important," Shulman said. "But people have a whole bunch of barriers and resistance in their brains to discount information that they don't want to hear, and one of those is, 'This is hard, and it's not really talking to me.'

"If you can figure out pretty simple communication strategies that help circumvent those defenses, you might get people to engage, even in the moment, a little more readily."

Bullock composed four messages of identical length with different combinations of language difficulty - related to the accessibility part of the process - and message relevance, the nod to applicability. Relevance was based on the affected parties cited in the messages - either female undergraduates at Ohio State or older adults living in the South.

The low-language-difficulty messages included the statement "Research shows that Ohio State women who tan raise their risk of deadly skin cancer by 75%." The equivalent line in the high-language-difficulty messages: "Research illustrates that Ohio State women who tan multiply their probability of acquiring melanoma by 75%."

Each of the 529 college students participating in the research was randomly assigned to read a single message. Participants then completed a survey assessing how accessible and relevant they thought the message was and gauging their beliefs and future plans related to tanning.

The young women who read the message with low language difficulty and high relevance were more likely to judge skin cancer as a severe tanning-associated risk, consider themselves susceptible to cancer, and indicate a behavioral intention not to soak up the rays in the future.

The study also demonstrated, through an unexpected result, just how important the psychology of message consumption can be: when the language was easy to understand and the college students perceived the message as relevant to them, participants actually found the information harder to process.

"When we told people something scary could happen to them, they wanted to say, actually, this message isn't about me at all," Bullock said. "You want to discredit the threatening information to make yourself feel better about your life."

Linking message design to psychological states could apply to a variety of public risk topics about which audience members might be inclined to put up a barrier to what they're hearing, the researchers said - for example, climate change or politics.

The key, Shulman said, is to put as much effort into how - and not just what - information is presented.

Credit: 
Ohio State University

Channel migration plays leading role in river network evolution, study finds

image: The three panels show model results of a landscape and river network evolving over 10 million years. The first panel represents 0 to 5 million years, the second panel shows 5 to 10 million years and the third shows 10 to 15 million years.

Image: Graphic courtesy Jeffrey Kwang

A new study by former University of Illinois Urbana-Champaign graduate student Jeffrey Kwang, now at the University of Massachusetts, Amherst; Abigail Langston, of Kansas State University; and Illinois civil and environmental engineering professor Gary Parker takes a closer look at the vertical and lateral – or depth and width – components of river erosion and drainage patterns. The study is published in the Proceedings of the National Academy of Sciences.

“A tree’s dendritic structure exists to provide fresh ends for leaves to grow and collect as much light as possible,” Parker said. “If you chop off some branches, they will regrow in a dendritic pattern. In our study, we wanted to find out if river systems behave similarly when their paths are altered, even though existing numerical models cannot replicate this.”

In a previous study conducted at Illinois, Parker and Kwang experimented with sandbox landscape models with flowing water and meandering, S-shaped streams imprinted onto them. With the water left running, the systems eventually rerouted the S-shaped channel into the ever-familiar dendritic pattern – something the numerical models do not predict.

“That told us there was some key element missing in the numerical models,” Kwang said. “One thing I observed was that the channels in the model sandbox streams were migrating laterally, or across the landscape, to reorganize the drainage network. This lateral migration has been observed in other researchers’ experiments but is not captured by the numerical models. I thought that this has to be where the numerical and physical models differ.”

Soon after Parker and Kwang’s realization, Kwang and Langston met at the Summer Institute on Earth-Surface Dynamics at the St. Anthony Falls Laboratory at the University of Minnesota. They discovered a mutual interest in lateral stream erosion.

“Working through the existing river drainage models, Jeffrey and I found that the initial conditions in landscape evolution models have been overlooked,” Langston said. “Usually, they started with a flat, featureless surface with very small, randomized bumps. We wondered if introducing more complex initial conditions, like the meandering stream imprint Jeffrey used in his earlier experiment, would make a big difference in the numerical models.”

Changing the initial modeling conditions also had the researchers rethinking the importance of lateral versus vertical erosion, challenging the traditional modeling practice, which concentrates on vertical erosion. They decided to run numerical simulations using vertical erosion and another set using both vertical and lateral erosion.

Incorporating these new ideas, the team created a landscape evolution model that simulates a river network with an initial S-shaped channel and vertical erosion only. When the model runs 5 million years into the future, the river carves a deep canyon that retains the S-shaped pattern, which is not very realistic, Kwang said.

At the 5 million-year mark, the team introduced to the model a vertical and lateral erosion scenario, developed by Langston and University of Colorado, Boulder professor Gregory Tucker. The channels begin to migrate and reorganize the river network into a more realistic dendritic pattern.

At the 10 million-year mark, the model starts to resemble a tree and continues to become more dendritic through to the end of the model at 15 million years.  

“Our work shows that lateral migration of channels, a mechanism commonly ignored in landscape evolution models, has potential to restructure river networks drastically,” Kwang said.

The team plans to examine the role that sediment type and geologic features such as mountains, faults and fractures play in this process. There are places where the underlying geology has an enormous influence on drainage patterns, the researchers said, and accounting for them in the models is needed. 

Understanding how rivers evolve naturally or rebound after engineering measures have rerouted them will help decision-makers plan future land use, the researchers said. “It is important to look ahead tens to hundreds or even thousands of years when planning the storage of toxic waste, for example,” Kwang said. “Lateral migration of nearby rivers could pose a significant threat over time.”

“We’ve known about lateral migration of mountain rivers for years, but nobody thought to put it into a model and run it at hundreds to thousands to millions of years,” Parker said. “This was the first time that anyone has attempted this, and we are very excited about the results.”

The National Science Foundation supported this research.

Parker also is affiliated with the Center for Advanced Study and the departments of geology and of geography and geographic information science at Illinois. Kwang is a postdoctoral researcher in geosciences at the University of Massachusetts, Amherst. Langston is a professor of geography and geospatial sciences at Kansas State University.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Scientists identify potential drug candidates for deadly pediatric leukemia

image: Senior study author Ani Deshpande (left) and co-first authors Bo-Rui Chen (middle) and Anagha Deshpande (right).

Image: 
Sanford Burnham Prebys

LA JOLLA, CALIF. - April 14, 2021 - Scientists at Sanford Burnham Prebys Medical Discovery Institute have shown that two existing drug candidates--JAK inhibitors and Mepron--hold potential as treatments for a deadly acute myeloid leukemia (AML) subtype that is more common in children. The foundational study, published in the journal Blood, is a first step toward finding effective treatments for the hard-to-treat blood cancer.

"While highly successful therapies have been found for other blood cancers, most children diagnosed with this AML subtype are still treated with harsh, toxic chemotherapies," says Ani Deshpande, Ph.D., assistant professor in Sanford Burnham Prebys' National Cancer Institute (NCI)-designated Cancer Center and senior author of the study. "We are excited about this study because we uncovered two promising therapeutic targets for which drugs already exist, setting the stage for potential clinical trials."

AML is often caused by a chromosomal fusion--when part of one chromosome attaches to another chromosome--resulting in a fused protein that drives out-of-control growth of immature blood cells. There are dozens of different fusion proteins that can cause the cancer, and survival rates may vary depending on the fusion type. This study focused on a subtype of AML with fusions involving a protein called AF10, a subtype that is more common in pediatric AML and deadlier. Many children with these fusions don't survive very long, and those who do respond to treatment often have debilitating, lifelong side effects that stem from receiving chemotherapy at a young age.

"After Luke battled AML, we learned there are little to no treatment options for his fusion type and that this type of cancer research is really underfunded," says Rena Johnson, co-founder of the Luke Tatsu Johnson Foundation, which partially funded the research. "These findings fill me with hope for a future where children like Luke can grow up and thrive."

Targeting cancer-specific fusions

When chromosomal fusions were first identified as a cause of AML, cancer researchers were hopeful that treatments could be found relatively easily. The protein produced by the fusion is unique to cancer cells, providing a target that could selectively kill tumors and spare healthy cells.

"One of the most successful cancer drugs ever developed, Gleevec, actually targets a fusion protein. This drug melts away tumors in people with another type of blood cancer, chronic myelogenous leukemia, or CML," explains Deshpande. "However, we lucked out a bit because the fusion protein that Gleevec targets is a kinase, a type of protein that is relatively easy to make drugs against. Now we know that most fusions involve other types of proteins such as transcription factors or chromatin modifiers, which aren't as easily druggable."

To overcome this hurdle, Deshpande and his team got creative. The scientists decided to map which proteins interact with--or are "friends" with--the abnormal AF10 fusion proteins, leveraging mouse models that allow the protein's production to be switched on and off. This work revealed that AF10 fusion proteins activate the inflammatory signaling proteins JAK1 and STAT3--both druggable targets for which inhibitors already exist. The scientists showed that both JAK1 and STAT3 inhibitors slowed the growth of human AML cells, and that Mepron, a STAT3 inhibitor, melted AML tumors and extended survival in mice with the CALM-AF10 mutation.

"Interestingly, many people with AML already get Mepron to protect against infection after a bone marrow transplant, and analysis suggests that is linked to better outcomes. Our studies show that AF10 fusion positive patients may benefit from this drug," says Bo-Rui Chen, Ph.D., who completed the study as a postdoctoral researcher in the Deshpande lab, and is the co-first author of the study. "We are also very excited about our findings because one JAK inhibitor is already FDA-approved and many others are being developed for autoimmune disorders, which means they could be advanced to the clinic relatively quickly."

The scientists do caution that while the results are exciting, more data is needed before children or adults with these AML subtypes can start receiving the drug candidates.

"Before we can proceed to clinical trials testing these drugs in people with these AML subtypes, we need to test both drugs in larger cohorts of mouse models," says Anagha Deshpande, Ph.D., senior research associate in the Deshpande lab and co-first author of the study. "However, if those studies are favorable, clinical development will be accelerated since these drugs are already known to be safe in humans."

Credit: 
Sanford Burnham Prebys

How we can reduce food waste and promote healthy eating

image: In a recent journal article, University of Illinois researchers Brenna Ellison (left) and Melissa Pflugh Prescott discuss ways to reduce food waste while promoting healthy nutrition.

Image: 
College of ACES, University of Illinois.

URBANA, Ill. - Food waste and obesity are major problems in developed countries. They are both caused by an overabundance of food, but strategies to reduce one can inadvertently increase the other. A broader perspective can help identify ways to limit food waste while also promoting healthy nutrition, two University of Illinois researchers suggest.

"You can reduce food waste by obtaining less or eating more. Our concern was that if people are reducing waste by eating more, what does that mean for nutrition? And how do we think about these tradeoffs in a way that promotes both good nutrition outcomes and good food waste outcomes? Public policies have generally focused on either obesity or food waste, but rarely considered them together," says Brenna Ellison, associate professor in the Department of Agricultural and Consumer Economics (ACE) at U of I.

Ellison and Melissa Pflugh Prescott, assistant professor in the Department of Food Science and Human Nutrition (FSHN) at U of I, discuss a systems approach to addressing food waste and nutrition in a new paper, published in Journal of Nutrition Education and Behavior.

Food waste refers to edible food that goes unconsumed for any of a variety of reasons. It occurs at all levels of the supply chain, from the farm through transportation, processing, retail, and food service to the consumer.

Food waste is often calculated by weight or by calories, Ellison explains. If you calculate by weight, dairy products, vegetables, grain products, and fruit account for the majority of food loss. But when converted to calories, added fats and oils, grain products, and added sugars and sweeteners are the top categories for food waste. Encouraging increased consumption of those foods could have negative health consequences, she notes.
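The weight-versus-calorie distinction above can be made concrete with a small sketch. Note that all quantities and energy densities below are hypothetical illustrative values, not figures from the article; only the qualitative reversal in rankings mirrors the point being made.

```python
# Hypothetical example: the same food-waste inventory ranked two ways.
# All quantities and energy densities are made-up illustrative values,
# not figures from the study.
waste_kg = {
    "vegetables": 12.0,
    "dairy": 10.0,
    "grain products": 8.0,
    "added fats and oils": 4.0,
    "added sugars and sweeteners": 2.5,
}
kcal_per_kg = {
    "vegetables": 250,
    "dairy": 600,
    "grain products": 3500,
    "added fats and oils": 8800,
    "added sugars and sweeteners": 3900,
}

# Ranking by weight favors bulky, water-rich foods...
by_weight = sorted(waste_kg, key=waste_kg.get, reverse=True)
# ...while ranking by calories favors energy-dense fats and grains.
by_calories = sorted(waste_kg, key=lambda f: waste_kg[f] * kcal_per_kg[f], reverse=True)

print("by weight:  ", by_weight[:2])
print("by calories:", by_calories[:2])
```

With these example numbers, vegetables and dairy top the list by weight, while added fats and grain products top it by calories, illustrating why the two accounting methods point policy in different directions.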

In their paper, Ellison and Prescott provide strategies for reducing food waste in a variety of settings, including food service, retail, schools, and homes.

Some restaurants and university dining halls that offer buffet-style dining have tried to limit food waste by imposing fines or offering incentives to ensure people finish the food they select. While such strategies may limit waste, they encourage overeating, the researchers say. They suggest instead using behavioral cues such as smaller plates and scoops that nudge people to select less food.

School meals are an important means to improve public health and introduce children to new, healthy foods. However, plate waste is a persistent problem in school lunch settings. Schools can use salad bars to encourage students to try new items, but doing so creates pre-plate waste because some items are never selected. COVID-19 modifications pose additional challenges to safe strategies for food recovery, but there are still viable options, Prescott states.

"For example, schools can take items like whole apples or unopened cartons of milk and recycle them. They can reuse them in future meals, making sure they are following food safety protocols. Or they can donate them to food pantries and other nonprofit organizations, or create backpack programs where they can send some of those items home with students who may be struggling with food insecurity. There are certainly ways to do this safely," she says.

The researchers note that households are responsible for some of the costliest food waste, because they are at the end of the supply chain. Consumers throw away food for various reasons, such as food safety concerns, desire to eat fresh food, and poor food management.

Choosing more processed food could reduce waste but is not desirable from a health perspective. Learning strategies for better meal planning and using a list for grocery shopping are better ways to accomplish both waste reduction and improved nutrition goals, Ellison says.

"We know that even if you try to plan meals, it can be hard to follow through. It's important to be realistic about planning. For example, if you know that you're likely to order takeout one or two nights a week, then plan for that. Don't buy food you won't need," she notes.

The researchers also suggest ways to encourage good nutrition through small changes. "If you have young kids, you can try frozen vegetables. You can take a little bit out at a time and do some testing with your children; you won't have a whole package that might go to waste," Ellison says.

Better cooking skills are also important, Prescott states.

"Cooking is a win-win in terms of promoting health and reducing food waste. There is evidence that links cooking and improved diet quality. And people who cook might over time become more skilled at repurposing leftovers, and being more creative with foods that are about to go to waste," she says. "Freezing leftovers for future meals is also a helpful strategy, if you have freezer space."

Prescott notes that some of these strategies may be difficult for families that lack adequate equipment for cooking, storing, and freezing. She and Ellison are working to develop a cooking education curriculum primarily addressing the challenges facing low-income households who may have limited resources available.

The two researchers are also planning a study on school nutrition aiming to identify behavioral nudges to increase fruit and vegetable consumption while reducing waste, and a project focusing on safety issues of food recovery in schools.

Illinois Extension provides information and resources for families wanting to learn more about nutrition and wellness.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

New method measures super-fast, free electron laser pulses

image: An optical shutter created by ionization allows an ordinary camera to measure a femtosecond pulse from a free electron laser.

Image: 
Los Alamos National Laboratory

LOS ALAMOS, N.M., April 14, 2021--New research shows how to measure the super-short bursts of high-frequency light emitted from free electron lasers (FELs). By using the light-induced ionization itself to create a femtosecond optical shutter, the technique encodes the electric field of the FEL pulse in a visible light pulse so that it can be measured with a standard, slow, visible-light camera.

"This work has the potential to lead to a new online diagnostic for FELs, where the exact pulse shape of each light pulse can be determined. That information can help both the end-user and the accelerator scientists," said Pamela Bowlan, Los Alamos National Laboratory's lead researcher on the project. The paper was published April 12, 2021 in Optica. "This work also paves the way for measuring x-ray pulses or femtosecond time-resolved x-ray images."

Free electron lasers, which are driven by kilometer-long linear accelerators, emit bursts of short-wavelength light lasting one quadrillionth of a second. As a result, they can act as strobe lights for viewing the fastest events in nature--atomic or molecular motion--and therefore promise to revolutionize our understanding of almost any kind of matter.

Measuring such a vanishingly rapid burst of ionizing radiation has previously proved challenging. But while electronics are too slow to measure these light pulses, optical effects can be essentially instantaneous. Squeezing all of the energy of a continuous laser into short pulses means that femtosecond laser pulses are extremely bright and have the ability to modify a material's absorption or refraction, creating effectively instantaneous "optical shutters."

This idea has been widely used for measuring visible-light femtosecond laser pulses. But the higher-frequency extreme ultraviolet light from FELs interacts with matter differently; this light is ionizing, meaning that it pulls electrons out of their atoms. The researchers showed that ionization itself can be used as a "femtosecond optical shutter" for measuring extreme ultraviolet laser pulses at 31 nanometers.

"Ionization typically changes the optical properties of a material for nanoseconds, which is 10,000 times slower than the FEL pulse duration," Bowlan said. "But the duration of the rising edge of ionization, determined by how long it takes the electron to leave the atom, is significantly faster. This resulting change in the optical properties can act as the fast shutter needed to measure the FEL pulses."

Credit: 
DOE/Los Alamos National Laboratory

Mediterranean diet with lean beef may lower risk factors for heart disease

UNIVERSITY PARK, Pa. -- Red meat may have a reputation for being bad for the heart, but new research found that lean beef may have a place in healthy diets after all.

In a randomized controlled study, researchers found that a Mediterranean diet combined with small portions of lean beef helped lower risk factors for developing heart disease, such as LDL cholesterol.

Jennifer Fleming, assistant teaching professor of nutrition at Penn State, said the study suggests that healthy diets can include a wide variety of foods, such as red meat, and still be heart friendly.

"When you create a healthy diet built on fruits, vegetables, and other plant-based foods, it leaves room for moderate amounts of other foods like lean beef," Fleming said. "There are still important nutrients in beef that you can benefit from by eating lean cuts like the loin or round, or 93% lean ground beef."

David J. Baer, research leader at the United States Department of Agriculture - Agricultural Research Service, and study co-principal investigator, added, "This study highlights the importance of including lean beef in a Mediterranean dietary pattern that can yield heart-healthy benefits."

According to the researchers, red meat such as beef has been associated with an increased risk for cardiovascular disease in previous studies. But it has remained unclear whether red meat itself causes these effects or whether they stem from other diet and lifestyle choices that tend to accompany red meat consumption.

Additionally, the researchers said many studies have lumped fresh and processed meats together when evaluating red meat consumption and health. Processed red meats have a very different nutrient profile than fresh meat -- processed meat products are much higher in sodium, for example -- which could explain the adverse associations reported in red meat research.

"The Mediterranean diet is traditionally low in red meat," Fleming said. "But, knowing that many Americans enjoy red meat, we wanted to examine how combining lean beef with the Mediterranean diet would affect cardiovascular risk markers."

The study included 59 participants. Each participant consumed each diet for four weeks, with a one-week break between diet periods, and blood samples were drawn at the beginning of the study as well as after each diet period.

Three of the four diet periods added different amounts of beef to a Mediterranean diet plan that provided 41% of calories from fat, 42% from carbohydrates, and 17% from protein; the fourth was a control diet reflecting average American eating patterns. One Mediterranean diet period provided 0.5 ounces of beef a day, the amount recommended in the Mediterranean diet pyramid. A second provided 2.5 ounces a day, roughly what the average American eats in a day, and the third included 5.5 ounces a day, an amount previous research connected with certain heart-health benefits.

All three Mediterranean diet periods included olive oil as the predominant fat source, three to six servings of fruits, and six or more servings of vegetables a day. The beef included in these diet periods was either lean or extra-lean.
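The reported energy split can be translated into daily gram targets with standard 9/4/4 kcal-per-gram conversion factors. A minimal sketch follows; the 2,000 kcal/day figure is an assumed example intake, since the release does not state participants' calorie levels.

```python
# Convert the Mediterranean diet's reported energy split (41% fat,
# 42% carbohydrate, 17% protein) into daily gram targets.
# The 2,000 kcal/day total is an assumed example intake, not a figure
# from the study.
KCAL_PER_GRAM = {"fat": 9, "carbohydrate": 4, "protein": 4}
energy_share = {"fat": 0.41, "carbohydrate": 0.42, "protein": 0.17}

def grams_per_day(total_kcal: float) -> dict:
    """Return grams of each macronutrient for a given daily energy intake."""
    return {
        macro: round(total_kcal * share / KCAL_PER_GRAM[macro], 1)
        for macro, share in energy_share.items()
    }

targets = grams_per_day(2000)
print(targets)  # fat ≈ 91.1 g, carbohydrate = 210.0 g, protein = 85.0 g
```

At 2,000 kcal/day, the same percentages yield about 91 g of fat, 210 g of carbohydrate, and 85 g of protein.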

Fleming said they were able to use a special technology called nuclear magnetic resonance -- or NMR technology -- to measure the number and size of lipoprotein particles. She said this study was one of the first randomized controlled trials of the Mediterranean diet to use the technique.

"This is important because there is growing evidence to suggest that LDL particle number is more strongly associated with cardiovascular disease risk than total blood LDL concentrations alone," Fleming said. "Moreover, we were able to identify changes in apolipoproteins, specifically apoB, which are also associated with increased CVD risk."

After the data were analyzed, the researchers found that participants all had lower LDL cholesterol following the Mediterranean diet periods compared to the average American diet. But while the total numbers of LDL particles were reduced following all three Mediterranean diet periods, they were only significantly decreased when following those periods that included 0.5 or 2.5 ounces of beef a day compared to the average American diet.

Additionally, non-HDL cholesterol and apoB -- a protein involved in lipid metabolism and a marker of CVD risk -- were lower following all three Mediterranean diet periods compared to the average American diet.

Fleming said the study -- recently published in the American Journal of Clinical Nutrition -- underscores the importance of consuming healthy, well-balanced diets.

"Our study helped illustrate the benefits associated with a healthy Mediterranean dietary pattern that embodies balance, variety and the inclusion of nutrient-rich components, which can include low to moderate amounts of lean beef," Fleming said.

Credit: 
Penn State

Photonic MEMS switches going commercial

image: Partial SEM image of the switch matrix: the whole structure patterned in the top silicon layer by dry etching seems to "float" as the oxide is removed. Each matrix unit contains an electrostatic comb drive that can selectively move portions of the waveguides to establish a desired light path from one of the 32 input ports to one of the 32 output ports.

Image: 
Han et al.

One of the technical challenges the current data revolution faces is finding an efficient way to route the data. This task is usually performed by electronic switches, while the data itself is transferred using light confined in optical waveguides. For this reason, conversion from an optical to an electronic signal and back-conversion are required, which costs energy and limits the amount of transferable information. These drawbacks are avoidable with a full optical switch operation. One of the most promising approaches is based on microelectromechanical systems (MEMS), thanks to decisive advantages such as low optical loss and energy consumption, monolithic integration, and high scalability. Indeed, the largest photonic switch ever demonstrated uses this approach.

Commercialization

Until now, those MEMS photonic switches have been fabricated using nonstandard and complex processes in laboratory environments, which has made their commercialization difficult. But University of California Berkeley researchers initiated a collaboration that gathered engineers from different universities worldwide to demonstrate that the difficulties could be overcome. They created a photonic MEMS switch using a commercially available complementary metal-oxide-semiconductor (CMOS) fabrication process without modification. The use of this well-known microfabrication platform represents a huge step toward industrialization because it is compatible with most current technologies, cost-effective, and suited for high-volume production.

Switch fabrication

As the researchers report in a paper recently published in SPIE's new Journal of Optical Microsystems, the photonic switch was fabricated on 200-mm silicon-on-insulator (SOI) wafers using regular photolithographic and dry-etching processes in a commercial foundry. The whole photonic integrated circuit is contained in the top silicon layer, which has the advantage of limiting the number of fabrication steps: two different dry-etching processes, one lift-off to create metal interconnects, and the final release of the MEMS by oxide etching. The switch design includes 32 input ports and 32 output ports, a 32 x 32 matrix (5.9 mm x 5.9 mm in full size) of the same replicated element. In each element, light is transferred from one channel to the other by decreasing the distance between two waveguides so that their modes couple, an operation performed by an electrostatic comb drive also included in the top silicon layer.

"For the first time, large-scale and integrated MEMS photonic switches have been fabricated in a commercial foundry on 200-mm SOI wafers. In my opinion, this is a convincing demonstration that this technology is suited for commercialization and mass production. They could be incorporated in data communication systems in the near future," said Jeremy Béguelin, one of the Berkeley researchers.

Promising path

The researchers evaluated the performance of the photonic switches by measuring several important parameters: a light power loss of 7.7 dB through the entire switch, an optical bandwidth of about 30 nm around the 1550 nm wavelength, and a switching speed of 50 μs. These values already compare favorably with other photonic switch approaches, and ways to improve them have been identified.
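The 7.7 dB end-to-end loss can be put in linear terms with the standard decibel formula; a minimal sketch:

```python
# Convert a decibel power loss into the fraction of optical input power
# that survives, using the standard relation P_out/P_in = 10^(-dB/10).
def db_loss_to_fraction(loss_db: float) -> float:
    """Fraction of input power reaching the output for a given dB loss."""
    return 10 ** (-loss_db / 10)

# The reported 7.7 dB loss through the whole 32 x 32 matrix means
# roughly 17% of the input light reaches the output port.
surviving = db_loss_to_fraction(7.7)
print(f"{surviving:.3f}")  # ≈ 0.170
```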

By using a CMOS-compatible fabrication process and SOI wafers, the research team created a robust and efficient photonic switch based on MEMS technology. Such work opens a promising path toward the commercialization and mass production of large and integrated photonic switches, a future key component of data communication networks.

Credit: 
SPIE--International Society for Optics and Photonics

Little swirling mysteries: Uncovering dynamics of ultrasmall, ultrafast groups of atoms

image: Artist's conception of polar vortices moving in ferroelectric material. These small groupings of atoms must be excited with high-frequency electric fields to move, but studying their behavior may lead to new innovations in data storage and processing.

Image: 
Ellen Weiss/Argonne National Laboratory

Our high-speed, high-bandwidth world constantly requires new ways to process and store information. Semiconductors and magnetic materials have made up the bulk of data storage devices for decades. In recent years, however, researchers and engineers have turned to ferroelectric materials, a type of crystal that can be manipulated with electricity.

In 2016, the study of ferroelectrics got more interesting with the discovery of polar vortices — essentially spiral-shaped groupings of atoms — within the structure of the material. Now a team of researchers led by the U.S. Department of Energy’s (DOE) Argonne National Laboratory has uncovered new insights into the behavior of these vortices, insights that may be the first step toward using them for fast, versatile data processing and storage.

“You don’t want something that does what a transistor does, because we have transistors already. So you look for new phenomena. What aspects can they bring? We look for objects with faster speed. This is what inspires people. How can we do something different?” — John Freeland, senior physicist, Argonne National Laboratory

What is so important about the behavior of groups of atoms in these materials? For one thing, these polar vortices are intriguing new discoveries, even when they are just sitting still. For another, this new research, published as a cover story in Nature, reveals how they move. This new type of spiral-patterned atomic motion can be coaxed into occurring, and can be manipulated. That’s good news for this material’s potential use in future data processing and storage devices.

“Although the motion of individual atoms alone may not be too exciting, these motions join together to create something new — an example of what scientists refer to as emergent phenomena — which may host capabilities we could not imagine before,” said Haidan Wen, a physicist in Argonne’s X-ray Science Division (XSD).

These vortices are indeed small — about five or six nanometers wide, thousands of times smaller than the width of a human hair, or about twice as wide as a single strand of DNA. Their dynamics, however, cannot be seen in a typical laboratory environment. They need to be excited into action by applying an ultrafast electric field.

All of which makes them difficult to observe and to characterize. Wen and his colleague, John Freeland, a senior physicist in Argonne’s XSD, have spent years studying these vortices, first with the ultrabright X-rays of the Advanced Photon Source (APS) at Argonne, and most recently with the free-electron laser capabilities of the LINAC Coherent Light Source (LCLS) at DOE’s SLAC National Accelerator Laboratory. Both the APS and LCLS are DOE Office of Science User Facilities.

At the APS, researchers used lasers to create a new state of matter and X-ray diffraction to obtain a comprehensive picture of its structure. In 2019, the team, led jointly by Argonne and The Pennsylvania State University, reported their findings in a Nature Materials cover story, most notably that the vortices can be manipulated with light pulses. Data was taken at several APS beamlines: 7-ID-C, 11-ID-D, 33-BM and 33-ID-C.

“Although this new state of matter, a so called supercrystal, does not exist naturally, it can be created by illuminating carefully engineered thin layers of two distinct materials using light,” said Venkatraman Gopalan, professor of materials science and engineering and physics at Penn State.

“A lot of work went into measuring the motion of a tiny object,” Freeland said. “The question was, how do we see these phenomena with X-rays? We could see that there was something interesting with the system, something we might be able to characterize with ultrafast timescale probes.”

The APS was able to take snapshots of these vortices at nanosecond time scales — a hundred million times faster than the blink of an eye — but the research team discovered this was not fast enough.

“We knew something exciting must be happening that we couldn’t detect,” Wen said. “The APS experiments helped us pinpoint where we want to measure, at faster time scales that we were not able to access at the APS. But LCLS, our sister facility at SLAC, provides the exact tools needed to solve this puzzle.” 

With their prior research in hand, Wen and Freeland joined colleagues from SLAC and DOE’s Lawrence Berkeley National Laboratory (Berkeley Lab), along with Gopalan and Long-Qing Chen of Penn State; Jirka Hlinka, head of the Department of Dielectrics at the Institute of Physics of the Czech Academy of Sciences; Paul Evans of the University of Wisconsin-Madison; and their teams, to design a new experiment that would tell them how these atoms behave, and whether that behavior could be controlled. Using what they learned at the APS, the team, including the lead authors of the new paper, Qian Li and Vladimir Stoica, both postdoctoral researchers at the APS at the time of this work, pursued further investigations at the LCLS at SLAC.

“LCLS uses X-ray beams to take snapshots of what atoms are doing at timescales not accessible to conventional X-ray apparatus,” said Aaron Lindenberg, associate professor of materials science and engineering and photon sciences at Stanford University and SLAC. “X-ray scattering can map out structures, but it takes a machine like LCLS to see where the atoms are and to track how they are dynamically moving at unimaginably fast speeds.”

Using a new ferroelectric material designed by Ramamoorthy Ramesh and Lane Martin at Berkeley Lab, the team was able to excite a group of atoms into swirling motion with an electric field at terahertz frequencies, roughly 1,000 times higher than the clock frequency of a typical cell phone processor. They were then able to capture images of that swirling motion at femtosecond timescales. A femtosecond is a quadrillionth of a second, a period so short that light can only travel about the length of a small bacterium before it is over.
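The timescale comparisons in these paragraphs can be sanity-checked with quick arithmetic. The reference values below (a blink of roughly 0.1 seconds, a cell phone processor clock of roughly 1 GHz) are assumed typical figures, not numbers from the article:

```python
# Sanity checks on the article's timescale comparisons.
# Assumed reference values (not from the article): blink ~ 0.1 s,
# phone processor clock ~ 1 GHz, speed of light 3e8 m/s.

BLINK_S = 0.1            # typical eye blink duration, seconds
NANOSECOND = 1e-9        # s
FEMTOSECOND = 1e-15      # s
SPEED_OF_LIGHT = 3e8     # m/s
PHONE_CLOCK_HZ = 1e9     # ~1 GHz
TERAHERTZ = 1e12         # Hz

# "nanosecond time scales, a hundred million times faster than a blink"
print(BLINK_S / NANOSECOND)          # 1e+08, i.e. a hundred million

# "in a femtosecond, light travels about the length of a small bacterium"
print(SPEED_OF_LIGHT * FEMTOSECOND)  # 3e-07 m, i.e. 0.3 micrometers

# "terahertz frequencies, roughly 1,000 times faster than a phone processor"
print(TERAHERTZ / PHONE_CLOCK_HZ)    # 1000.0
```

All three claims hold: 0.3 micrometers is indeed on the scale of a small bacterium, and a terahertz field oscillates about a thousand times faster than a gigahertz-class processor clock.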

With this level of precision, the research team saw a new type of motion they had not seen before.

“Despite theorists having been interested in this type of motion, the exact dynamical properties of polar vortices remained nebulous until the completion of this experiment,” Hlinka said. “The experimental findings helped theorists to refine the model, providing microscopic insight into the experimental observations. It was a real adventure to reveal this sort of concerted atomic dance.”

This discovery opens up a new set of questions that will take further experiments to answer, and planned upgrades of both the APS and LCLS light sources will help push this research further. LCLS-II, now under construction, will increase its X-ray pulses from 120 to 1 million per second, enabling scientists to look at the dynamics of materials with unprecedented accuracy.

And the APS Upgrade, which will replace the current electron storage ring with a state-of-the-art model that will increase the brightness of the coherent X-rays up to 500 times, will enable researchers to image small objects like these vortices with nanometer resolution. 

Researchers can already see the possible applications of this knowledge. The fact that these materials can be tuned by applying small changes opens up a wide range of possibilities, Lindenberg said.

“From a fundamental perspective we are seeing a new type of matter,” he said. “From a technological perspective of information storage, we want to take advantage of what is happening at these frequencies for high-speed, high-bandwidth storage technology. I am excited about controlling the properties of this material, and this experiment shows possible ways of doing this in a dynamical sense, faster than we thought possible.”

Wen and Freeland agreed, noting that these materials may have applications that no one has thought of yet.

“You don’t want something that does what a transistor does, because we have transistors already,” Freeland said. “So you look for new phenomena. What aspects can they bring? We look for objects with faster speed. This is what inspires people. How can we do something different?”

Credit: 
DOE/Argonne National Laboratory

Self-assembling nanofibers prevent damage from inflammation

image: A graphic of a supramolecular peptide nanofiber bearing complement protein C3dg (blue), B-cell epitopes (green), and T-cell epitopes (purple).

Image: 
Chelsea Fries

Biomedical engineers at Duke University have developed a self-assembling nanomaterial that can help limit damage caused by inflammatory diseases by activating key cells in the immune system. In mouse models of psoriasis, the nanofiber-based drug has been shown to mitigate damaging inflammation as effectively as a gold-standard therapy.

One of the hallmarks of inflammatory diseases, like rheumatoid arthritis, Crohn's disease and psoriasis, is the overproduction of signaling proteins, called cytokines, that cause inflammation. One of the most significant inflammatory cytokines is a protein called TNF (tumor necrosis factor). Currently, the best treatment for these diseases involves the use of manufactured antibodies, called monoclonal antibodies, which are designed to target and neutralize TNF and reduce inflammation.

Although monoclonal antibodies have enabled better treatment of inflammatory diseases, the therapy is not without its drawbacks, including a high cost and the need for patients to regularly inject themselves. Most significantly, the drugs also have uneven efficacy, as they may sometimes not work at all or eventually stop working as the body learns to make antibodies that can destroy the manufactured drug.

To circumvent these issues, researchers have been exploring how immunotherapies can help teach the immune system how to generate its own therapeutic antibodies that can specifically limit inflammation.

"We're essentially looking for ways to use nanomaterials to induce the body's immune system to become an anti-inflammatory antibody factory," said Joel Collier, a professor of biomedical engineering at Duke University. "If these therapies are successful, patients need fewer doses of the therapy, which would ideally improve patient compliance and tolerance. It would be a whole new way of treating inflammatory disease."

In their new paper, which appeared online in the Proceedings of the National Academy of Sciences on April 5, Collier and Kelly Hainline, a graduate student in the Collier lab, describe how novel nanomaterials could assemble into long nanofibers that include a specialized protein, called C3dg. These fibers were then able to activate immune system B-cells to generate antibodies.

"C3dg is a protein that you'd normally find in your body," said Hainline. "The protein helps the innate immune system and the adaptive immune system communicate, so it can activate specific white blood cells and antibodies to clear out damaged cells and destroy antigens."

Due to the protein's ability to interface between different cells in the immune system and activate the creation of antibodies without causing inflammation, researchers have been exploring how C3dg could be used as a vaccine adjuvant, a substance that helps boost the immune response to a desired target or pathogen.

In their new nanomaterial, Hainline and Collier were able to put this idea to the test by weaving key fragments of the C3dg protein with components of TNF into nanofibers. The C3dg protein would trigger the B-cells to create antibodies, while the TNF components would provide a blueprint of what the antibodies need to seek out and destroy.

"When Kelly assembled the C3dg protein and key portions of TNF into these nanofibers, she saw that there was a strong B-cell response, which means there was an increased production of antibodies that targeted TNF," said Collier. "In standard mouse models of inflammation, mice experience a temperature change where their internal temperature will drop. But when Kelly delivered her C3dg nanofibers, it was highly protective, and the mice didn't experience an inflammatory response."

When the team tested their nanomaterial in the psoriasis mouse model, they found that the nanofibers carrying C3dg were as effective as a monoclonal antibody therapy. And because C3dg is normally found in the body, it wasn't flushed out of the system by anti-drug antibodies.

After examining the psoriasis model, the team made a surprising discovery -- C3dg wasn't just stimulating antibody production in the B-cells, it was also influencing the response of T-cells.

"We observed that nanofibers that only contained the C3dg components without the TNF components still showed a therapeutic benefit to our models, which was surprising. But I think the most significant discovery was seeing a beneficial T-cell response that was activated by a protein you'd naturally find in your body," said Hainline. "That kind of response had been seen before with other proteins, but we haven't seen any reports of people using that response with C3dg."

For their next steps, the team hopes to further explore the mechanisms behind this beneficial T-cell activation. They'll also pursue additional experiments to explore the response to similar nanomaterials in rheumatoid arthritis models.

"We're still learning about this T-cell response, and we're trying to understand how it's involved," said Collier. "Ultimately, we'd love to see if C3dg can be used as a universal component in multiple different therapies against inflammation, especially if we can swap out the TNF segments with a different target. This work clearly indicates that nanomaterials involving C3dg warrant further development as immunotherapies."

Credit: 
Duke University

Stretching the boundaries of medical tech with wearable antennae

image: The wearable transmitter is designed to compress its top layer in a double arch pattern, shown here, to respond to movement without compromising signal transmission.

Image: 
Huanyu Cheng, Penn State

Current research on flexible electronics is paving the way for wireless sensors that can be worn on the body and collect a variety of medical data. But where do the data go? Without a similar flexible transmitting device, these sensors would require wired connections to transmit health data.

Huanyu "Larry" Cheng, Dorothy Quiggle Career Development Assistant Professor of Engineering Science and Mechanics in the Penn State College of Engineering, and two international teams of researchers are developing devices to explore the possibilities of wearable, flexible antennae. They published two papers in April in Nano-Micro Letters and Materials & Design.

Wearable antenna bends, stretches, compresses without compromising function

Like wearable sensors, a wearable transmitter needs to be safe for use on human skin, functional at room temperature and able to withstand twisting, compression and stretching. The flexibility of the transmitter, though, poses a unique challenge: When antennae are compressed or stretched, their resonance frequency (RF) changes and they transmit radio signals at wavelengths that may not match those of the antenna's intended receivers.

"Changing the geometry of an antenna will change its performance," Cheng said. "We wanted to target a geometric structure that would allow for movement while leaving the transmitting frequency unchanged."
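The antenna in this work is a patterned copper mesh, not a simple dipole, but the geometry-frequency coupling Cheng describes can be illustrated with the textbook half-wave dipole relation f = c / (2L). This is a rough sketch of the general principle the team's stretch-invariant design works around; the 5 cm length and 10% stretch are hypothetical values:

```python
# Illustration only (not the team's actual antenna model): for an ideal
# half-wave dipole in free space, resonance frequency is inversely
# proportional to length, f = c / (2 * L). Stretching the antenna
# therefore shifts its resonance away from the intended receiver band.

SPEED_OF_LIGHT = 3e8  # m/s

def dipole_resonance_hz(length_m):
    """Resonance frequency of an ideal half-wave dipole of given length."""
    return SPEED_OF_LIGHT / (2 * length_m)

rest = dipole_resonance_hz(0.050)       # 5 cm dipole at rest
stretched = dipole_resonance_hz(0.055)  # same dipole stretched by 10%

print(rest / 1e9)                       # 3.0 (GHz)
shift_pct = 100 * (rest - stretched) / rest
print(round(shift_pct, 1))              # 9.1 (% downward shift)
```

A plain conductor stretched by 10% would detune by roughly 9%, which is why a geometry whose effective electrical length stays constant under deformation, like the double-arch mesh described here, is needed for a wearable transmitter.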

The research team created the flexible transmitter in layers. Building upon previous research, they fabricated a copper mesh with a pattern of overlapping, wavy lines. This mesh makes up the bottom layer, which touches the skin, and the top layer, which serves as the radiating element in the antenna. The top layer creates a double arch when compressed and stretches when pulled -- and moves between these stages in an ordered set of steps. The structured process through which the antenna mesh arches, flattens and stretches improves the overall flexibility of the layer and reduces RF fluctuations between the antenna's states, according to Cheng.

Energy efficiency was another priority. The bottom mesh layer keeps radio signals from interacting with the skin. This implementation, beyond preventing tissue damage, avoids a loss of energy caused by tissue degrading the signal. The antenna's ability to maintain a steady RF also allows the transmitter to collect energy from radio waves, Cheng said, potentially lowering energy consumption from outside sources.

The transmitter, which can send wireless data at a range of nearly 300 feet, can easily integrate a number of computer chips or sensors, Cheng said. With further research, it could have applications in health monitoring and clinical treatments, as well as energy generation and storage.

"We've demonstrated robust wireless communication in a stretchable transmitter," Cheng said. "To our knowledge, this is the first wearable antenna that exhibits almost completely unchanged resonance frequency over a relatively large range of stretching."

Enabling further antenna customization with constant variables

After developing the stretchable antenna prototype, Cheng analyzed it with another research team. The researchers aimed to identify new fundamental pathways for fine-tuning such a device that could be applied to similar, future research.

"We wanted to investigate the problem by examining the connection between mechanical properties and electromagnetic behavior," Cheng said. "Highlighting this relationship can reveal insights about the influence of different parameters on antenna performance."

The team fabricated an antenna with layers and a mesh pattern similar to their previous prototype but lacking the double-arch compression structure. They measured the deformation of the antenna as the mesh was stretched at different intervals, then used computer simulations to examine the relationship between the deformation and the antenna performance.

To simplify the analysis of the antenna's radio signal transmission, the researchers used a mathematical technique to convert certain measurements -- such as the width and angle of the repeating mesh pattern -- into constant values. With this process, called normalization, researchers can focus on the relationship between specific variables by negating the influence of the normalized variables.
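The paper's actual normalization scheme is not detailed here, but the general idea of reducing measurements to dimensionless constants can be sketched as follows. The parameter names and values are hypothetical, chosen only to show how two designs at different absolute scales collapse onto the same normalized point:

```python
# Generic sketch of normalization (hypothetical parameters, not the
# paper's model): express each geometric measurement as a dimensionless
# ratio against a reference length. Any two designs with the same
# ratios are then treated as the same case, so the analysis can focus
# on the remaining, non-normalized variables.

def normalize_geometry(width_mm, spacing_mm, reference_mm):
    """Return mesh geometry as dimensionless ratios of a reference length."""
    return {
        "width_ratio": width_mm / reference_mm,
        "spacing_ratio": spacing_mm / reference_mm,
    }

# Two meshes at different absolute scales collapse onto the same point:
a = normalize_geometry(width_mm=0.2, spacing_mm=1.0, reference_mm=10.0)
b = normalize_geometry(width_mm=0.4, spacing_mm=2.0, reference_mm=20.0)
print(a == b)  # True
```

This mirrors the finding reported below: once certain variables are held constant through normalization, differences in outcome can be attributed to the remaining geometric choices.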

The team found that the normalization of different variables provided several avenues for customizing the antenna's performance. They also found that the simulated geometry of the mesh could produce different outcomes, even with the same set of normalized variables.

Though the researchers analyzed wearable antenna properties, Cheng emphasized that their methods could be applied to other radio frequency devices.

"We've shown that you don't have to be limited to exploring the effects of one normalized variable," Cheng said. "Using this method, we can tailor the properties for other antennae or devices that communicate using microwaves."

Looking toward the future

Cheng and his collaborators will continue to research ways to facilitate the development of these devices through application-based studies as well as further fundamental explorations to optimize the design process.

"We are really excited that this research could one day lead to networks of sensors and transmitters worn on the body, all communicating with each other and external devices," Cheng said. "What we're imagining is science fiction at the moment, but we are working to make it happen."

Credit: 
Penn State

Northern Star Coral study could help protect tropical corals

image: Close-up of a Northern Star Coral (Astrangia poculata) colony taken from a microscope in the laboratory at Roger Williams University, Rhode Island.

Image: 
Alicia Schickle

As the Rhode Island legislature considers designating the Northern Star Coral an official state emblem, researchers are finding that studying this local creature's recovery from a laboratory-induced stressor could help better understand how to protect endangered tropical corals.

A new study published today in mSystems, a journal of the American Society for Microbiology, investigates antibiotic-induced disturbance of the coral (Astrangia poculata) and shows that antibiotic exposure significantly altered the composition of the coral's mucus bacterial microbiome, but that all the treated corals recovered in two weeks in ambient seawater.

The stony Northern Star Coral naturally occurs off the coast of Rhode Island and other New England states, in brown colonies hosting high densities of a symbiotic dinoflagellate alga (the symbiotic state) and in white colonies with low densities (the aposymbiotic state). The study found that corals with algal symbionts, organisms embedded within the coral's tissue that tropical corals require to survive, recovered their mucus microbiomes more consistently and more quickly.

The study also identified six bacterial taxa that played a prominent role in reassembling the coral back to its healthy microbiome. This is the first microbiome manipulation study on this coral.

"The work is important because it suggests that this coral may be able to recover its mucus microbiome following disturbance, it identifies specific microbes that may be important to assembly, and it demonstrates that algal symbionts may play a previously undocumented role in the microbial recovery and resilience to environmental change," the paper states.

With thermal bleaching and disease posing major threats to tropical corals, this research, along with other work on tropical corals, "provides a major step toward identifying the microbiome's roles in maintaining coral resilience," the paper notes.

"We think that the algae are helping the coral select the microbes that live with it, and this suggestion of symbiont-microbe coordination following disturbance is a new concept for corals," said paper co-author Amy Apprill, associate scientist at the Woods Hole Oceanographic Institution.

"Worldwide, coral reefs are in crisis. Any time we see corals recover, that's always good news. It shows that they can combat a stressor and figure out how to become healthy again," said Apprill. "What we found here is translatable to tropical corals which are faced with different stressors, such as warming water, disease, and pollution. This paper suggests that the symbiotic algae play a major role in providing consistency and resilience to the coral microbiome."

"When we think about corals, it's usually assumed that we're thinking about the tropics and the bright blue water and where it's warm, sunny, and sandy. However, the Northern Star Coral lives in murkier and much colder waters, yet it can still teach us a lot about expanding our understanding of corals," said lead author Shavonna Bent, a student in the MIT-WHOI Joint Program in Oceanography/Applied Ocean Science and Engineering.

The Northern Star Coral is an ideal emblem for Rhode Island, said paper co-author Koty Sharp, an associate professor at Roger Williams University who is leading the effort for official designation of the coral. The coral is small like the state; it's New England-tough in dealing with large temperature fluctuations; and it's a local, offering plenty of insight that can help address global problems.

Committees from both the Rhode Island House and Senate have held hearings on the proposed legislation. The Senate has approved the bill, and the House could vote on it in the coming month. Assuming the House also approves the bill, it will be sent to Rhode Island Gov. Daniel McKee for signing into law.

Sharp said the designation effort has a big educational component. "If designating this as a state emblem allows us to teach more people about the power of basic research to support conservation, or if this allows us to teach a generation of school children about the local animals that live around them, then this state coral will have a great deal of value," she said.

Credit: 
Woods Hole Oceanographic Institution