Research news tip sheet: Story ideas from Johns Hopkins Medicine

PROGRESSIVE KIDNEY DISEASE MAY BE PREDICTED BY PROTEINS IN URINE

Media Contact: Michael E. Newman, mnewma25@jhmi.edu

In a study of people who were diagnosed during hospitalization with the short-term but serious disorder called acute kidney injury (AKI), Johns Hopkins Medicine researchers showed that the levels of three proteins isolated from the urine of these patients could serve as biomarkers to predict the likelihood of progression to chronic kidney disease (CKD), kidney failure -- also called end-stage renal disease (ESRD) -- or death.

The findings are reported in the Feb. 1, 2021, issue of The Journal of Clinical Investigation.

AKI, as described by the National Kidney Foundation, is a "sudden episode of kidney failure or kidney damage that happens within a few hours or a few days." It causes waste products to build up in the blood, making it hard for the kidneys to maintain the correct balance of fluids in the body.

Symptoms of AKI differ depending on the cause and may include: too little urine leaving the body; swelling in the legs and ankles, and around the eyes; fatigue; shortness of breath; confusion; nausea; chest pain; and in severe cases, seizures or coma. The disorder is most commonly seen in patients in the hospital whose kidneys are affected by medical and surgical stress and complications.

"Although many studies have investigated biomarkers to detect AKI in its early stages and forecast the short-term outcomes of the condition, little research has been devoted to examining biomarkers for their ability to predict long-term kidney function," says Chirag Parikh, Ph.D., director of the Division of Nephrology at the Johns Hopkins University School of Medicine and the study's senior author. "We looked at three proteins easily measured from urine -- and known to be altered in response to kidney inflammation or damage -- to see if they could be effective in making those predictions."

The three proteins evaluated were monocyte chemoattractant protein 1 (MCP-1), also known as C-C motif chemokine ligand 2 (CCL2); uromodulin (UMOD); and YKL-40, also known as chitinase 3-like 1 (CHI3L1). Their levels in urine were measured for each of 1,538 study participants -- half of whom were diagnosed with AKI during their hospitalizations -- at three months after release from the hospital. Following this baseline measurement, the patients were followed for an extended period of time (median: 4.3 years) to see how many progressed to CKD or ESRD. Throughout the monitoring period, the researchers assessed the relationship between the baseline biomarker levels for each patient and changes in estimated glomerular filtration rate (eGFR), a measure of kidney function (a low number indicates poor performance); the development of CKD or ESRD; or death from kidney failure.
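For readers unfamiliar with eGFR: it is typically estimated from serum creatinine, age and sex. As a rough illustration only -- the study does not state which equation it used, and the patient values below are hypothetical -- the race-free CKD-EPI 2021 creatinine equation can be sketched as:

```python
# Sketch of the CKD-EPI 2021 creatinine equation for estimated GFR
# (mL/min/1.73 m^2). Illustrative only; the study may have used a
# different eGFR equation, and the patient values below are hypothetical.
def egfr_ckd_epi_2021(scr_mg_dl, age_years, female):
    kappa = 0.7 if female else 0.9      # sex-specific creatinine threshold
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age_years)
    if female:
        egfr *= 1.012
    return egfr

# Hypothetical 60-year-old man with serum creatinine 1.2 mg/dL
print(round(egfr_ckd_epi_2021(1.2, 60, female=False)))  # 69 -- mildly reduced
```

Lower outputs indicate poorer kidney function, which is why a declining eGFR over the follow-up period signals progression toward CKD or ESRD.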

The relationship between baseline biomarker levels and composite kidney outcome (development of CKD or ESRD) was defined using a statistical model that produces a hazard ratio -- a measure over time of how often specific events (in this case, declining kidney performance) happen in a study group (the patients with AKI during their hospitalizations) compared with their frequency in a control group (the patients without AKI during their hospitalizations). In this study, a hazard ratio of 1 suggests no difference between the groups, and a ratio greater than 1 indicates a greater likelihood of a poor composite kidney outcome. Likewise, a ratio less than 1 shows a decreased chance.
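The interpretation of a hazard ratio can be sketched with a toy calculation. The numbers below are hypothetical and this is not the study's actual Cox regression; under a constant-hazard assumption, a hazard ratio is approximately the ratio of event rates per person-year in the two groups:

```python
# Toy illustration of a hazard ratio (hypothetical numbers, not the
# study's actual model): under constant hazards, the ratio of event
# rates per person-year approximates the hazard ratio.
def event_rate(events, person_years):
    return events / person_years

aki_rate = event_rate(events=120, person_years=2000)      # AKI group
control_rate = event_rate(events=90, person_years=2000)   # non-AKI group
hazard_ratio = aki_rate / control_rate
print(round(hazard_ratio, 2))  # 1.33 -- events occur ~33% more often in the AKI group
```

A value of 1.33 here would be read the same way as the study's MCP-1 ratio of 1.32: the event of interest occurs roughly a third more often in the exposed group.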

The researchers found that higher MCP-1 and YKL-40 levels in patients with AKI during hospitalization were associated with increased progression of the acute condition to CKD or ESRD. The hazard ratio for MCP-1 was 1.32 while the ratio for YKL-40 was 1.15. Both baseline protein measures also were associated with progressively declining eGFR during the observation period.

The opposite was observed for those whose baseline urine samples had higher UMOD levels, with a hazard ratio of 0.85 for CKD or ESRD development. Higher UMOD also predicted less chance of declining renal function over time.

To confirm their results, Parikh and his colleagues studied mice in which AKI was followed by either renal atrophy (shrinking of the kidney, which models progressive kidney decline) or repair. In the mice with atrophy, the researchers observed more activity by the genes that produce MCP-1 and YKL-40. In the repair mice, there was more production of UMOD. This, the researchers say, suggests that MCP-1 and YKL-40 may hinder the ability of kidneys to repair damage caused by AKI, setting the stage for progression to more serious kidney disease. On the other hand, they say UMOD production may enhance recovery.

"Based on our findings, the three proteins we studied show great promise as biomarkers for predicting the risk of CKD or ESRD following AKI, and with more research to prove their abilities, they may become valuable screening tools for physicians in the future," says Parikh.

Parikh is available for interviews.

PEOPLE WITH SERIOUS MENTAL ILLNESS NOT LIKELY TO RECEIVE ADDICTION TREATMENT

Media Contact: Marisol Martinez, mmart150@jhmi.edu

In an analysis of records from two Baltimore community mental health centers, Johns Hopkins Medicine researchers found that people with serious mental illnesses such as schizophrenia or severe bipolar disorder were 20 times more likely to use heroin than the general population. The researchers also discovered that only one in seven of these patients received medication for opioid addiction. The researchers note that this may be in part because addiction treatment programs weren't designed with serious mental illness in mind.

In their study published in the February 2021 issue of Psychiatry Research, the investigators note that specialized treatment programs and greater awareness among addiction treatment providers of underlying mental health conditions may be required to meet this underserved population's needs. More aggressive treatment of the mental disorders also may be necessary to reduce the disparity.

Evidence-based programs for treating substance use disorders employ medications such as buprenorphine or methadone, as well as services such as group therapy, doctor appointments and drug testing.

"People with serious mental disorders don't do well in this type of structured treatment setting. They may not be organized enough or may seem distracted, they may feel uncomfortable in groups or they may make other people uncomfortable," says study senior author Stanislav Spivak, M.D., assistant professor of psychiatry and behavioral sciences at the Johns Hopkins University School of Medicine. "Similarly, the negative symptoms that accompany serious mental disorders such as apathy, ambivalence or social withdrawal, can decrease these patients' ability to fully participate in treatment."

"Providers who treat substance use disorders need to be made aware of these symptoms as potentially being related to an underlying mental illness, and they shouldn't dismiss these patients as disinterested or intentionally trying to be difficult," he explains.

After reviewing the records of 271 patients with severe mental illness, researchers found that 32% said they used heroin compared with 1.6% of the general population reporting heroin use. Of those patients with a history of drug use, 15% were treated with medications typically prescribed to treat substance abuse. About 59% of these patients were taking at least one antipsychotic medication, and those people were four times more likely to also be treated for addiction. Patients who scored high on questionnaires measuring social avoidance symptoms were less likely to be treated for their substance abuse.
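The reported proportions can be checked directly (figures taken from the article):

```python
# Quick check of the proportions reported above.
heroin_smi = 0.32       # heroin use among patients with serious mental illness
heroin_general = 0.016  # heroin use reported in the general population
print(round(heroin_smi / heroin_general))  # 20 -> "20 times more likely"

# 15% of drug users received addiction medication, i.e. roughly one in seven
print(round(1 / 0.15, 1))  # 6.7 -> about one patient in seven
```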

"There are likely many factors as to why drug use is so much higher in people with severe mental illness," says study lead author (and senior author Spivak's wife) Amethyst Spivak, J.D., who started out as a researcher at the Johns Hopkins University and is now a board member of the National Trafficking Shelter Alliance. "Some of it may be due to self-medication. However, much of the issue lies in that this is a vulnerable group of people. They may have poor coping skills, are more likely to be exposed to drugs and may be less likely to push back when offered drugs."

She adds that exposure to drugs is greater in low income neighborhoods and, unfortunately, people with serious mental illness are more likely to be impoverished.

Because this was a small study focused on one city, the researchers say they now need to see if these trends are representative of people with mental illness across the nation.

The researchers note that this work was made possible by a large gift from an anonymous donor.

Spivak is available for interviews.

STUDY SUGGESTS ALTRUISM CORRELATED WITH PATIENTS CHOOSING CHEAPER TREATMENTS

Media Contact: Rachel Butch; rbutch1@jhmi.edu

In a medical record review of 189 patients seen for vascular retinal eye diseases at the Johns Hopkins Medicine Wilmer Eye Institute, researchers found that patients who exhibited a marker for altruism were more likely to choose a less expensive, potentially less effective drug to treat their eye condition. The full reasons behind this choice are unknown, but the researchers say this small study could inform how physicians present treatment options to their patients.

The study was published Feb. 22, 2021, in JAMA Network Open.

In this study, patients were asked to choose one of two therapies: bevacizumab (Avastin) or aflibercept (Eylea). The former is a chemotherapy used to treat cancer -- but not approved by the U.S. Food and Drug Administration (FDA) to treat ocular vascular disease -- that costs approximately $100 per dose. The latter is an FDA-approved drug for treatment of numerous eye diseases that costs about $2,000 per dose. Some studies have shown that aflibercept may be more effective than bevacizumab for treating eye disease.

Separate from this choice, patients were asked if they wanted to participate in a clinical trial to identify future treatments for their eye disease. The patients were not compensated for participation in the clinical study. Because participation required an invasive procedure that would benefit the research community but not the individual patient, the researchers used it as an indication that the volunteers were motivated by altruism. A total of 125 of the 189 patients studied (approximately 66%) volunteered for the trial.

Controlling for factors such as age, race, economic status and health insurance, the researchers found that altruistic patients were 24% more likely to choose bevacizumab. These results suggest that for some patients, altruism may be a driving factor in the decision to select a potentially less clinically effective, but more cost-effective medicine for their own health care.

"It raises ethical questions for how or when to ask patients to make these decisions if a certain segment of the population is willing to make a potential sacrifice for the greater good," says Akrit Sodhi, M.D., Ph.D., the Branna and Irving Sisenwein Professor in Ophthalmology at the Johns Hopkins University School of Medicine.

Overall, personality traits like altruism are flexible and can change from situation to situation. Sodhi and his colleagues say more research is needed to determine how altruism may influence health care decisions.

Sodhi is available for interviews.

RESEARCHERS LEARN HOW EXERCISE DAMAGES HEART TISSUE IN GENETIC CONDITION

Media Contact: Brian Waters, bwaters3@jhmi.edu

Although exercise can lead to sudden death in people with genetic heart rhythm disorders, the cellular and molecular mechanisms behind the process haven't been pinned down. Now, using mice with one of these heart rhythm disorders, Johns Hopkins Medicine researchers and their colleagues have teased out the intricate biological steps leading to heart cell death during exercise. These steps, they report, can create a buildup of scar tissue in the heart that interrupts the electrical waves that propagate heartbeats.

In their study published Feb. 17, 2021, in Science Translational Medicine, the researchers used a synthetic peptide -- a laboratory-produced fragment of a protein -- to prevent heart cells from dying in simulated exercise. The results of their findings suggest that researchers one day may develop therapies that enable people with genetic heart diseases to work out and participate in other physical activities.

"We've always thought that exercise promotes more arrhythmias, or irregular heartbeats, in people with these genetic heart rhythm disorders, but now we've demonstrated why exercise is bad for them on a cellular level," says Stephen Chelko, Ph.D., an adjunct assistant professor of medicine at the Johns Hopkins University School of Medicine and assistant professor of biomedical sciences at Florida State University.

Using a mouse with the second most common mutation for arrhythmogenic cardiomyopathy -- one of the genetic heart rhythm disorders -- the researchers first showed that when these mice swam for exercise, it caused heart cells to accumulate calcium ions. This pushed the cells into a programmed death, or suicide, known as apoptosis (a natural process to remove old or damaged cells). They found that exercise activates the protein calpain, which upon moving to the mitochondria in a cell -- traditionally known as the cell's power factory -- triggers another protein: apoptosis-inducing factor (AIF). AIF is normally a harmless component of the mitochondria. However, when it's clipped by calpain, AIF migrates outside of the mitochondria and docks to the cell nucleus. There, it induces the DNA inside the nucleus to break apart, ultimately causing heart muscle cell death.

"We believe that this mechanism is not something specific to this particular disease, but may be at the core of other cardiomyopathies and, very likely, many other heart diseases," says Nazareno Paolocci, M.D., Ph.D., associate professor of medicine at the Johns Hopkins University School of Medicine. "We think that understanding how this works in great detail will enable us to develop therapeutics to treat these types of genetic heart disease."

To gain this insight, the researchers collaborated with Fabio Di Lisa, M.D., of the University of Padua and Nunzianna Doti, Ph.D., and Menotti Ruvo, Ph.D., at the National Research Council of Italy. The team developed a protein fragment made to look like a portion of AIF and put it inside mouse heart cells grown in the laboratory. After treating the cells with calcium ions and hormones from the adrenal glands to simulate conditions during exercise, the researchers observed that the mimic protein fragment blocked the activator and prevented the heart cell suicide from occurring.

Based on this finding, the researchers next plan to see if the mimic protein fragment can protect heart cells during exercise in live mice bred with the mutation for genetic heart disease.

Chelko and Paolocci are available for interviews.

IMMUNE PROTEIN MAY LINK CHRONIC INFLAMMATION AND FRAILTY IN OLDER ADULTS

Media Contact: Waun'Shae Blount, wblount1@jhmi.edu

Chronic inflammation in people age 65 and older may be marked by frequent infections, pain, injuries and slow-healing wounds. To make matters worse, the negative impact of chronic inflammation on older adults is often compounded by frailty -- the state of aging characterized by weakness, weight loss, poor balance and other symptoms that make older adults among the most vulnerable to accidents, mobility issues, poor outcomes following illnesses, and death.

In recent years, medical researchers have proposed that the dangerous connection between chronic inflammation and frailty may be due to a protein called interleukin-6 (IL-6). IL-6 is a cytokine, a molecule produced by immune system cells to help regulate the body's response to injury or infection. It is one of the main stimulators of inflammation and fever, two of the mechanisms the immune system uses to restore health.

What isn't known is how IL-6 contributes to the loss of physical ability commonly associated with increasing frailty during aging. To better understand this role, a Johns Hopkins Medicine research team genetically engineered a new mouse model that develops chronic inflammation but lacks IL-6. The model can be used to determine if the absence of IL-6 during uncontrollable chronic inflammation is enough to protect against the physical and functional decline observed with age.

The team's findings were published in the February 2021 issue of The Journals of Gerontology, Series A.

To characterize their new mouse model, the researchers used a state-of-the-art metabolomic profiling assay that enabled them to define the unique chemical fingerprints involved in the development of chronic inflammation without IL-6.

They also used an advanced dynamic positron emission tomography-computed tomography (PET-CT) system to determine how the absence of IL-6 (with presence of inflammation) impacts energy production by mitochondria (the cell's "energy factory") in the hearts of the mice. Decline in energy production during aging is considered a major contributor to many age-related illnesses including heart failure and Alzheimer's disease.

What the researchers found in frail mice lacking IL-6 were increases in circulating fat compounds -- lysolecithins -- that are important in maintaining healthy mitochondrial function and decrease with age in humans. The frail mice lacking IL-6 also had increased heart energy production compared to the frail mice that produced IL-6. Together, these findings show that improvements in mitochondrial function occurred in frail mice when IL-6 was not present.

To determine if the enhancements in mitochondrial function translated to improvements in physical performance, the researchers used functional assessment tools commonly reserved for humans, such as running on a treadmill and grip strength. Treadmill running was chosen because of its similarity to human cardiac stress tests and because it assesses several systems at once: cardiovascular, skeletal muscle and pulmonary.

"Frail mice without IL-6 had short-term improvements in running and fewer falls off the treadmill, but this improvement disappeared after three days," says study senior author Peter Abadir, M.D., associate professor of medicine at the Johns Hopkins University School of Medicine. "Surprisingly, and perhaps counterintuitively, we observed dramatically higher mortality in these mice in the presence of chronic inflammation -- as high as a fourfold increase compared with nonmodified mice and with mice that developed chronic inflammation but could still produce IL-6."

The researchers say these results suggest multiple impacts for IL-6. While the deletion or absence of IL-6 may improve some molecular and physical functions, its absence in the context of stress or chronic inflammation also may precipitate a quick decline in health and eventually, death.

Hints of similar effects can be gleaned from clinical studies of human patients with autoimmune disorders who were taking treatments with antibodies against TNF alpha, a cytokine similar to IL-6. When these patients develop fever or signs of infection, clinicians often withhold their medication to allow the body to mount an immune response.

The researchers say their findings in frail mice suggest a delicate balance exists between aging and chronic inflammation, and that IL-6 may be needed to maintain long-term exercise ability and prevent premature death. Therefore, they caution physicians to keep this balance in mind when prescribing drugs to reverse age-related increases in IL-6 levels.

Abadir is available for interviews.

Credit: 
Johns Hopkins Medicine

Scientists investigate 3D-printed high-entropy alloys

Scientists from the Skoltech Center for Design, Manufacturing and Materials (CDMM) and the Institute for Metals Superplasticity Problems (IMSP RAS) have studied the fatigue behavior of additive-manufactured high-entropy alloys (HEAs). The research was published in the Journal of Alloys and Compounds.

Conventional 20th century materials that are extensively used in industries and mechanical engineering have reached their performance limit. Nowadays, alloying is commonly used to improve the alloys' mechanical performance and increase their operating temperature. An alternative to alloying, HEAs containing equal atomic fractions of their constituent elements were first obtained in 2004. Since then, various publications have offered ample evidence of excellent mechanical performance of HEAs over a broad temperature range. Most of the characteristics were demonstrated for HEAs manufactured by traditional metallurgy techniques.

"Looking at the properties of additive-manufactured HEAs, we discovered that previous research was focused on the static characteristics of printed HEAs. However, from the standpoint of practical applications, it is essential to study the properties of HEAs under cyclic loads," explains Stanislav Ye Evlashin, a leading research scientist at Skoltech.

In their recent work, the team studied the fatigue properties of CrFeCoNi, an alloy produced by the Laser-Powder Bed Fusion (L-PBF) technique, building on previous HEA research.

"In our new study, we have shown that the annealing of printed samples reduces residual stress, improves plasticity and slightly decreases yield strength. We have also demonstrated that machining removes surface defects and extends the fatigue life," says Yulia Kuzminova, a PhD student at CDMM.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Study: Bahamas were settled earlier than believed

Humans were present in Florida by 14,000 years ago, and until recently, it was believed the Bahamas - located only about 50 miles away - were not colonized until about 1,000 years ago. But new findings from a team including a Texas A&M University at Galveston researcher prove that the area was colonized earlier, and the new settlers dramatically changed the landscape.

Peter van Hengstum, associate professor in the Department of Marine and Coastal Environment Science at Texas A&M-Galveston, and colleagues have had their findings published in PNAS (Proceedings of the National Academy of Sciences).

Researchers generated a new environmental record from the Blackwood Sinkhole, which is flooded with 120 feet of groundwater without dissolved oxygen. This is important because it has pristinely preserved organic material for the last 3,000 years. Using core samples and radiocarbon dating, the team examined charcoal deposits from human fires thousands of years ago, indicating that the first settlers arrived in the Bahamas sooner than previously thought.
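As a hedged illustration of how radiocarbon dating converts a measurement into an age (the study's actual workflow involves calibration and is more involved), a conventional radiocarbon age is computed from the fraction of modern carbon-14 remaining in a sample using the Libby mean life of 8,033 years; the sample value below is hypothetical:

```python
import math

# Conventional radiocarbon age: t = -8033 * ln(F), where F is the measured
# fraction of modern carbon-14 remaining. Illustrative sketch only; real
# dates are then calibrated against tree-ring and other records.
def radiocarbon_age(fraction_modern):
    return -8033 * math.log(fraction_modern)

# Hypothetical charcoal sample retaining 86% of modern carbon-14
print(round(radiocarbon_age(0.86)))  # 1212 -> ~1,212 years before present
```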

"The Bahamas were the last place colonized by people in the Caribbean region, and previous physical evidence indicated that it may have taken hundreds of years for indigenous people of the Bahamas - called the Lucayans - to move through the Bahamian archipelago that spans about 500 miles," van Hengstum said.

While people were present in Florida more than 14,000 years ago at the end of the last ice age, he said, these people never crossed the Florida Straits to nearby Bahamian islands, only 50 to 65 miles away. Meanwhile, the Caribbean islands were populated by people migrating from South America northward. Van Hengstum said the oldest archaeological sites in the southernmost Bahamian archipelago from the Turks and Caicos Islands indicate human arrival likely by 700 A.D.

"But in the northern Bahamian Great Abaco Island, the earliest physical evidence of human occupation are skeletons preserved in sinkholes and blueholes," he said. "These two skeletons from Abaco date from 1200 to 1300 A.D. Our new record of landscape disturbance from people indicates that slash-and-burn agriculture likely began around 830 A.D., meaning the Lucayans rapidly migrated through the Bahamian archipelago in likely a century, or spanning just a few human generations."

The team's other findings show how the Lucayans changed the new land.

When the Lucayans arrived, Great Abaco Island was mostly covered with pine and palm forests, and had a unique reptile-dominated ecosystem of giant tortoises and crocodiles. Increased deforestation and burning allowed pine trees to colonize and out-compete native palms and hardwoods.

Large land reptiles began to disappear after 1000 A.D. A significant increase in intense regional hurricane activity around 1500 A.D. is thought to have caused considerable damage to the new pine tree forests, as indicated by a decrease in pine pollen in the sediment core.

"The pollen record indicates that the pre-contact forest was not significantly impacted earlier in the record during known times when intense hurricane strike events were more frequent," van Hengstum said. "In our current world where the intensity of the largest hurricanes is expected to increase over the coming decades, the current pine trees in the northern Bahamas may not be as resilient to environmental impacts of these changes in hurricane activity."

Credit: 
Texas A&M University

Current issue articles for Geosphere posted online in February

Boulder, Colo., USA: GSA’s dynamic online journal, Geosphere, posts articles online regularly. Topics for articles posted for Geosphere this month include “a tale of five enclaves”; evidence for mantle and Moho in the Baltimore Mafic Complex (Maryland, USA); and the aftereffects of the 1964 Mw 9.2 megathrust rupture, Alaska.

From Ordovician nascent to early Permian mature arc in the southern
Altaids: Insights from the Kalatage inlier in the Eastern Tianshan, NW
China

Qigui Mao; Jingbin Wang; Wenjiao Xiao; Brian F. Windley; Karel Schulmann
...

Abstract:
The Kalatage inlier in the Dananhu-Haerlik arc is one of the most important
arcs in the Eastern Tianshan, southern Altaids (or Central Asian orogenic
belt). Based on outcrop maps and core logs, we report 16 new U-Pb dates in
order to reconstruct the stratigraphic framework of the Dananhu-Haerlik
arc. The new U-Pb ages reveal that the volcanic and intrusive rocks formed
in the interval from the Ordovician to early Permian (445–299 Ma), with the
oldest diorite dike at 445 ± 3 Ma and the youngest rhyolite at 299 ± 2 Ma.
These results constrain the ages of the oldest basaltic and volcaniclastic
rocks of the Ordovician Huangchaopo Group, which were intruded by granite-
granodiorite-diorite plutons in the Late Ordovician to middle Silurian
(445–426 Ma). The second oldest components are intermediate volcanic and
volcaniclastic rocks of the early Silurian Hongliuxia Formation (S1h), which lies unconformably on the Huangchaopo Group
and is unconformably overlain by Early Devonian volcanic rocks (416 Ma).
From the mid- to late Silurian (S2-3), all the rocks were
exhumed, eroded, and overlain by polymictic pyroclastic deposits. Following
subaerial to shallow subaqueous burial at 416–300 Ma by intermediate to
felsic volcanic and volcaniclastic rocks, the succession was intruded by
diorites, granodiorites, and granites (390–314 Ma). The arc volcanic and
intrusive rocks are characterized by potassium enrichment as they
evolved from mafic to felsic and from tholeiitic via transitional and
calc-alkaline to final high-K calc-alkaline compositions with relatively
low initial Sr values, (87Sr/86Sr)i =
0.70391–0.70567, and positive εNd(t) values, +4.1 to
+9.2. These new data suggest that the Dananhu-Haerlik arc is a long-lived
arc that consequently requires a new evolutionary model. It began as a
nascent (immature) intra-oceanic arc in the Ordovician to early Silurian,
and it evolved into a mature island arc in the middle Silurian to early
Permian. The results suggest that the construction of a juvenile-to-mature
arc, in combination with its lateral attachment to an incoming arc or
continent, was an important crustal growth mechanism in the southern
Altaids.

View article:

https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02232.1/594553/From-Ordovician-nascent-to-early-Permian-mature

A tale of five enclaves: Mineral perspectives on origins of mafic
enclaves in the Tuolumne Intrusive Complex

C.G. Barnes; K. Werts; V. Memeti; S.R. Paterson; R. Bremer

Abstract:
The widespread occurrence of mafic magmatic enclaves (mme) in arc volcanic
rocks attests to hybridization of mafic-intermediate magmas with felsic
ones. Typically, mme and their hosts differ in mineral assemblage and the
compositions of phenocrysts and matrix glass. In contrast, in many arc
plutons, the mineral assemblages in mme are the same as in their host
granitic rocks, and major-element mineral compositions are similar or
identical. These similarities lead to difficulties in identifying mixing
end members except through the use of bulk-rock compositions, which
themselves may reflect various degrees of hybridization and potentially
melt loss. This work describes the variety of enclave types and occurrences
in the equigranular Half Dome unit (eHD) of the Tuolumne Intrusive Complex
and then focuses on textural and mineral composition data on five
porphyritic mme from the eHD. Specifically, major- and trace-element
compositions and zoning patterns of plagioclase and hornblende were
measured in the mme and their adjacent host granitic rocks. In each case,
the majority of plagioclase phenocrysts in the mme (i.e., large crystals)
were derived from a rhyolitic end member. The trace-element compositions
and zoning patterns in these plagioclase phenocrysts indicate that each mme
formed by hybridization with a distinct rhyolitic magma. In some cases,
hybridization involved a single mixing event, whereas in others, evidence
for at least two mixing events is preserved. In contrast, some hornblende
phenocrysts grew from the enclave magma, and others were derived from the
rhyolitic end member. Moreover, the composition of hornblende in the
immediately adjacent host rock is distinct from hornblende typically
observed in the eHD. Although primary basaltic magmas are thought to be
parental to the mme, little or no evidence of such parents is preserved in
the enclaves. Instead, the data indicate that hybridization of already
hybrid andesitic enclave magmas with rhyolitic magmas in the eHD involved
multiple andesitic and rhyolitic end members, which in turn is consistent
with the eHD representing an amalgamation of numerous, compositionally
distinct magma reservoirs. This conclusion applies to enclaves sampled
<30 m from one another. Moreover, during amalgamation of various
rhyolitic reservoirs, some mme were evidently disrupted from a surrounding
mush and thus carried remnants of that mush as their immediately adjacent
host. We suggest that detailed study of mineral compositions and zoning in
plutonic mme provides a means to identify magmatic processes that cannot be
deciphered from bulk-rock analysis.

View article:

https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02233.1/594552/A-tale-of-five-enclaves-Mineral-perspectives-on

Suprasubduction zone ophiolite fragments in the central Appalachian
orogen: Evidence for mantle and Moho in the Baltimore Mafic Complex
(Maryland, USA)

George L. Guice; Michael R. Ackerson; Robert M. Holder; Freya R. George;
Joseph F. Browning-Hanson ...

Abstract:
Suprasubduction zone (SSZ) ophiolites of the northern Appalachians (eastern
North America) have provided key constraints on the fundamental tectonic
processes responsible for the evolution of the Appalachian orogen. The
central and southern Appalachians, which extend from southern New York to
Alabama (USA), also contain numerous ultramafic-mafic bodies that have
been interpreted as ophiolite fragments; however, this interpretation is a
matter of debate, with the origin(s) of such occurrences also attributed to
layered intrusions. These disparate proposed origins, alongside the range
of possible magmatic affinities, have varied potential implications for the
magmatic and tectonic evolution of the central and southern Appalachian
orogen and its relationship with the northern Appalachian orogen. We
present the results of field observations, petrography, bulk-rock
geochemistry, and spinel mineral chemistry for ultramafic portions of the
Baltimore Mafic Complex, which refers to a series of ultramafic-mafic
bodies that are discontinuously exposed in Maryland and southern
Pennsylvania (USA). Our data indicate that the Baltimore Mafic Complex
comprises SSZ ophiolite fragments. The Soldiers Delight Ultramafite
displays geochemical characteristics—including highly depleted bulk-rock
trace element patterns and high Cr# of spinel—characteristic of
subduction-related mantle peridotites and serpentinites. The Hollofield
Ultramafite likely represents the “layered ultramafics” that form the Moho.
Interpretation of the Baltimore Mafic Complex as an Iapetus Ocean–derived
SSZ ophiolite in the central Appalachian orogen raises the possibility that
a broadly coeval suite of ophiolites is preserved along thousands of
kilometers of orogenic strike.

View article:

https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02289.1/594550/Suprasubduction-zone-ophiolite-fragments-in-the

Detrital zircon petrochronology of central Australia, and implications
for the secular record of zircon trace element composition

Charles Verdel; Matthew J. Campbell; Charlotte M. Allen

Abstract:
Hafnium (Hf) isotope composition of zircon has been integrated with U-Pb
age to form a long-term (>4 b.y.) record of the evolution of the crust.
In contrast, trace element compositions of zircon are most commonly
utilized in local- or regional-scale petrological studies, and the most
noteworthy applications of trace element studies of detrital zircon have
been in “fingerprinting” potential source lithologies. The extent to which
zircon trace element compositions varied globally over geological time
scales (as, for example, zircon U-Pb age abundance, O isotope composition,
and Hf isotope composition seem to have varied) has been little explored,
and it is a topic that is well suited to the large data sets produced by
detrital zircon studies. In this study we present new detrital zircon U-Pb
ages and trace element compositions from a continent-scale basin system in
Australia (the Centralian Superbasin) that bear directly on the Proterozoic
history of Australia and which may be applicable to broader interpretations
of plate-tectonic processes in other regions. U-Pb ages of detrital zircon
in the Centralian Superbasin are dominated by populations of ca. 1800,
1600, 1200, and 600 Ma, and secular variations of zircon Hf isotope ratios
are correlated with some trace element parameters between these major age
populations. In particular, elevated εHf(i) (i.e., radiogenic
“juvenile” Hf isotope composition) of detrital zircon in the Centralian
Superbasin tends to correspond with relatively high values of Yb/U, Ce
anomaly, and Lu/Nd (i.e., depletion of light rare earth elements). These
correlations seem to be fundamentally governed by three related factors:
elemental compatibility in the continental crust versus mantle, the
thickness of continental crust, and the contributions of sediment to
magmas. Similar trace element versus εHf(i) patterns among a
global zircon data set suggest broad applicability. One particularly
intriguing aspect of the global zircon data set is a late Neoproterozoic to
Cambrian period during which both zircon εHf(i) and Yb/U reached
minima, marking an era of anomalous zircon geochemistry that was related to
significant contributions from old continental crust.

View article:

https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02300.1/594551/Detrital-zircon-petrochronology-of-central

Along-strike variations in protothrust zone characteristics at the
Nankai Trough subduction margin

Hannah L. Tilley; Gregory F. Moore; Mikiya Yamashita; Shuichi Kodaira

Abstract:
Significant along-strike changes in the protothrust zone at the toe of the
Nankai Trough accretionary prism were imaged in new high-resolution seismic
reflection data. The width of the protothrust zone varies greatly along
strike; two spatially discrete segments have a wide protothrust zone
(∼3.3–7.8 km, ∼50–110 protothrusts), and two segments have almost no
protothrust zone (∼0.5–2.8 km, <20 protothrusts). The widest protothrust
zone occurs in the region with the widest and thickest sediment wedge and
subducting turbidite package, both of which are influenced by basement
topography. The trench wedge size and lithology, the lithology of the
subducting section, and the basement topography all influence the rate of
consolidation in the trench wedge, which we hypothesize is an important
control over the presence and width of the protothrust zone. We conclude
that protothrusts are fractures that form from shear surfaces in
deformation band clusters as the trench fill sediment is consolidated.
Strain localization occurs at sites with a high density of protothrusts,
which become the probable locations of future frontal thrust propagation.
The frontal thrust may propagate forward with a lower buildup of strain
where it is adjacent to a wide protothrust zone than at areas with a narrow
or no protothrust zone. This is reflected in the accretionary prism
geometry, where wide protothrust zones occur adjacent to fault-propagation
folds with shallow prism toe surface slopes.

View article:

https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02305.1/594554/Along-strike-variations-in-protothrust-zone

The Permian Monos Formation: Stratigraphic and detrital zircon evidence
for Permian Cordilleran arc development along the southwestern margin
of Laurentia (northwestern Sonora, Mexico)

Stephen C. Dobbs; Nancy R. Riggs; Kathleen M. Marsaglia; Carlos M.
González-León; M. Robinson Cecil ...

Abstract:
The southwestern margin of Laurentia transitioned from a left-lateral
transform margin to a convergent margin by middle Permian time, which
initiated the development of a subduction zone and subsequent Cordilleran
arc along western Laurentia. The displaced Caborca block was translated
several hundred kilometers from southern California, USA, to modern Sonora,
Mexico, beginning in Pennsylvanian time (ca. 305 Ma). The Monos Formation,
a ~600-m-thick assemblage of mixed bioclastic and volcaniclastic units
exposed in northwestern Sonora, provides lithostratigraphic, petrographic,
and geochronologic evidence for magmatic arc development associated with
subduction by middle Permian time (ca. 275 Ma). The Monos Formation was
deposited in a forearc basin adjacent to a magmatic arc forming along the
southwestern Laurentian margin. Detrital zircon U-Pb geochronology suggests
that Permian volcanic centers were the primary source for the Monos
Formation. These grains mixed with far-traveled zircons from both Laurentia
and Gondwana. Zircon age spectra in the Monos Formation are dominated by a
ca. 274 Ma population that makes up 65% of all analyzed grains. The
remaining 35% of grains range from 3.3 Ga to 0.3 Ga, similar to age spectra
from Permian strata deposited in the Paleozoic sequences in the western
continental interior. An abundance of Paleozoic through early
Neoproterozoic ages suggests that marginal Gondwanan sources from Mexico
and Central America also supplied material to the basin. The Monos
Formation was deposited within tropical to subtropical latitudes, yet
faunal assemblages are biosiliceous and heterotrophic. The lack of
photozoan assemblages suggests that cold-water coastal upwelling combined
with sedimentation from the Cordilleran arc and Laurentian continent
promoted conditions more suitable for fauna resilient to biogeochemically
stressed environments. We propose that transform faulting and displacement
of the Caborca block ceased by middle Permian time and a subduction zone
developed along the southwestern margin of Laurentia as early as early
Permian time. The Monos basin developed along the leading edge of the
continent as a magmatic arc developed, and facies indicate a consistent
shoaling trend over the span of deposition.

View article:

https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02320.1/595012/The-Permian-Monos-Formation-Stratigraphic-and

Detrital sanidine 40Ar/39Ar dating confirms <2 Ma age of Crooked
Ridge paleoriver and subsequent deep denudation of the southwestern
Colorado Plateau

Matthew T. Heizler; Karl E. Karlstrom; Micael Albonico; Richard Hereford;
L. Sue Beard ...

Abstract:
Crooked Ridge and White Mesa in northeastern Arizona (southwestern United
States) preserve, as inverted topography, a 57-km-long abandoned alluvial
system near the present drainage divide between the Colorado, San Juan, and
Little Colorado Rivers. The pathway of this paleoriver, flowing southwest
toward eastern Grand Canyon, has led to provocative alternative models for
its potential importance in carving Grand Canyon. The ~50-m-thick White
Mesa alluvium is the only datable record of this paleoriver system. We
present new 40Ar/39Ar sanidine dating that confirms a
ca. 2 Ma maximum depositional age for White Mesa alluvium, supported by a
large mode (n = 42) of dates from 2.06 to 1.76 Ma. Older grain
modes show abundant 37–23 Ma grains mostly derived ultimately from the San
Juan Mountains, as is also documented by rare volcanic and basement pebbles
in the White Mesa alluvium. A tuff with an age of 1.07 ± 0.05 Ma is inset
below, and hence provides a younger age bracket for the White Mesa
alluvium. Newly dated remnant deposits on Black Mesa contain similar 37–23
Ma grains and exotic pebbles, plus a large mode (n = 71) of 9.052
± 0.003 Ma sanidine. These deposits could be part of the White Mesa
alluvium without any Pleistocene grains, but new detrital sanidine data
from the upper Bidahochi Formation near Ganado, Arizona, have similar
maximum depositional ages of 11.0–6.1 Ma and show similar 40–20 Ma San Juan
Mountains–derived sanidine. Thus, we tentatively interpret the <9 Ma
Black Mesa deposit to be a remnant of an 11–6 Ma Bidahochi alluvial system
derived from the now-eroded southwestern fringe of the San Juan Mountains.
This alluvial fringe is the probable source for reworking of 40–20 Ma
detrital sanidine and exotic clasts into Oligocene Chuska Sandstone,
Miocene Bidahochi Formation, and ultimately into the <2 Ma White Mesa
alluvium. The <2 Ma age of the White Mesa alluvium does not support
models that the Crooked Ridge paleoriver originated as a late Oligocene to
Miocene San Juan River that ultimately carved across the Kaibab uplift.
Instead, we interpret the Crooked Ridge paleoriver as a 1.9–1.1 Ma
tributary to the Little Colorado River, analogous to modern-day Moenkopi
Wash. We reject the “young sediment in old paleovalley” hypothesis based on
mapping, stratigraphic, and geomorphic constraints. Deep exhumation and
beheading by tributaries of the San Juan and Colorado Rivers caused the
Crooked Ridge paleotributary to be abandoned between 1.9 and 1.1 Ma.
Thermochronologic data also provide no evidence for, and pose substantial
difficulties with, the hypothesis for an earlier (Oligocene–Miocene)
Colorado–San Juan paleoriver system that flowed along the Crooked Ridge
pathway and carved across the Kaibab uplift.

View article:

https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02319.1/595013/Detrital-sanidine-40Ar-39Ar-dating-confirms-lt-2

Reconstructing drainage pathways in the North Atlantic during the
Triassic utilizing heavy minerals, mineral chemistry, and detrital
zircon geochronology

Steven D. Andrews; Andrew Morton; Audrey Decou; Dirk Frei

Abstract:
In this study, single-grain mineral geochemistry, detrital zircon
geochronology, and conventional heavy-mineral analysis are used to
elucidate sediment transport pathways that existed in the North Atlantic
region during the Triassic. The presence of lateral and axial drainage
systems is identified and their source regions are constrained. Axial
systems are suggested to have likely delivered sediment sourced in East
Greenland (Milne Land–Renland) as far south as the south Viking Graben
(>800 km). Furthermore, the data highlight the existence of lateral
systems issuing from Western Norway and the Shetland Platform as well as a
major east-west–aligned drainage divide positioned adjacent to the Milne
Land–Renland region. This divide separated the catchments that flowed
north to the Boreal Ocean from those that flowed south into a series of
endoreic basins and, ultimately, the Tethys Sea. A further potential
drainage divide is identified to the west of Shetland. The data presented
and the conclusions reached have major implications for reservoir
prediction, as well as correlation, throughout the region. Furthermore,
understanding the drainage networks that existed during the Triassic can
help constrain paleogeographic reconstructions and provides an important
framework for the construction of facies models in the region.

View article:

https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02277.1/595014/Reconstructing-drainage-pathways-in-the-North

Subducting oceanic basement roughness impacts on upper-plate tectonic
structure and a backstop splay fault zone activated in the southern
Kodiak aftershock region of the Mw 9.2, 1964 megathrust rupture, Alaska

Anne Krabbenhoeft; Roland von Huene; John J. Miller; Dirk Klaeschen

Abstract:
In 1964, the Alaska margin ruptured in a giant Mw 9.2 megathrust
earthquake, the second largest during worldwide instrumental recording. The
coseismic slip and aftershock region offshore Kodiak Island was surveyed in
1977–1981 to understand the region’s tectonics. We re-processed
multichannel seismic (MCS) field data using current standard Kirchhoff
depth migration and/or MCS traveltime tomography. Additional surveys in
1994 added P-wave velocity structure from wide-angle seismic lines and
multibeam bathymetry. Published regional gravity, backscatter, and
earthquake compilations also became available at this time. Beneath the
trench, rough oceanic crust is covered by ~3–5-km-thick sediment. Sediment
on the subducting plate modulates the plate interface relief. The imbricate
thrust faults of the accreted prism have a complex P-wave velocity
structure. Landward, an accelerated increase in P-wave velocities is marked
by a backstop splay fault zone (BSFZ) that marks a transition from the
prism to the higher rigidity rock beneath the middle and upper slope.
Structures associated with this feature may indicate fluid flow. Farther
upslope, another fault extends >100 km along strike across the middle
slope. Erosion from subducting seamounts leaves embayments in the frontal
prism. Plate interface roughness varies along the subduction zone. Beneath
the lower and middle slope, 2.5D plate interface images show modest relief,
whereas the oceanic basement image is rougher. The 1964 earthquake slip
maximum coincides with the leading and/or landward flank of a subducting
seamount and the BSFZ. The BSFZ is a potentially active structure and
should be considered in tsunami hazard assessments.

View article:

https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02275.1/595015/Subducting-oceanic-basement-roughness-impacts-on

Credit: 
Geological Society of America

Gender assumptions harm progress on climate adaptation and resilience

image: Pursuing gender equality in climate change policy and practice is critical.

Image: 
Jacqui Lau.

Scientists say outdated assumptions around gender continue to hinder effective and fair policymaking and action for climate mitigation and adaptation.

Lead author of a new study, Dr Jacqueline Lau from the ARC Centre of Excellence for Coral Reef Studies at James Cook University (Coral CoE at JCU) and WorldFish, said gender--alongside other identities like race, class and age--has a powerful influence on people's experience of, and resilience to, climate change.

She said the four most common and interlinked assumptions found are: women are innately caring and connected to the environment; women are a homogenous and vulnerable group; gender equality is a women's issue; and gender equality is a numbers game.

"Although there is a global mandate to work towards gender equality in climate change mitigation and adaptation, efforts are hindered by a set of assumptions about gender, long critiqued in development studies," Dr Lau said.

The study draws on post-2014 gender and climate change literature to give an overview of how these gender assumptions manifest across recent work in adaptation, mitigation and broader climate change policy, practice and research.

The review of the literature takes a closer look at how these assumptions narrowly diagnose the causes of gender inequality.

"As a result, we see too many strategies that have unintended--and even counterproductive--consequences," said Dr Pip Cohen, from WorldFish.

"For instance, strategies that target women only may overburden them, cause a backlash, or obscure the vulnerabilities of other groups."

The study offers lessons for a more informed pursuit of gender equality in climate change research, policy and practice.

The authors said progressing gender equality means breaking down stereotypes and prejudices about gender--creating environments to enable all people to exercise their agency to cope, change and adapt.

Dr Lau said she was surprised to find so many examples of gender assumptions in climate change practice. She explained that a first step in disrupting these assumptions is to lay them bare and explain why development research has found them to be problematic.

"The social and cultural expectations about what it is to be a woman or a man in any given society will shape people's wellbeing," Dr Lau said.

She said alongside efforts to dismantle broader barriers to gender equality, better and more coordinated efforts are needed from practitioners and researchers to disrupt and counteract unhelpful assumptions.

"Pursuing gender equality in climate change policy and practice is critical, and decades of experience in development offer lessons for how to do it well," Dr Lau said.

"Ultimately, we want to see equitable opportunities for all people to realise their full potential. Where no one is left behind."

Credit: 
ARC Centre of Excellence for Coral Reef Studies

Key steps discovered in production of critical immune cell

image: Dendritic cell

Image: 
Courtesy of WEHI

WEHI researchers have uncovered a process cells use to fight off infection and cancer that could pave the way for precision cancer immunotherapy treatment.

Through gaining a better understanding of how this process works, researchers hope to be able to determine a way of tailoring immunotherapy to better fight cancer.

Led by Dr Dawn Lin and Dr Shalin Naik and published in Nature Cell Biology, the research provides new insight into the way cells adapt to fight infection.

This research lays the foundation for future studies into the body's response to environmental stressors, such as injury, infection or cancer, at a single cell level.

At a glance

WEHI researchers have studied dendritic cells, a crucial component of the immune system, to gain a deeper understanding of how the body produces these cells to fight cancer and infection.

The study found how the Flt3L hormone increases dendritic cell numbers.

Researchers will now apply this knowledge to improving immunotherapy techniques to create more personalised treatments.

Flt3L hormone plays vital role in fighting off infection

Dendritic cells are immune cells that activate 'killer' T cells, which are vital for clearing viral infections, such as COVID-19, but also for triggering a response to cancers such as melanoma and bowel cancer.

The Flt3L hormone can increase dendritic cell numbers, helping the immune system to fight off cancer and infection.

Dr Naik and his team studied developing immune cells at a single cell level to gain a deeper understanding of how the body uses these cells to trigger immune responses.

"There is one type of dendritic cell that the body uses to fight some infections and cancer. The Flt3L hormone increases numbers of this particular dendritic cell," he said.

"We know quite well how the dendritic cell fights the cancer, but we don't know how the Flt3L hormone increases the numbers of those dendritic cells."

Single-cell barcoding provides vital clues to how dendritic cells function

Researchers used a single-cell 'barcoding' technique to uncover what happened when dendritic cells multiplied.

"By using cellular barcoding - where we insert short synthetic DNA sequences, we call barcodes inside cells - we were able to determine which cells produced dendritic cells in pre-clinical models," Dr Naik said.

"As a result of this research, we now better understand the actions of the Flt3L hormone that is currently used in cancer immunotherapy trials, and how it naturally helps the body fight cancer and infection. This is a first step to design better precision immunotherapy treatments for cancer."

Using single cell technology to improve immunotherapy treatment

This research answers a 50-year-old question about what causes a stem cell to react to immense stress, such as infection or inflammation.

"We have known that the Flt3L hormone increases the number of dendritic cells for decades but now there is a focus on applying this knowledge to cancer immunotherapy and potentially to infection immunotherapy as well," Dr Naik said.

"The next stage in our research is to create 'dendritic cell factories' using our new knowledge, to produce millions to billions of these infection fighting cells and then use those in immunotherapy treatments."

"These findings are a vital first step to improving immunotherapy treatments for patients, to help them better fight cancer and infection."

Credit: 
Walter and Eliza Hall Institute

Indoor air quality study shows aircraft in flight may have lowest particulate levels

image: Georgia Tech researcher Jean C. Rivera-Rios obtains air samples from an office on the campus of the Georgia Institute of Technology.

Image: 
John Toon, Georgia Tech

If you're looking for an indoor space with a low level of particulate air pollution, a commercial airliner flying at cruising altitude may be your best option. A newly reported study of air quality in indoor spaces such as stores, restaurants, offices, public transportation -- and commercial jets -- shows aircraft cabins with the lowest levels of tiny aerosol particles.

Conducted in July 2020, the study included monitoring both the number of particles and their total mass across a broad range of indoor locations, including 19 commercial flights in which measurements took place throughout departure and arrival terminals, the boarding process, taxiing, climbing, cruising, descent, and deplaning. The monitoring could not identify the types of the particles and therefore does not provide a direct measure of coronavirus exposure risk.

"We wanted to highlight how important it is to have a high ventilation rate and clean air supply to lower the concentration of particles in indoor spaces," said Nga Lee (Sally) Ng, associate professor and Tanner Faculty Fellow in the School of Chemical and Biomolecular Engineering and the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. "The in-flight cabin had the lowest particle mass and particle number concentration."

The study, believed to be the first to measure both size-resolved particle mass and number in commercial flights from terminal to terminal as well as across a broad range of indoor spaces, has been accepted for publication in the journal Indoor Air and posted online at the journal's website. The research was supported by Delta Air Lines.

As scientists learn more about transmission of the coronavirus, the focus has turned to aerosol particles as an important source of viral spread indoors. Infected people can spread the virus as they breathe, talk, or cough, creating particles ranging in size from less than a micron -- one millionth of a meter -- to 1,000 microns. The larger particles quickly fall out of the air, but the smaller ones remain suspended.

"Especially in poorly ventilated spaces, these particles can be suspended in the air for a long period of time, and can travel to every corner of a room," Ng said. "If they are viral particles, they can infect people who may be at a considerable distance from a person emitting the particles."

To better understand the circulation of airborne particles, Delta approached Ng to conduct a study of multiple indoor environments, with a strong focus on air travel conditions. Using handheld instruments able to measure the total number of particles and their mass, Georgia Tech researchers examined air quality in a series of Atlanta area restaurants, stores, offices, homes, and vehicles -- including buses, trains, and private automobiles.

They trained Delta staff to conduct the same type of measurements in terminals, boarding areas, and a variety of aircraft through all phases of flight. The Delta staff recorded their locations as they moved through the terminals, and the instruments produced measurements consistent with the restaurants and stores they passed on their way to and from boarding and departure gates.

"The measurements started as soon as they stepped into the departure terminal," Ng said. "We were thinking about the whole trip, what a person would encounter from terminal to terminal."

In flight, aircraft air is exchanged between 10 and 30 times per hour. Some aircraft bring in exclusively outside air, which at cruising altitude is largely free of pollutant particles found in air near the ground. Other aircraft mix outdoor air with recirculated air that goes through HEPA filters, which remove more than 99% of particles.
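As a rough illustration of why those exchange rates matter (this sketch is not part of the study, and the `residual_fraction` helper is a hypothetical name), a simple well-mixed box model shows how quickly ventilation alone dilutes cabin particles when only clean outside air is supplied:

```python
import math

def residual_fraction(ach: float, minutes: float) -> float:
    """Fraction of the initial particle concentration remaining in a
    well-mixed space after `minutes`, given `ach` air changes per hour,
    assuming clean supply air and no indoor particle sources."""
    return math.exp(-ach * minutes / 60.0)

# At the low end of the reported range (10 air changes per hour),
# roughly 19% of the particles remain after 10 minutes.
low_end = residual_fraction(10, 10)

# At the high end (30 air changes per hour), under 1% remains
# after the same 10 minutes.
high_end = residual_fraction(30, 10)

print(round(low_end, 3), round(high_end, 4))
```

By comparison, a typical office ventilated at one air change per hour would retain about 85% of its particles over the same interval, which is consistent with the article's point that high ventilation rates are central to lowering indoor particle concentrations.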

In all, the researchers evaluated measurements from 19 commercial flights with passenger loads of approximately 50%. The flights included a mix of short- and medium-length flights, and aircraft ranging from the CRJ-200 and A220 to the 757, A321, and 737.

Among all the spaces measured, restaurants had the highest particle levels because of the cooking done there. Stores were next, followed by vehicles, homes, and offices. The average sub-micron particle number concentration measured in restaurants, for instance, was 29,400 particles per cubic centimeter, and in offices it was 2,473 per cubic centimeter.

"We have quite a comprehensive data set to look at the size distribution of particles across these different spaces," Ng said. "We can now compare indoor air quality in a variety of different spaces."

Because of the portable instruments used, the researchers were unable to determine the source of the particles, which could have included both biological and non-biological sources. "Further studies can include direct measurements of viral loads and tracing particle movements in indoor spaces," she added.

Jonathan Litzenberger, Delta's managing director of Global Cleanliness Strategy, said the research helps advance the company's goals of protecting its customers and employees.

"Keeping the air clean and safe during flight is one of the most foundational layers of protection Delta aims to provide to our customers and employees," he said. "We are always working to better understand the travel environment and confirm that the measures we are implementing are working."

Overall, the study highlights the importance of improving indoor air quality as a means of reducing coronavirus transmission.

"Regardless of whether you are in an office or an aircraft, having a higher ventilation rate and good particle filtration are the keys to reducing the total particle concentration," said Ng. "That should also reduce the concentration of any viral particles that may be present."

Credit: 
Georgia Institute of Technology

The expanding possibilities of bio-based polymers

image: ICIQ ACS kleij ginger root

Image: 
ICIQ/Francesco Della Monica

Finding innovative and sustainable solutions to our material needs is one of the core objectives of green chemistry. The myriad plastics that envelop our daily life - from mattresses to food and cars - are mostly made from oil-based monomers which are the building blocks of polymers. Therefore, finding bio-based monomers for polymer synthesis is attractive to achieve more sustainable solutions in materials development.

In a paper published in ACS Sustainable Chemistry & Engineering, researchers from the Kleij group present a new route to prepare biobased polyesters with tuneable properties. The researchers build upon the multifunctional structure of the terpene β-elemene: three double bonds with distinct reactivity that can be selectively transformed, allowing the functionalities in the polymer backbone to be tweaked. "This multi-functional terpene scaffold is rather unique and allows to fine-tune structural diversity and prospectively to modulate polymer and material properties," explains Arjan Kleij, ICIQ group leader and ICREA professor.

In collaboration with the company Isobionics, the researchers utilised β-elemene obtained through an innovative sugar-fermentation route. This process has proved to be a promising start for the use of β-elemene as a raw material for polymerisation. "Isobionics' sugar-fermentation route completely changes the scale of β-elemene availability, which now can be used in polymer production," explains Francesco Della Monica, postdoctoral researcher in the Kleij group working on the European SUPREME project (a Marie Skłodowska-Curie Actions Individual Fellowship) and first author of the paper.

Via a ring-opening copolymerisation reaction (ROCOP), the researchers combined β-elemene oxides and phthalic anhydride (a common monomer used in the preparation of polyesters) to create the biobased linear polymer poly(BEM-alt-PA) and its related structure, crosslinked-poly(BED-alt-PA). These transformations were achieved with catalytic systems (iron and aluminium aminotriphenolate complexes combined with bis(triphenylphosphine)iminium chloride) developed previously by the group using non-critical, abundant elements for catalytic polymerisation.

Once the polyester is prepared, two double bonds from the original terpene building block remain; these can be easily and selectively addressed and functionalised, allowing the final polyester to be tailored. "These post-modification reactions on a biobased polymer are quite rare. Most of the biobased monomers that are available don't present functionalities," remarks Della Monica.

The paper is a starting point for further development of β-elemene-based polymers whose final properties can be tailored to the intended use through easy post-polymerisation modifications. The paper does not address the biodegradation of the material, although for Della Monica, "depending on the final use, the ideal thing may not be biodegradation but to create a recyclable polymer: i.e., take a starting material, create the polymer, use it, recover it, and then degrade it in a controlled fashion and reuse that material. Now that we have the idea of a circular economy within grasp, we need circular processes," concludes the scientist.

Credit: 
Institute of Chemical Research of Catalonia (ICIQ)

Dethroning electrocatalysts for hydrogen production with inexpensive alternative material

image: Electrochemical water splitting demands highly active, easily produced, and cost-effective electrocatalysts for the oxygen evolution reaction (OER). An iron (Fe)/calcium (Ca)-based bimetallic oxide, CaFe2O4, exhibits outstanding OER activity in alkaline media. CaFe2O4 is expected to be a promising OER electrocatalyst for water splitting.

Image: 
Tokyo Tech

Today, we can say without a shadow of doubt that an alternative to fossil fuels is needed. Fossil fuels are not only non-renewable sources of energy but also among the leading causes of global warming and air pollution. Thus, many scientists worldwide have their hopes placed on what they regard as the fuel of tomorrow: hydrogen (H2). Although H2 is a clean fuel with incredibly high energy density, efficiently generating large amounts of it remains a difficult technical challenge.

Water splitting--the breaking of water molecules--is among the most explored methods to produce H2. While there are many ways to go about it, the best-performing water splitting techniques involve electrocatalysts made from expensive metals, such as platinum, ruthenium, and iridium. The problem is that known electrocatalysts made from abundant metals are rather ineffective at the oxygen evolution reaction (OER), the most challenging aspect of the water-splitting process.

In a recent study published in ACS Applied Energy Materials, a team of scientists at Tokyo Institute of Technology, Japan, found a remarkable electrocatalyst candidate for cost-effective water splitting: calcium iron oxide (CaFe2O4). Whereas iron (Fe) oxides are mediocre at the OER on their own, previous studies had noted that combining them with other metals could boost their performance to actually useful levels. However, as Assistant Professor and lead author Dr Yuuki Sugawara comments, no one had focused on CaFe2O4 as a potential OER electrocatalyst. "We wanted to unveil the potential of CaFe2O4 and elucidate, through comparisons with other iron-based bimetallic oxides, crucial factors that promote its OER activity," he explains.

To this end, the team tested six kinds of iron-based oxides, including CaFe2O4. They soon found that the OER performance of CaFe2O4 was vastly greater than that of the other bimetallic electrocatalysts and even higher than that of iridium oxide, a widely accepted benchmark. Additionally, they tested the durability of this promising material and found that it was remarkably stable; no significant structural or compositional changes were seen after measurement cycles, and the performance of the CaFe2O4 electrode in the electrochemical cell remained high.

Eager to understand the reason behind the exceptional capabilities of this unexplored electrocatalyst, the scientists carried out calculations using density functional theory and discovered an unconventional catalytic mechanism. It appears that CaFe2O4 offers an energetically favorable pathway for the formation of oxygen bonds, which is a limiting step in the OER. Although more theoretical calculations and experiments will be needed to be sure, the results indicate that the close distance between multiple iron sites plays a key role.

The newly discovered OER electrocatalyst could certainly be a game changer, as Dr Sugawara remarks, "CaFe2O4 has many advantages, from its easy and cost-effective synthesis to its environmental friendliness. We expect it will be a promising OER electrocatalyst for water splitting and that it will open up a new avenue for the development of energy conversion devices." In addition, the new OER boosting mechanism found in CaFe2O4 could lead to the engineering of other useful catalysts. Let us hope these findings help pave the way to the much-needed hydrogen society of tomorrow!

Credit: 
Tokyo Institute of Technology

Ultrasonic cleaning of salad could reduce instances of food poisoning

A new study has shown that gentle streams of water carrying sound and microscopic air bubbles can clean bacteria from salad leaves more effectively than current washing methods used by suppliers and consumers. As well as reducing food poisoning, the findings could reduce food waste and have implications for the growing threat of anti-microbial resistance.

Salad and leafy green vegetables may be contaminated with harmful bacteria during growing, harvesting, preparation and retail, leading to outbreaks of food poisoning that may be fatal in vulnerable groups.

Because there is no cooking process to reduce the microbial load in fresh salads, washing by both the supplier and the consumer is vital.

Washing with soap, detergent, bleach or other disinfectants is not recommended, and the crevices in the leaf surface mean that washing with plain water may leave an infectious dose on the leaf. Even if chemicals are used, they may not penetrate the crevices.

In this new study, published in the journal Ultrasound in Medicine and Biology, scientists used acoustic water streams to clean spinach leaves directly sourced from the field crop, then compared the results with leaves rinsed in plain water at the same velocity.

Professor Timothy Leighton of the University of Southampton, who invented the technology and led this research, explains: "Our streams of water carry microscopic bubbles and acoustic waves down to the leaf. There the sound field sets up echoes at the surface of the leaves, and within the leaf crevices, that attract the bubbles towards the leaf and into the crevices. The sound field also causes the walls of the bubbles to ripple very quickly, turning each bubble into a microscopic 'scrubbing' machine. The rippling bubble wall causes strong currents to move in the water around the bubble, and sweep the microbes off the leaf. The bacteria, biofilms, and the bubbles themselves, are then rinsed off the leaf, leaving it clean and free of residues."

The results showed that the microbial load on samples cleaned with the acoustic streams for two minutes was significantly lower six days after cleaning than on those treated without the added sound and bubbles. The acoustic cleaning also caused no further damage to the leaves and demonstrated the potential to extend food shelf life, which has important economic and sustainability implications.

Improving how food providers clean fresh produce could have a major role to play in combating the threat of anti-microbial resistance. In 2018 and 2019, there were fatal outbreaks of different strains of E. coli on romaine lettuce in the USA and Canada and samples from humans infected showed strains that are resistant to antibiotics.

University of Southampton PhD student Weng Yee (Beverly) Chong, who was part of the research team, added: "I am very grateful to Vitacress and EPSRC for funding my PhD. I came from an engineering background, and took Professor Leighton's classes, but he told me that I could be a trans-disciplinary PhD student, and become a microbiologist whilst increasing my engineering skills. I am also very grateful to Sloan Water Technology Ltd.: They opened up their laboratories for use by students like me, so that I can keep working on my experiments. It is an exciting environment to work in because they are doing so much inventive work to combat the pandemic and infections as a whole."

Earlier in her PhD, Beverly studied how the technology could reduce the infection risk to horses and other livestock through hay cleaning.

The work was sponsored by Vitacress, whose Group Technical Director Helen Brierley said: "Ensuring food safety for our products is an essential requirement. At Vitacress, we wash our produce in natural spring water, and this type of ground-breaking new technology helps to enhance our process whilst ensuring our commitment to protect the environment is maintained. We are always interested in new developments and are excited to see the results of this research".

Credit: 
University of Southampton

A fluid solution to dendrite growth in lithium metal batteries

image: Lithium metal batteries are prone to growth of metal dendrites that can cause batteries to short or explode. Engineers at UC Davis show that a crossflow of ions near the cathode can prevent this problem. In this figure, increasing the flow rate over the electrode reduced the growth of dendrites on the surface.

Image: 
Jiandi Wan, UC Davis

A new paper from associate professor Jiandi Wan's group in the UC Davis Department of Chemical Engineering, published in Science Advances, proposes a potential solution to dendrite growth in rechargeable lithium metal batteries. In the paper, Wan's team shows that flowing ions near the cathode could improve the safety and extend the lifespans of these next-generation rechargeable batteries.

Lithium metal batteries use lithium metal as the anode. These batteries have a high charge density and potentially double the energy of conventional lithium ion batteries, but safety is a big concern. When they charge, some ions are reduced to lithium metal at the cathode surface and form irregular, tree-like microstructures known as dendrites, which can eventually cause a short circuit or even an explosion.

The theory is that dendrite growth is caused by competition between the mass transfer and the reduction rate of lithium ions near the cathode surface. When the reduction rate of ions is much faster than the mass transfer, it depletes ions near the cathode, creating a region known as the space-charge layer. The instability of this layer is thought to cause dendrite growth, so reducing or eliminating it might reduce dendrite growth and therefore extend the life of a battery.
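The competition between mass transfer and reduction rate described above can be illustrated with a toy one-dimensional diffusion model. This is purely an illustrative sketch, not the authors' model: all parameter values are assumed, order-of-magnitude choices. When the reduction flux demanded at the electrode exceeds what diffusion can supply, the surface concentration collapses toward zero, leaving the ion-depleted region near the electrode.

```python
import numpy as np

# Toy 1D model: Li+ diffuses from a bulk reservoir (x = L) toward an
# electrode (x = 0) where a fixed reduction flux consumes it.
D = 1e-10              # diffusivity, m^2/s (typical order for Li+ in electrolyte)
L = 1e-4               # distance to bulk reservoir, m
N = 100                # grid cells
dx = L / N
dt = 0.2 * dx**2 / D   # explicit time step, within the stability limit

def step(c, flux):
    """One explicit diffusion step with a reduction flux at the electrode cell."""
    new = c.copy()
    new[1:-1] = c[1:-1] + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    # Surface cell: diffusive inflow from the neighbor minus the consumed flux
    new[0] = c[0] + D * dt / dx**2 * (c[1] - c[0]) - flux * dt / dx
    new[-1] = 1.0      # bulk reservoir holds normalized concentration at 1
    return np.clip(new, 0.0, None)

c = np.ones(N)         # start with uniform normalized concentration
for _ in range(20000):
    # Demanded flux exceeds the maximum diffusion-limited supply D/L,
    # so ions near the electrode are depleted
    c = step(c, flux=5e-6)

print(c[0])            # surface concentration has collapsed toward zero
```

The same sketch with a smaller flux (below D/L in these units) reaches a steady gradient with no depletion, which mirrors why slowing the effective reduction rate, or replenishing ions by flow as in the paper, suppresses the space-charge layer.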

Dendrite growth reduced 99 percent

Wan's idea was to flow ions through the cathode in a microfluidic channel to restore the charge balance and offset this gap. In the paper, the team outlined their proof-of-concept tests, finding that this flow of ions could reduce dendrite growth by up to 99 percent.

For Wan, the study is exciting because it shows the effectiveness of applying microfluidics to battery-related problems and paves the way for future research in this area.

"With this fundamental study and microfluidic approaches, we were able to quantitatively understand the effect of flow on dendrite growth," he said. "Not many groups have studied this yet."

Though it is likely not possible to incorporate microfluidics directly into real batteries, Wan's group is looking at alternative ways to apply the fundamental principles from this study, introducing local flows near the cathode surface to replenish cations and eliminate the space-charge layer.

"We are quite excited to explore the new applications of our study," he said. "We are already working on design of the cathode surface to introduce convective flows."

Credit: 
University of California - Davis

Yale team finds dozens of genes that block regeneration of neurons

When central nervous system cells in the brain and spine are damaged by disease or injury, they fail to regenerate, limiting the body's ability to recover. In contrast, peripheral nerve cells that serve most other areas of the body are more able to regenerate. Scientists for decades have searched for molecular clues as to why axons -- the threadlike projections which allow communication between central nervous system cells -- cannot repair themselves after stroke, spinal cord damage, or traumatic brain injuries.

In a massive screen of 400 mouse genes, Yale School of Medicine researchers have identified 40 genes actively involved in suppression of axon regeneration in central nervous system cells. By editing out one of those genes, they were able to restore axons in ocular nerves of mice damaged by glaucoma.

The findings are reported March 2 in the journal Cell Reports.

"This opens a new chapter in regeneration research," said Stephen Strittmatter, the Vincent Coates Professor of Neurology and professor of neuroscience and senior author of the study.

Over the past several decades, Strittmatter and other scientists have found a handful of genes involved in suppressing regeneration of central nervous system cells. But the advent of RNAs to silence gene expression and new gene editing technologies capable of removing single genes and gauging their functional impact has allowed researchers to greatly expand their search for other culprits.

Among the 400 candidate genes the Yale team had previously identified in cultures of cortical neurons, they were able to show that one in 10 had a direct in vivo impact on axon regeneration in central nervous system cells in mice. One of the 40 genes edited out encodes an immune system regulator known as interleukin-22. Eliminating this immune mediator altered the expression of many neuronal regeneration genes and greatly increased axon regeneration in mouse models of glaucoma, they found.

Future research will explore how modifying or blocking those 40 genes might affect the repair of neurons damaged by stroke and traumatic brain and spinal cord injuries, Strittmatter said.

Credit: 
Yale University

Rating tornado warnings charts a path to improve forecasts

image: A funnel cloud from a tornado in Kansas on May 24, 2016, inside the United States' so-called "Tornado Alley."

Image: 
Lane Pearman/Flickr

The United States experiences more tornadoes than any other country, with a season that peaks in spring or summer depending on the region. Tornadoes are often deadly, especially in places where buildings can't withstand high winds.

Accurate advanced warnings can save lives. A study from the University of Washington and the National Oceanic and Atmospheric Administration describes a new way to rate and possibly improve tornado warnings. It finds that nighttime twisters, summer tornadoes and smaller events remain the biggest challenges for the forecasting community.

"This new method lets us measure how forecast skill is improving, decreasing or staying the same in different situations," said Alex Anderson-Frey, a UW assistant professor of atmospheric sciences. "The tornado forecasting community needs to know what we're doing best at, and where we can focus training and research in the future."

She is lead author of the paper published online in December in the Bulletin of the American Meteorological Society.

Though the southern and central U.S. see the most tornadoes, every state can experience twisters. Scientific understanding of tornadoes is biased toward populated places, Anderson-Frey said, where people are more likely to observe and report the events.

"As population density increases in different areas, including outside the U.S., I think we're getting more of an idea of the range of environments in which tornadoes can actually form," Anderson-Frey said.

The paper develops a new method to rate the skill of a tornado warning based on the difficulty of the environment. It then evaluates thousands of tornadoes and associated warnings over the continental United States between 2003 and 2017.

The NOAA-funded study finds that nighttime tornadoes have a lower probability of detection and a higher false-alarm rate than the environmental conditions would suggest. Summertime tornadoes, occurring in June, July or August, also are more likely to evade warning.
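The detection and false-alarm statistics cited above come from standard forecast-verification metrics computed over a 2x2 contingency table of warnings versus observed tornadoes. A minimal sketch of those definitions (the function name and example counts are illustrative, not values from the study):

```python
def warning_skill(hits, misses, false_alarms):
    """Standard warning-verification metrics from a 2x2 contingency table.

    hits: tornadoes that occurred inside a valid warning
    misses: tornadoes that occurred with no warning
    false_alarms: warnings issued with no tornado
    """
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false-alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Hypothetical season: 60 warned tornadoes, 40 missed, 150 empty warnings
pod, far, csi = warning_skill(60, 40, 150)
print(f"POD={pod:.2f}, FAR={far:.2f}, CSI={csi:.2f}")  # POD=0.60, FAR=0.71, CSI=0.24
```

The study's contribution is to judge these scores relative to how difficult the pre-storm environment was, rather than in absolute terms, so that a warning office is not penalized for hard cases like weak nighttime tornadoes.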

The nighttime events may be harder to forecast because the absence of daytime warming makes the conditions less favorable, and because there were fewer eyewitness reports, Anderson-Frey said. Summer events may be more difficult because summer has more relatively weak tornadoes that occur in marginal environments, meaning on the edge of conditions that produce a tornado.

Larger events -- those rated 2 or above on the Enhanced Fujita scale -- actually generated better warnings than expected for the conditions. The results can inform how research, training or observational technology could improve future tornado warnings.

"The forecasting community is not just looking at the big, photogenic situations that will crop up in the Great Plains. We're looking at tornadoes in regions where vulnerability is high, including in regions that don't normally get tornadoes, where by definition the vulnerability is high," Anderson-Frey said.

"There's a real effort in the forecasting research community to bring in the human element -- being able to identify where we can do the most good."

Tornado forecasts and warnings are improving overall, but some types of risk are growing as well. Populations are expanding and moving into new, remote environments. Mobile or manufactured homes without anchored foundations are less able to withstand high winds.

"What really excites me about this work is the opportunity to look at performance by how difficult the warning situation was," said co-author Harold Brooks at NOAA's National Severe Storms Laboratory in Norman, Oklahoma. "We have the chance to measure improvement through the years taking into account that some situations and years may be harder or easier forecasts."

Anderson-Frey moved from Oklahoma to join the UW faculty in 2019. In related research, she is now analyzing data for past tornadoes to determine the environmental conditions that can produce events in unexpected places, like the 2018 tornado that struck Port Orchard in Washington state.

"I'm working on applying a machine-learning technique to studying what prototypical tornadic environments look like in different parts of the United States," she said.

Credit: 
University of Washington

Oregon researchers unveil the weaving fractal network of connecting neurons

image: Neurons and dendrites form a fractal network on top of a large electrode on the left side of the image. University of Oregon physicist Richard Taylor is leading a project to develop tiny fractal-inspired electrodes that will efficiently connect with neurons to address vision loss in people with retinal diseases. Taylor's student Willem Griffiths used false colors to highlight the activity on the electrode.

Image: 
Image courtesy of Richard Taylor

EUGENE, Ore. - March 2, 2021 - High-resolution imaging and 3D computer modeling show that the dendrites of neurons weave through space in a way that balances their need to connect to other neurons with the costs of doing so.

The discovery, reported in Scientific Reports on Jan. 27, emerged as researchers sought to understand the fractal nature of neurons as part of a University of Oregon project to design fractal-shaped electrodes to connect with retinal neurons and address vision loss due to retinal diseases.

"The challenge in our research has been understanding how the neurons we want to target in the retina will connect to our electrodes," said Richard Taylor, a professor and head of the UO physics department. "Essentially, we have to fool the neurons into thinking that the electrode is another neuron by making the two have the same fractal character."

Working with collaborators at the University of Auckland and the University of Canterbury in New Zealand, the team used confocal microscopy to image neurons in the hippocampal region of a rat's brain, revealing an intricate interplay of branches weaving through space at multiple size scales before connecting to other neurons. That, Taylor said, raised the question: why adopt such a complicated pattern?

With the help of UO postdoctoral researcher Saba Moslehi, doctoral students Julian H. Smith and Conor Rowland turned to 3D modeling to explore what happened when they manipulated the dendrites of more than 1,600 neurons into unnatural forms, straightening them or curling them up.

"By distorting their branches and looking at what happens, we were able to show that the fractal weaving of the natural branches is balancing the ability of neurons to connect with their neighbors to form natural electric circuits while balancing the construction and operating costs of the circuits," Rowland said.

Using a fractal analysis known as the box-counting technique, the researchers were able to assign fractal dimensions, or D values, that quantify the relative contributions of the coarse- and fine-scaled dendrites to a neuron's fractal pattern. These D values, Taylor said, will be important in optimizing his team's tiny electrodes for implanting at the back of eyes to stimulate retinal neurons.
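The box-counting technique mentioned above has a simple algorithmic core: cover the structure with boxes of shrinking size s, count how many boxes N(s) are occupied, and read the fractal dimension D off the slope of log N(s) versus log s. A minimal sketch for a point cloud (the function name is ours; the team's actual analysis of 3D dendrite models is more involved):

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting (fractal) dimension D of a point cloud.

    For each box size s, counts the distinct boxes occupied by the points,
    then fits log N(s) = -D log s + const and returns D.
    """
    points = np.asarray(points, dtype=float)
    counts = []
    for s in scales:
        # Assign each point to a box of side s; count distinct occupied boxes
        boxes = np.floor(points / s)
        counts.append(len({tuple(b) for b in boxes}))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope

# Sanity check: points filling a 2D square should give D close to 2,
# while a fractal curve would fall between 1 and 2.
rng = np.random.default_rng(0)
pts = rng.random((20000, 2))
print(box_counting_dimension(pts, scales=[0.05, 0.1, 0.2]))  # ~2.0
```

For dendritic arbors, D falls between the dimension of a line and that of the embedding space, quantifying how much the fine-scale branches fill in around the coarse ones, which is exactly the property the team wants to match in their electrode designs.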

"Our implants will have to accommodate the neurons' weaving branches through careful selection of their D values," said Taylor, a member of the UO's Materials Science Institute. "Unlike building a straight runway so a pilot can land efficiently, our electrodes will need to act like a weaving runway so that the neurons can connect without changing their behavior."

Nature's fractals benefit from how they grow at multiple scales, said Taylor, who has long turned to fractals as bioinspiration. While trees have the most-recognized form of fractal branching, this work, he said, highlights how neurons are different from trees.

"Whereas the fractal character of trees originates predominantly from the distribution of branch sizes, the neurons also use the way their branches weave through space to generate their fractal character," Taylor said.

Taylor, a Cottrell Scholar of the Research Council for Science Advancement, was granted a sweeping U.S. patent in 2015 for not only his development of artificial fractal-based implants related to vision but also to all such implants that link signaling activity with nerves for any purpose in animal and human biology.

Taylor and co-authors closed their paper by raising the possibility that the D values of neuronal networking may benefit research on numerous brain-related diseases. For Alzheimer's disease, Taylor said, D values could be a measure for understanding declines in connectivity between neurons.

"A lot of diseases result in losing connectivity, and neuron D values may be dropping as they move into a pathological state," he said.

Credit: 
University of Oregon

A quantum internet is closer to reality, thanks to this switch

image: Using a programmable wavelength-selective switch can help increase the number of users in a quantum network without increasing photon loss from the switching device, a new study shows.

Image: 
Purdue University/Navin Lingaraju

WEST LAFAYETTE, Ind. -- When quantum computers become more powerful and widespread, they will need a robust quantum internet to communicate.

Purdue University engineers have addressed an issue barring the development of quantum networks that are big enough to reliably support more than a handful of users.

The method, demonstrated in a paper published in Optica, could help lay the groundwork for when a large number of quantum computers, quantum sensors and other quantum technology are ready to go online and communicate with each other.

The team deployed a programmable switch to adjust how much data goes to each user by selecting and redirecting wavelengths of light carrying the different data channels, making it possible to increase the number of users without adding to photon loss as the network gets bigger.

If photons are lost, quantum information is lost - a problem that tends to happen the farther photons have to travel through fiber optic networks.

"We show a way to do wavelength routing with just one piece of equipment - a wavelength-selective switch - to, in principle, build a network of 12 to 20 users, maybe even more," said Andrew Weiner, Purdue's Scifres Family Distinguished Professor of Electrical and Computer Engineering. "Previous approaches have required physically interchanging dozens of fixed optical filters tuned to individual wavelengths, which made the ability to adjust connections between users not practically viable and photon loss more likely."

Instead of needing to add these filters each time that a new user joins the network, engineers could just program the wavelength-selective switch to direct data-carrying wavelengths over to each new user - reducing operational and maintenance costs as well as making a quantum internet more efficient.

The wavelength-selective switch also can be programmed to adjust bandwidth according to a user's needs, which has not been possible with fixed optical filters. Some users may be using applications that require more bandwidth than others, similarly to how watching shows through a web-based streaming service uses more bandwidth than sending an email.
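The routing idea can be pictured as a programmable table that sends energy-matched wavelength channel pairs from one entangled-photon source to different user pairs. The sketch below is our simplified illustration, not the Purdue implementation: channel indices, user names, and the function are all hypothetical, and real flex-grid allocation also assigns variable channel widths.

```python
# Toy model of wavelength-selective routing in an entanglement network.
# Channels +i and -i sit symmetrically around the source's center frequency;
# photons in such a matched pair are entangled, so routing the two channels
# to two different users links those users.

def allocate(user_pairs, num_channel_pairs):
    """Map each user pair to a distinct energy-matched channel pair.

    Returns a routing table: signed channel index -> user. Reprogramming
    the switch just means rebuilding this table, with no filters swapped.
    """
    if len(user_pairs) > num_channel_pairs:
        raise ValueError("not enough channel pairs for all user pairs")
    table = {}
    for i, (a, b) in enumerate(user_pairs, start=1):
        table[+i] = a   # signal channel +i routed to user a
        table[-i] = b   # matched idler channel -i routed to user b
    return table

routing = allocate([("Alice", "Bob"), ("Alice", "Charlie"), ("Bob", "Dana")], 8)
print(routing[+2], routing[-2])  # Alice Charlie
```

Giving a user pair more bandwidth would correspond to assigning it several channel pairs (or wider flex-grid slices) in the same table, which is the kind of reconfiguration a fixed-filter network cannot do without physically swapping hardware.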

For a quantum internet, forming connections between users and adjusting bandwidth means distributing entanglement, the ability of photons to maintain a fixed quantum mechanical relationship with one another no matter how far apart they may be, to the users in a network. Entanglement plays a key role in quantum computing and quantum information processing.

"When people talk about a quantum internet, it's this idea of generating entanglement remotely between two different stations, such as between quantum computers," said Navin Lingaraju, a Purdue Ph.D. student in electrical and computer engineering. "Our method changes the rate at which entangled photons are shared between different users. These entangled photons might be used as a resource to entangle quantum computers or quantum sensors at the two different stations."

Purdue researchers performed the study in collaboration with Joseph Lukens, a research scientist at Oak Ridge National Laboratory. The wavelength-selective switch that the team deployed is based on similar technology used for adjusting bandwidth for today's classical communication.

The switch also is capable of using a "flex grid," like classical lightwave communications now uses, to partition bandwidth to users at a variety of wavelengths and locations rather than being restricted to a series of fixed wavelengths, each of which would have a fixed bandwidth or information carrying capacity at fixed locations.

"For the first time, we are trying to take something sort of inspired by these classical communications concepts using comparable equipment to point out the potential advantages it has for quantum networks," Weiner said.

The team is working on building larger networks using the wavelength-selective switch. The work was funded by the U.S. Department of Energy, the National Science Foundation and Oak Ridge National Laboratory.

Credit: 
Purdue University