
MS experts call for increased focus on progressive MS rehabilitation research

image: Dr. DeLuca, an expert in cognitive MS research, is senior vice president of Research and Training at Kessler Foundation.

Image: 
Kessler Foundation

East Hanover, NJ. May 28, 2021. An international team of multiple sclerosis (MS) experts has identified four under-researched areas that are critical to advancing symptom management for progressive MS, recommending interdisciplinary collaboration among scientists, clinicians, industry leaders, and those with progressive MS. Their call to action was published in Multiple Sclerosis Journal on March 15, 2021, in the article "Prioritizing progressive MS rehabilitation research: A call from the International Progressive MS Alliance" (doi: 10.1177/1352458521999970). The Alliance was represented by authors from Canada, the United States, the UK, Australia, Denmark, Belgium, Germany, and Switzerland.

Recent advances in MS research have resulted in a variety of disease-modifying therapies that can significantly improve quality of life for people with certain phenotypes of the disease. For example, those with relapsing-remitting MS have access to more than 20 therapies. However, these phenotypes represent only part of the MS community.

A major barrier to the development of rehabilitation therapies for progressive MS is the fact that the vast majority of studies conducted to explore rehabilitative therapies involve people with relapsing-remitting MS, not progressive MS. In addition, trials are often designed based on strategies used for pharmaceutical trials, which are not necessarily conducive to clinical rehabilitation trials. Unfortunately, this lack of clinical data to inform therapies for progressive MS leaves this population with few options to manage potentially debilitating symptoms that can lead to challenges such as loss of a job, personal and family stress, and financial strain.

In this article, experts from research, medicine, and industry highlight four major symptoms affecting people with progressive MS that should be the focus of new research: fatigue, mobility and upper extremity impairment, pain, and cognitive impairment. They contend that rehabilitative therapies show great promise for managing these symptoms and for improving physical and cognitive function as well as quality of life, and that directing research efforts toward rehabilitation is critical to developing effective therapies.

"There is a strong need to study the effect of early preventive interventions and to evaluate management of existing symptoms," says co-author John DeLuca, PhD, Senior Vice President for Research and Training at Kessler Foundation. "Effective symptom management and rehabilitation remain far behind in progressive MS. We have little empirical rehabilitation data, and our understanding of mechanisms underlying symptoms and treatment responses is incomplete." Dr. DeLuca emphasizes, "We have ample evidence from research in other clinical areas that rehabilitation can improve quality of life and find support from diverse payers and stakeholders. Our aim is to bring attention to the pressing need to develop rehabilitation treatment interventions for the progressive MS community."

Credit: 
Kessler Foundation

Penn researchers discover drug that blocks multiple SARS-CoV-2 variants in mice

The drug diABZI -- which activates the body's innate immune response -- was highly effective in preventing severe COVID-19 in mice that were infected with SARS-CoV-2, according to scientists in the Perelman School of Medicine at the University of Pennsylvania. The findings, published this month in Science Immunology, suggest that diABZI could also treat other respiratory coronaviruses.

"Few drugs have been identified as game-changers in blocking SARS-CoV-2 infection. This paper is the first to show that activating an early immune response therapeutically with a single dose is a promising strategy for controlling the virus, including the South African variant B.1.351, which has led to worldwide concern," said senior author Sara Cherry, PhD, a professor of Pathology and Laboratory Medicine and scientific director of the High-Throughput Screening (HTS) Core at Penn Medicine. "The development of effective antivirals is urgently needed for controlling SARS-CoV-2 infection and disease, especially as dangerous variants of the virus continue to emerge."

The SARS-CoV-2 virus initially targets epithelial cells in the respiratory tract. As the first line of defense against infection, the respiratory tract's innate immune system recognizes viral pathogens by detecting their molecular patterns. Cherry and her research team first sought to better understand this effect by observing SARS-CoV-2-infected human lung cell lines under the microscope. They found that the virus is able to hide, delaying the immune system's early recognition and response. The researchers predicted that they might be able to identify drugs -- or small molecules with drug-like properties -- that could set off this immune response in the respiratory cells earlier and prevent severe SARS-CoV-2 infection.

To identify antiviral agonists that would block SARS-CoV-2 infection, the researchers performed high-throughput screening of 75 drugs that target sensing pathways in lung cells. They examined the drugs' effects on viral infection under microscopy and identified nine candidates -- including two cyclic dinucleotides (CDNs) -- that significantly suppressed infection by activating STING (the stimulator of interferon genes).

Since CDNs have low potency and make poor drugs, according to Cherry, she and her team decided to also test a newly developed small-molecule STING agonist called diABZI, which is not approved by the Food and Drug Administration but is currently being tested in clinical trials to treat some cancers. The researchers found that diABZI potently inhibits SARS-CoV-2 infection of diverse strains, including variant of concern B.1.351, by stimulating interferon signaling.

Finally, the researchers tested the effectiveness of diABZI in transgenic mice that had been infected with SARS-CoV-2. Because the drug needed to reach the lungs, diABZI was administered by nasal delivery. diABZI-treated mice showed much less weight loss than the control mice, had significantly reduced viral loads in their lungs and nostrils, and increased cytokine production -- all supporting the finding that diABZI stimulates interferon for protective immunity.

Cherry said that the study's findings offer promise that diABZI could be an effective treatment for SARS-CoV-2 that could prevent severe COVID-19 symptoms and the spread of infection. Additionally, since diABZI has been shown to inhibit human parainfluenza virus and rhinovirus replication in cultured cells, the STING agonist may be more broadly effective against other respiratory viruses.

"We are now testing this STING agonist against many other viruses," Cherry said. "It's really important to remember that SARS-CoV-2 is not going to be the last coronavirus that we will see and will need protection against."

Credit: 
University of Pennsylvania School of Medicine

Exoskeleton therapy improves mobility, cognition and brain connectivity in people with MS

image: A research participant in the MS pilot study does exercise training in the Ekso NR at Kessler Foundation.

Image: 
Kessler Foundation/Jody Banks

East Hanover, NJ. May 28, 2021. A team of multiple sclerosis (MS) experts at Kessler Foundation led the first pilot randomized controlled trial of the effects of robotic-exoskeleton assisted exercise rehabilitation (REAER) on mobility, cognition, and brain connectivity in people with substantial MS-related disability. Their results showed that REAER is likely an effective intervention and a promising therapy for improving the lives of those with MS.

The article, "A pilot randomized controlled trial of robotic exoskeleton-assisted exercise rehabilitation in multiple sclerosis," (doi: 10.1016/j.msard.2021.102936) was published on April 4, 2021, by Multiple Sclerosis and Related Disorders. It is available open access at https://www.msard-journal.com/article/S2211-0348(21)00203-0/fulltext.

The authors are Ghaith J. Androwis, PhD, Brian M. Sandroff, PhD, Peter Niewrzol, MA, Glenn R. Wylie, DPhil, Guang Yue, PhD, and John DeLuca, PhD, of Kessler Foundation, and Farris Fakhoury, DPT, of Kessler Institute for Rehabilitation.

It is common for people with MS to experience impairments in both mobility and cognition, and few therapies exist to manage the range of debilitating symptoms. This lack of treatment options is a major problem for people with MS, especially those with substantial MS-related neurological disability.

Previous research shows that exercise rehabilitation, such as walking, is an effective approach to symptom management, with some research suggesting that even a single exercise rehabilitation intervention can improve both mobility and cognition.

Yet evidence for the efficacy of exercise rehabilitation on mobility and cognitive outcomes in people with MS who have substantial disability is lacking. Adaptive exercise rehabilitation approaches such as body-weight-supported treadmill training and robot-assisted gait training have not demonstrated convincing results. Moreover, adaptive interventions lack key interactions between patients and therapists that may improve efficacy.

In this pilot study of 10 participants with significant MS-related neurological disability, researchers explored the use of robotic exoskeletons to manage symptoms. Rehabilitation exercise using robotic exoskeletons is a relatively new approach that enables participants to walk over-ground in a progressive regimen that involves close engagement with a therapist. The Foundation has dedicated an Ekso NR to MS studies to facilitate further research in this area.

As compared to conventional gait training, REAER allows participants to walk at volumes needed to realize functional adaptations--via vigorous neurophysiological demands--that lead to improved cognition and mobility. Effects on brain activity patterns were studied using the functional MRI capabilities of the Rocco Ortenzio Neuroimaging Center at Kessler Foundation.

Investigators compared participants' improvement after four weeks of REAER vs four weeks of conventional gait training, looking at functional mobility, walking endurance, cognitive processing speed, and brain connectivity.

The results were positive: Relative to conventional gait training, four weeks of REAER was associated with large improvements in functional mobility (ηp² = .38), cognitive processing speed (ηp² = .53), and brain connectivity outcomes, most significantly between the thalamus and ventromedial prefrontal cortex (ηp² = .72). "Four weeks is relatively short for an exercise training study," noted Dr. Sandroff, senior research scientist at Kessler Foundation and director of the Exercise Neurorehabilitation Research Laboratory. "Seeing improvements within this timeframe shows the potential for exercise to change how we treat MS. Exercise is a really powerful behavior that involves many brain regions and networks, which can improve over time and result in improved function."
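For readers unfamiliar with the effect-size metric used here, partial eta squared (ηp²) is the proportion of variance in an outcome attributable to the intervention after excluding other effects, and it relates to an ANOVA F statistic through the standard identity ηp² = (F · df_effect) / (F · df_effect + df_error). A minimal sketch of that conversion (the function name and example numbers are illustrative, not values from the study):

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Convert an ANOVA F statistic to partial eta squared.

    Uses the identity eta_p^2 = SS_effect / (SS_effect + SS_error)
    rewritten in terms of F and the degrees of freedom.
    """
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

# Hypothetical example: F(1, 9) = 9.0 corresponds to eta_p^2 = 0.5,
# a "large" effect by conventional benchmarks.
example = partial_eta_squared(9.0, 1, 9)
```

Values of .38-.72 like those reported above are therefore very large by the usual benchmarks (.14 is already considered large).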

"This is particularly exciting because therapy using robotic exoskeletons shows such promise for improving the lives of people with co-occurring mobility and cognitive disability, a cohort that likely has the greatest potential to benefit from this new technology," said Dr. Androwis, lead author and research scientist in the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation. "We're eager to design a larger trial to further study these effects. Based on our initial results, we're optimistic that this approach may be superior to the current standard of care."

Credit: 
Kessler Foundation

Researchers create new CRISPR tools to help contain mosquito disease transmission

video: Larvae of the vasa-Cas9 Culex quinquefasciatus mosquito line, which was generated as part of a new genetic toolkit designed to help stop mosquito disease transmission.

Image: 
Gantz Lab, UC San Diego

Since the onset of the CRISPR genetic editing revolution, scientists have been working to leverage the technology in the development of gene drives that target pathogen-spreading mosquitoes such as Anopheles and Aedes species, which spread malaria, dengue and other life-threatening diseases.

Much less genetic engineering has been devoted to Culex genus mosquitoes, which spread devastating afflictions stemming from West Nile virus--the leading cause of mosquito-borne disease in the continental United States--as well as other viruses such as the Japanese encephalitis virus (JEV) and the pathogen causing avian malaria, a threat to Hawaiian birds.

University of California San Diego scientists have now developed several genetic editing tools that help pave the way to an eventual gene drive designed to stop Culex mosquitoes from spreading disease. Gene drives are designed to spread modified genes, in this case those that disable the ability to transmit pathogens, throughout the targeted wild population.
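The super-Mendelian spread that makes gene drives powerful can be illustrated with a deterministic toy model: a homing drive converts the wild-type allele in heterozygotes, so carriers transmit the drive to more than half of their offspring and its frequency climbs far faster than normal inheritance allows. A minimal sketch, assuming a single-locus drive with homing efficiency h (all names and numbers are illustrative, not from the study):

```python
def next_drive_freq(p, h):
    """One generation of random mating with a homing gene drive.

    A drive/wild-type heterozygote transmits the drive allele with
    probability (1 + h) / 2, so the next-generation gamete pool
    carries the drive at frequency p^2 + p(1 - p)(1 + h).
    """
    q = 1.0 - p
    return p * p + p * q * (1.0 + h)

# Starting from a 5% release frequency with perfect homing (h = 1),
# the drive approaches fixation within about ten generations;
# with h = 0 inheritance is Mendelian and the frequency is static.
p = 0.05
for generation in range(10):
    p = next_drive_freq(p, 1.0)
```

This is why a drive that disables pathogen transmission can, in principle, reshape a whole wild population from a small release.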

As detailed in the journal Nature Communications, Xuechun Feng, Valentino Gantz and their colleagues at Harvard Medical School and National Emerging Infectious Diseases Laboratories developed a Cas9/guide-RNA expression "toolkit" designed for Culex mosquitoes. Because so little genetic engineering attention has been devoted to Culex mosquitoes, the researchers had to develop their toolkit from scratch, starting with a careful examination of the Culex genome.

"My coauthors and I believe that our work will be impactful for scientists working on the biology of the Culex disease vector since new genetic tools are deeply needed in this field," said Gantz, an assistant research scientist in the Division of Biological Sciences at UC San Diego. "We also believe the scientific community beyond the gene drive field will welcome these findings since they could be of broad interest."

While Culex mosquitoes are less problematic in the United States, they are much more of a health risk in Africa and Asia, where they transmit the worm causing filariasis, a disease that can lead to a chronic debilitating condition known as elephantiasis.

The researchers also demonstrated that their tools could work in other insects.

"These modified gRNAs can increase gene drive performance in the fruit fly and could potentially offer better alternatives for future gene drive and gene-editing products in other species," said Gantz.

Gantz and his colleagues have now tested their new tools to ensure proper genetic expression of the CRISPR components and are now poised to apply them to a gene drive in Culex mosquitoes. Such a gene drive construct could be used to halt pathogen transmission by Culex mosquitoes, or alternatively employed to suppress the mosquito population to prevent biting.

Credit: 
University of California - San Diego

Exoskeleton-assisted walking may improve bowel function in people with spinal cord injury

image: Two types of exoskeletons were used in this multi-site study, ReWalk and Ekso GT. This photo shows an Ekso GT in the Tim & Caroline Reynolds Center for Spinal Stimulation at Kessler Foundation.

Image: 
Kessler Foundation

East Hanover, NJ. May 28, 2021. A team of researchers has shown that physical intervention plans that included exoskeleton-assisted walking helped people with spinal cord injury evacuate more efficiently and improved the consistency of their stool. This finding was reported in Journal of Clinical Medicine on March 2, 2021, in the article "The Effect of Exoskeletal-Assisted Walking on Spinal Cord Injury Bowel Function: Results from a Randomized Trial and Comparison to Other Physical Interventions" (doi: 10.3390/jcm10050964).

The authors are Peter H. Gorman, MD, of the University of Maryland School of Medicine, Gail F. Forrest, PhD, of Kessler Foundation's Tim and Caroline Reynolds Center for Spinal Stimulation, Dr. William Scott, of VA Maryland Healthcare System, Pierre K. Asselin, MS, Stephen Kornfeld, MD, Eunkyoung Hong, PhD, and Ann M. Spungen, EdD, of the James J. Peters VA Medical Center.

Bowel dysfunction, a common experience after spinal cord injury, can lead to chronic constipation and incontinence, causing discomfort and frustration. In one survey, more than a third of men with spinal cord injury reported that bowel and bladder dysfunction had the most significant effect on their lives post-injury. Unfortunately, these issues are not easily managed.

Rehabilitation professionals have traditionally managed bowel dysfunction using approaches that target the gastrointestinal system or require manual intervention, but some newer research suggests that physical activity and upright posture may enhance bowel motility. However, few studies have explored the possibility that exoskeletal-assisted walking--in which a person with spinal cord injury wears a robotic suit, enabling them to stand and walk--may be an effective addition to existing intervention plans.

In this study, the research team investigated whether exoskeletal-assisted walking improved bowel function in people with chronic spinal cord injury. They performed a three-center, randomized, controlled, crossover clinical trial in which 50 participants completed 36 sessions of exoskeletal-assisted walking. The researchers evaluated bowel function as a secondary outcome in 49 participants. Bowel function was measured via a 10-question bowel function survey, the Bristol Stool Form Scale, and the Spinal Cord Injury Quality of Life Bowel Management Difficulties instrument.

Results showed that the exoskeletal-assisted walking program provided some improvement in bowel function when compared to a control group. "We saw a notable reduction in bowel evacuation time, with 24 percent of participants reporting an improved experience," said Dr. Forrest, co-author and associate director of the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation. "We also noted that participants' stools trended toward better consistency, supporting our hypothesis that this intervention may improve several measures of bowel function."

"Our results support the idea that walking, and not just standing, may have a beneficial effect on bowel function," said Dr. Gorman, co-author and chief of the Division of Rehabilitation Medicine at the University of Maryland Rehabilitation and Orthopaedic Institute. "Our goal is to improve the quality of life of those with chronic spinal cord injury, and these encouraging results will help inform future studies on the emerging field of mobility intervention."

Credit: 
Kessler Foundation

New GSA Bulletin articles published ahead of print in May

Boulder, Colo., USA: The Geological Society of America regularly publishes articles online ahead of print. For May, GSA Bulletin topics include multiple articles about the dynamics of China and Tibet; new insights into the Chicxulub impact structure; and the dynamic topography of the Cordilleran foreland basin. You can find these articles at https://bulletin.geoscienceworld.org/content/early/recent.

Tectonic and eustatic control of Mesaverde Group
(Campanian–Maastrichtian) architecture, Wyoming-Utah-Colorado region,
USA

Keith P. Minor; Ronald J. Steel; Cornel Olariu

Abstract:
We describe and analyze the depositional history and stratigraphic
architecture of the Campanian and Maastrichtian succession of the southern
greater Green River basin of Wyoming, USA, and surrounding areas to better
understand the interplay between tectonic and eustatic drivers that built
the stratigraphy. By integrating new measured sections with published
outcrop, well-log, and paleogeographic data, two new stratigraphic
correlation diagrams, 35 new paleogeographic reconstructions, and six new
tectonic diagrams were created for this part of the Western Interior
Seaway. From this work, two time-scales of organization are evident: (1)
100−300 k.y.-scale, mainly eustatically driven regressive-transgressive
shoreline oscillations that generated repeated sequences of
alluvial-coastal plain-shoreline deposits, passing basinward to subaqueous
deltas, then capped by transgressive estuarine/barrier lagoon deposits, and
(2) 3.0−4.0 m.y.-scale, tectonically driven groups of 10 to 15 of these
eustatically driven units stacked in an offset arrangement to form larger
clastic units, which are herein referred to as clastic wedges. Four
regional clastic wedges are recognized, based on the architectures of these
clastic packages. These are the: (1) Adaville, (2) Rock Springs, (3) Iles,
and (4) Williams Fork clastic wedges. Pre-Mesaverde deposition in the
Wyoming-Utah-Colorado (USA) region during the Middle Cretaceous was
characterized by thickening of the clastic wedge close to the thrust-front,
driven primarily by retroarc foreland basin (flexural) tectonics. However,
a basinward shift in deposition during the Santonian into the early
Campanian (Adaville clastic wedge) signaled a change in the dominant
stratigraphic drivers in the region. Shoreline advance accelerated in the
early to middle Campanian (Rock Springs clastic wedge), as the end of
activity in the thrust belt, growing importance of flat-slab subduction,
and steady eastward migration of the zone of dynamic subsidence led to loss
of the foredeep and forebulge, with the attendant formation of a
low-accommodation shelf environment. This “flat-shelf” environment promoted
large shoreline advances and retreats during sea-level rise and fall.
During the middle to late Campanian (Iles clastic wedge), deep erosion on
the crest of the Moxa Arch, thinning on the crests of the Rock Springs and
Rawlins uplifts, and subsequent Laramide-driven basin formation occurred as
the Laramide blocks began to partition the region. The next clastic package
(Williams Fork clastic wedge) pushed the shoreline over 400 km away from
the thrust belt during the late Campanian. This was followed by a very
large and persistent marine transgression across the region, with the
formation of a Laramide-driven deepwater turbidite basin with toe-of-slope
fans into the early Maastrichtian. The Mesaverde Group in the
Wyoming-Utah-Colorado region is thus characterized by: (1) a succession of
four tectonically driven clastic wedges, each composed of a dozen or so
eustatically driven packages that preserve large basinward and landward
shoreline shifts, (2) broad regional sand and silt dispersal on a
low-accommodation marine shelf setting, (3) a progressive, tectonically
driven, basinward shift of deposition with offset, basinward stacking of
successive clastic wedges, and (4) the gradual formation of various uplifts
and sub-basins, the timing and sizes of which were controlled by the
movement of deep-seated Laramide blocks. The Mesaverde Group in the
Wyoming-Utah-Colorado region provides an outstanding opportunity to study
the dynamic interaction among the tectonic control elements of a subducting
plate (crustal loading-flexure, dynamic subsidence/uplift, and regional
flat-slab basin partitioning), as well as the dynamic interaction of
tectonic and eustatic controls.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B36032.1/598755/Tectonic-and-eustatic-control-of-Mesaverde-Group

A new K-Ar illite dating application to constrain the timing of
subduction in West Sarawak, Borneo

Qi Zhao; Yi Yan; Satoshi Tonai; Naotaka Tomioka; Peter D. Clift ...

Abstract:
The timing of subduction is a fundamental tectonic problem for tectonic
models, yet there are few direct geological proxies for constraining it.
However, the matrix of a tectonic mélange formed in a subduction-accretion
setting archives the physical/chemical attributes at the time of
deformation during the subduction-accretion process. Thus, the deformation
age of the matrix offers the possibility to directly constrain the period
of the subduction-accretion process. Here we date the Lubok Antu tectonic
mélange and the overlying Lupar Formation in West Sarawak, Borneo by K-Ar
analysis of illite. The ages of authigenic illite cluster around 60 Ma and
36 Ma. The maximum temperatures calculated by vitrinite reflectance values
suggest that our dating results were not affected by external heating.
Thus, the ages of authigenic illite represent the deformation age of the
mélange matrix and the timing of the Rajang Unconformity, indicating that
the subduction in Sarawak could have continued until ca. 60 Ma and the
thermal and/or fluid flow events triggered by a major uplift of the Rajang
Group occurred at ca. 36 Ma. Furthermore, this study highlights the
potential of using the tectonic mélange to extract the timeframe of
subduction zone episodic evolution directly.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35895.1/598747/A-new-K-Ar-illite-dating-application-to-constrain
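For context on the method, K-Ar illite ages such as the ca. 60 Ma and 36 Ma clusters above come from the standard K-Ar age equation, t = (1/λ) ln(1 + (λ/λ_EC) · (⁴⁰Ar*/⁴⁰K)), where λ is the total decay constant of ⁴⁰K and λ_EC is the branch producing radiogenic ⁴⁰Ar. A minimal sketch using the conventional decay constants (illustrative only; not the authors' code):

```python
import math

# Conventional 40K decay constants (per year)
LAMBDA_EC = 0.581e-10     # electron-capture branch producing radiogenic 40Ar
LAMBDA_TOTAL = 5.543e-10  # total decay constant of 40K

def k_ar_age(ar40_star_over_k40):
    """K-Ar age in years from the measured radiogenic 40Ar*/40K molar ratio."""
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_star_over_k40
    )

# A 40Ar*/40K ratio near 0.00354 corresponds to an age of roughly 60 Ma,
# comparable to the older illite population reported in the study.
```

Because authigenic illite grows during deformation, dating it with this equation ties the measured ratio directly to the age of fabric formation in the mélange matrix.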

Constraining the effects of dynamic topography on the development of
Late Cretaceous Cordilleran foreland basin, western United States

Zhiyang Li; Jennifer Aschoff

Abstract:
Dynamic topography refers to the vertical deflection (i.e., uplift and
subsidence) of the Earth’s surface generated in response to mantle flow.
Although dynamic subsidence has been increasingly invoked to explain the
subsidence and migration of depocenters in the Late Cretaceous North
American Cordilleran foreland basin (CFB), it remains a challenging task to
discriminate the effects of dynamic mantle processes from other subsidence
mechanisms, and the spatial and temporal scales of dynamic topography are
not well known. To unravel the relationship between sedimentary systems,
accommodation, and subsidence mechanisms of the CFB through time and space,
a high-resolution chronostratigraphic framework was developed for the Upper
Cretaceous strata based on a dense data set integrating >600 well logs
from multiple basins/regions in Wyoming, Utah, Colorado, and New Mexico,
USA. The newly developed stratigraphic framework divides the Upper
Cretaceous strata into four chronostratigraphic packages separated by
chronostratigraphic surfaces that can be correlated regionally and
constrained by ammonite biozones. Regional isopach patterns and shoreline
trends constructed for successive time intervals suggest that dynamic
subsidence influenced accommodation creation in the CFB starting from ca.
85 Ma, and this wave of subsidence increasingly affected the CFB by ca. 80
Ma as subsidence migrated from the southwest to northeast. During 100−75
Ma, the depocenter migrated from central Utah (dominantly flexural
subsidence) to north-central Colorado (dominantly dynamic subsidence).
Subsidence within the CFB during 75−66 Ma was controlled by the combined
effects of flexural subsidence induced by local Laramide uplifts and
dynamic subsidence. Results from this study provide new constraints on the
spatio-temporal footprint and migration of large-scale (>400 km × 400
km) dynamic topography at an average rate ranging from ∼120 to 60 km/m.y.
in the CFB through the Late Cretaceous. The wavelength and location of
dynamic topography (subsidence and uplift) generated in response to the
subduction of the conjugate Shatsky Rise highly varied through both space
and time, probably depending on the evolution of the oceanic plateau (e.g.,
changes in its location, subduction angle and depth, and buoyancy).
Careful, high-resolution reconstruction of regional stratigraphic
frameworks using three-dimensional data sets is critical to constrain the
influence of dynamic topography. The highly transitory effects of dynamic
topography need to be incorporated into future foreland basin models to
better reconstruct and predict the formation of foreland basins that may
have formed under the combined influence of upper crustal flexural loading
and dynamic subcrustal loading associated with large-scale mantle flows.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35838.1/598220/Constraining-the-effects-of-dynamic-topography-on

Mid-Cretaceous thick carbonate accumulation in Northern Lhasa (Tibet):
eustatic vs. tectonic control?

Yiwei Xu; Xiumian Hu; Eduardo Garzanti; Marcelle BouDagher-Fadel; Gaoyuan
Sun ...

Abstract:
Widespread accumulation of thick carbonates is not typical of orogenic
settings. During the mid-Cretaceous, near the Bangong suture in the
northern Lhasa terrane, the shallow-marine carbonates of the Langshan
Formation, reaching a thickness up to ∼1 km, accumulated in an
epicontinental seaway over a modern area of 132 × 10³ km², about half of the Arabian/Persian Gulf. The origin of
basin-wide carbonate deposits located close to a newly formed orogenic belt
is not well understood, partly because of the scarcity of paleogeographic
studies on the evolution of the northern Lhasa. Based on a detailed
sedimentological and stratigraphic investigation, three stages in the
mid-Cretaceous paleogeographic evolution of northern Lhasa were defined:
(1) remnant clastic sea with deposition of Duoni/Duba formations (Early to
early Late Aptian, ca. 125−116 Ma); (2) expanding carbonate seaway of
Langshan Formation (latest Aptian−earliest Cenomanian, ca. 116−99 Ma); and
(3) closure of the carbonate seaway represented by the Daxiong/Jingzhushan
formations (Early Cenomanian to Turonian, ca. 99−92 Ma). Combined with data
on tectonic subsidence and eustatic curves, we emphasized the largely
eustatic control on the paleogeographic evolution of the northern Lhasa
during the latest Aptian−earliest Cenomanian when the Langshan carbonates
accumulated, modulated by long-term slow tectonic subsidence and high
carbonate productivity.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35930.1/598221/Mid-Cretaceous-thick-carbonate-accumulation-in

Early and middle Miocene ice sheet dynamics in the Ross Sea: Results
from integrated core-log-seismic interpretation

Lara F. Pérez; Laura De Santis; Robert M. McKay; Robert D. Larter; Jeanine
Ash ...

Abstract:
Oscillations in ice sheet extent during early and middle Miocene are
intermittently preserved in the sedimentary record from the Antarctic
continental shelf, with widespread erosion occurring during major ice sheet
advances, and open marine deposition during times of ice sheet retreat.
Data from seismic reflection surveys and drill sites from Deep Sea Drilling
Project Leg 28 and International Ocean Discovery Program Expedition 374,
located across the present-day middle continental shelf of the central Ross
Sea (Antarctica), indicate the presence of expanded early to middle Miocene
sedimentary sections. These include the Miocene climate optimum (MCO ca.
17−14.6 Ma) and the middle Miocene climate transition (MMCT ca. 14.6−13.9
Ma). Here, we correlate drill core records, wireline logs and reflection
seismic data to elucidate the depositional architecture of the continental
shelf and reconstruct the evolution and variability of dynamic ice sheets
in the Ross Sea during the Miocene. Drill-site data are used to constrain
seismic isopach maps that document the evolution of different ice sheets
and ice caps which influenced sedimentary processes in the Ross Sea through
the early to middle Miocene. In the early Miocene, periods of localized
advance of the ice margin are revealed by the formation of thick sediment
wedges prograding into the basins. At this time, morainal bank complexes
are distinguished along the basin margins suggesting sediment supply
derived from marine-terminating glaciers. During the MCO,
biosiliceous-bearing sediments are regionally mapped within the depocenters
of the major sedimentary basin across the Ross Sea, indicative of
widespread open marine deposition with reduced glacimarine influence. At
the MMCT, a distinct erosive surface is interpreted as representing
large-scale marine-based ice sheet advance over most of the Ross Sea
paleo-continental shelf. The regional mapping of the seismic stratigraphic
architecture and its correlation to drilling data indicate a regional
transition through the Miocene from growth of ice caps and inland ice
sheets with marine-terminating margins, to widespread marine-based ice
sheets extending across the outer continental shelf in the Ross Sea.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35814.1/598222/Early-and-middle-Miocene-ice-sheet-dynamics-in-the

Late Quaternary aggradation and incision in the headwaters of the
Yangtze River, eastern Tibetan Plateau, China

Yang Yu; Xianyan Wang; Shuangwen Yi; Xiaodong Miao; Jef Vandenberghe ...

Abstract:
River aggradation and incision at different spatial-temporal scales are
governed by tectonics, climate change, and surface processes, all of which
adjust the ratio of sediment load to transport capacity of a channel. However,
how a river responds to differential tectonic and extreme climate events
in a catchment is still poorly understood. Here, we address this issue by
reconstructing the distribution, ages, and sedimentary process of fluvial
terraces in a tectonically active area and monsoonal environment in the
headwaters of the Yangtze River in the eastern Tibetan Plateau, China.
Field observations, topographic analyses, and optically stimulated
luminescence dating reveal a remarkable fluvial aggradation, followed by
terrace formations at elevations of 55−62 m (T7), 42−46 m (T6), 38 m (T5),
22−36 m (T4), 18 m (T3), 12−16 m (T2), and 2−6 m (T1) above the present
floodplain. Gravelly fluvial accumulation more than 62 m thick has been
dated prior to 24−19 ka and is regarded as a response to the cold climate of
the Last Glacial Maximum. Subsequently, the strong monsoon
precipitation contributed to cycles of rapid incision and lateral erosion,
expressed as cut-in-fill terraces. The correlation of terraces suggests
that specific tectonic activity controls the spatial scale and geomorphic
characteristics of the terraces, while climate fluctuations determine the
valley filling, river incision and terrace formation. Debris and colluvial
sediments are frequently interbedded in fluvial sediment sequences,
illustrating the episodic, short-timescale blocking of the channel ca. 20
ka. This indicates the potential impact of extreme events on geomorphic
evolution in rugged terrain.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35983.1/596999/Late-Quaternary-aggradation-and-incision-in-the

Late Neoproterozoic to early Paleozoic paleogeographic position of the
Yangtze block and the change of tectonic setting in its northwestern
margin: Evidence from detrital zircon U-Pb ages and Hf isotopes of
sedimentary rocks

Bingshuang Zhao; Xiaoping Long; Jin Luo; Yunpeng Dong; Caiyun Lan ...

Abstract:
The crustal evolution of the Yangtze block and its tectonic affinity to
other continents of Rodinia and subsequent Gondwana have not been well
constrained. Here, we present new U-Pb ages and Hf isotopes of detrital
zircons from the late Neoproterozoic to early Paleozoic sedimentary rocks
in the northwestern margin of the Yangtze block to provide critical
constraints on their provenance and tectonic settings. The detrital zircons
of two late Neoproterozoic samples have a small range of ages (0.87−0.67
Ga) with a dominant age peak at 0.73 Ga, which were likely derived from the
Hannan-Micangshan arc in the northwestern margin of the Yangtze block. In
addition, the cumulative distribution curves from the difference between
the depositional age and the crystalline age (CA−DA) together with the
mostly positive εHf(t) values of these zircon crystals
(−6.8 to +10.7, ∼90% zircon grains with εHf[t]
> 0) suggest these samples were deposited in a convergent setting during
the late Neoproterozoic. In contrast, the Cambrian−Silurian sediments share
a similar detrital zircon age spectrum that is dominated by Grenvillian
ages (1.11−0.72 Ga), with minor late Paleoproterozoic (ca. 2.31−1.71 Ga),
Mesoarchean to Neoarchean (3.16−2.69 Ga), and latest Archean to early
Paleoproterozoic (2.57−2.38 Ga) populations, suggesting a significant
change in the sedimentary provenance and tectonic setting from a convergent
setting after the breakup of Rodinia to an extensional setting during the
assembly of Gondwana. However, the presence of abundant Grenvillian and
Neoarchean ages, along with their moderately to highly rounded shapes,
indicates a possible sedimentary provenance from exotic continental
terrane(s). Considering the potential source areas around the Yangtze block
when it was a part of Rodinia or Gondwana, we suggest that the source of
these early Paleozoic sediments had typical Gondwana affinities, such as
the Himalaya, north India, and Tarim, which is also supported by their
stratigraphic similarity, newly published paleomagnetic data, and
tectono-thermal events in the northern fragments of Gondwana. This implies
that after prolonged subduction in the Neoproterozoic, the northwestern
margin of the Yangtze block began to be incorporated into the assembly of
Gondwana and then to receive sediments from the northern margin of Gondwanaland
in a passive continental margin setting.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35980.1/597000/Late-Neoproterozoic-to-early-Paleozoic

Constraining the duration of the Tarim flood basalts (northwestern
China): CA-TIMS zircon U-Pb dating of tuffs

Yu-Ting Zhong; Zhen-Yu Luo; Roland Mundil; Xun Wei; Hai-Quan Liu ...

Abstract:
The Early Permian Tarim large igneous province (LIP) in northwestern China
comprises voluminous basaltic lava flows, as well as ultramafic and silicic
intrusions. The age and duration of the Tarim LIP remains unclear, and thus
the rate of magma production and models of potential environmental effects
are uncertain. Here we present high-precision chemical abrasion−isotope
dilution−thermal ionization mass spectrometry zircon U-Pb ages for three
newly discovered tuff layers interlayered with lava flows in the
Kupukuziman and Kaipaizileike formations in the Keping area (Xinjiang,
northwest China). The volcanism of the Kupukuziman Formation is constrained
to a short duration from 289.77 ± 0.95 to 289.41 ± 0.52 Ma. An age for the
overlying Kaipaizileike Formation is 284.27 ± 0.39 Ma, bracketing the
duration of the entire eruptive phase of the Tarim flood basalts at ∼5.5
m.y. The low eruption rate and relatively long duration of magmatism is
consistent with a plume incubation model for the Tarim LIP.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B36053.1/597001/Constraining-the-duration-of-the-Tarim-flood

Late Pleistocene−Holocene flood history, flood-sediment provenance and
human imprints from the upper Indus River catchment, Ladakh Himalaya

Choudhurimayum Pankaj Sharma; Poonam Chahal; Anil Kumar; Saurabh Singhal;
YP Sundriyal ...

Abstract:
The Indus River, originating from Manasarovar Lake in Tibet, runs along the
Indus Tsangpo suture zone in Ladakh which separates the Tethyan Himalaya in
the south from the Karakoram zone to the north. Due to the barriers created
by the Pir-Panjal ranges and the High Himalaya, Ladakh is located in a rain
shadow zone of the Indian summer monsoon (ISM) making it a high-altitude
desert. Occasional catastrophic hydrological events are known to endanger
lives and properties of people residing there. Evidence of such events in
the recent geologic past that are larger in magnitude than modern
occurrences is preserved along the channels. Detailed investigation of
these archives is imperative to expand our knowledge of extreme floods that
rarely occur on the human timescale. Understanding the frequency,
distribution, and forcing mechanisms of past extreme floods of this region
is crucial to examine whether the causal agents are regional, global, or
both on long timescales. We studied the Holocene extreme flood history of
the Upper Indus catchment in Ladakh using slackwater deposits (SWDs)
preserved along the Indus and Zanskar Rivers. SWDs here are composed of
stacks of sand-silt couplets deposited rapidly during large flooding events
in areas where a sharp reduction of flow velocity is caused by local
geomorphic conditions. Each couplet represents a flood, the age of which is
constrained using optically stimulated luminescence for sand, and
accelerator mass spectrometry and liquid scintillation counting of 14C for charcoal specks from hearths. The study suggests
occurrence of large floods during phases of strengthened ISM when the
monsoon penetrated into arid Ladakh. Comparison with flood records of
rivers draining other regions of the Himalaya and those influenced by the
East Asian summer monsoon (EASM) indicates asynchronicity with the Western
Himalaya, confirming the existing anti-phase relationship of the ISM and EASM
in the Holocene. Detrital zircon provenance analysis
indicates that sediment transportation along the Zanskar River is more
efficient than the main Indus channel during extreme floods. Post−Last
Glacial Maximum human migration into the arid upper Indus catchment, during
warm and wet climatic conditions, is revealed by hearths found within
the SWDs.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35976.1/597002/Late-Pleistocene-Holocene-flood-history-flood

New insights into the formation and emplacement of impact melt rocks
within the Chicxulub impact structure, following the 2016 IODP-ICDP
Expedition 364

Sietze J. de Graaff; Pim Kaskes; Thomas Déhais; Steven Goderis; Vinciane
Debaille ...

Abstract:
This study presents petrographic and geochemical characterization of 46
pre-impact rocks and 32 impactites containing and/or representing impact
melt rock from the peak ring of the Chicxulub impact structure (Yucatán,
Mexico). The aims were both to investigate the components that potentially
contributed to the impact melt (i.e., the pre-impact lithologies) and to
better elucidate impact melt rock emplacement at Chicxulub. The impactites
presented here are subdivided into two sample groups: the lower impact melt
rock−bearing unit, which intrudes the peak ring at different intervals, and
the upper impact melt rock unit, which overlies the peak ring. The
geochemical characterization of five identified pre-impact lithologies
(i.e., granitoid, dolerite, dacite, felsite, and limestone) was able to
constrain the bulk geochemical composition of both impactite units. These
pre-impact lithologies thus likely represent the main constituent
lithologies that were involved in the formation of impact melt rock. In
general, the composition of both impactite units can be explained by mixing
of the primarily felsic and mafic lithologies, but with varying degrees of
carbonate dilution. It is assumed that the two units were initially part of
the same impact-produced melt, but discrete processes separated them during
crater formation. The lower impact melt rock−bearing unit is interpreted to
represent impact melt rock injected into the crystalline basement during
the compression/excavation stage of cratering. These impact melt rock
layers acted as delamination surfaces within the crystalline basement,
accommodating its displacement during peak ring formation. This movement
strongly comminuted the impact melt rock layers present in the peak ring
structure. The composition of the upper impact melt rock unit was
contingent on the entrainment of carbonate components and is interpreted to
have stayed at the surface during crater development. Its formation was not
finalized until the modification stage, when carbonate material would have
reentered the crater.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35795.1/597003/New-insights-into-the-formation-and-emplacement-of

Isotopic spatial-temporal evolution of magmatic rocks in the Gangdese
belt: Implications for the origin of Miocene post-collisional giant
porphyry deposits in southern Tibet

Chen-Hao Luo; Rui Wang; Roberto F. Weinberg; Zengqian Hou

Abstract:
Crustal growth is commonly associated with porphyry deposit formation
whether in continental arcs or collisional orogens. The Miocene high-K
calc-alkaline granitoids in the Gangdese belt in southern Tibet, associated
with porphyry copper deposits, are derived from the juvenile lower crust
with input from lithospheric mantle trachytic magmas, and are characterized
by adakitic affinity with high-Sr/Y and La/Yb ratios as well as high Mg#
and more evolved isotopic ratios. Researchers have argued that this
metal-fertilized lower crust was mainly formed by earlier subduction-related
modification. The issue is that the arc is composed of three stages of
magmatism including Jurassic, Cretaceous, and Paleocene−Eocene, with peaks
of activity at 200 Ma, 90 Ma, and ca. 50 Ma, respectively. All three stages
of arc growth are essentially similar in terms of their whole-rock
geochemistry and Sr-Nd-Hf isotopic compositions, making it difficult to
distinguish Miocene magma sources. This study is based on ∼430 bulk-rock
Sr-Nd isotope analyses, ∼270 zircon Lu-Hf isotope analyses, and >800
whole-rock geochemical analyses from a 900-km-long section of the Gangdese
belt. We found large-scale variations along the length of the arc, where the
Nd-Hf isotopic ratios of the Jurassic, Cretaceous, and Paleocene−Eocene arc
rocks change differently from east to west. A significant feature is that
the spatial distribution of Nd-Hf isotopic values of the Paleocene−Eocene
arc magmas and the Miocene granitoids, including metallogenic ones, are
“bell-shaped” from east to west, with a peak of εNd(t) and εHf(t) at ∼91°E. In contrast, the Jurassic and Cretaceous arc
magmas have different isotopic distribution patterns as a function of
longitude. The isotopic spatial similarity of the Paleocene−Eocene and
Miocene suites suggests that the lower crust source of the metallogenic
Miocene magmas is composed dominantly of the Paleocene−Eocene arc rocks.
This is further supported by abundant inherited zircons dominated by
Paleocene−Eocene ages in the Miocene rocks. Another important discovery
from the large data set is that the Miocene magmatic rocks have higher Mg# and more evolved Sr-Nd-Hf isotopic compositions than all
preceding magmatic arcs. These characteristics indicate that the
involvement of another different source was required to form the Miocene
magmatic rocks. Hybridization of the isotopically unevolved primary magmas
with isotopically evolved, lithospheric mantle-derived trachytic magmas is
consistent with the geochemical, xenolith, and seismic evidence and is
essential for the Miocene crustal growth and porphyry deposit formation. We
recognize that crustal growth in the collisional orogen is a two-step
process: the first step is the subduction stage, dominated by typical magmatic
arc processes that fertilize the lower crust; the second is the
collisional stage, dominated by partial melting of the subduction-modified
lower crust and mixing with a lithospheric mantle-derived melt at the
source depth.

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B36018.1/596769/Isotopic-spatial-temporal-evolution-of-magmatic

Oxygen isotope (δ18O) trends measured from Ordovician
conodont apatite using secondary ion mass spectrometry (SIMS):
Implications for paleo-thermometry studies

Cole T. Edwards; Clive M. Jones; Page C. Quinton; David A. Fike

Abstract:
The oxygen isotopic compositions (δ18O) of minimally altered phosphate
minerals and fossils, such as conodont elements, are used as a proxy for
past ocean temperature. Phosphate is thermally stable under low to moderate
burial conditions and is ideal for reconstructing seawater temperatures
because the P-O bonds are highly resistant to isotopic exchange during
diagenesis. Traditional bulk methods used to measure conodont δ18O include
multiple conodont elements, which can reflect different environments and
potentially yield an aggregate δ18O value derived from a mixture of
different water masses. In situ spot analyses of individual elements using
micro-analytical techniques, such as secondary ion mass spectrometry
(SIMS), can address these issues. Here we present 108 new δ18O values using
SIMS from conodont apatite collected from four Lower to Upper Ordovician
stratigraphic successions from North America (Nevada, Oklahoma, and the
Cincinnati Arch region of Kentucky and Indiana, USA). The available
elements measured had a range of thermal alteration regimes that are
categorized based on their conodont alteration index (CAI) as either low
(CAI = 1−2) or high (CAI = 3−4). Though individual spot analyses of the
same element yield δ18O values that vary by several per mil (‰), most form
a normal distribution around a mean value. Isotopic variability of
individual spots can be minimized by avoiding surficial heterogeneities
such as cracks, pits, and element edges, and precision can be
improved with multiple (≥4) spot analyses of the same element. Mean δ18O
values from multiple conodonts from the same bed range between 0.0 and 4.3‰
(median 1.0‰), regardless of low or high CAI values. Oxygen isotopic values
measured using SIMS in this study reproduce values similar to published
trends, namely, δ18O values increase during the Early−Middle Ordovician and
plateau by the mid Darriwilian (late Middle Ordovician). Twenty-two of the
measured conodonts were from ten sampled beds that had been previously
measured using bulk analysis. SIMS-based δ18O values from these samples are
more positive by an average of 1.7‰ compared to bulk values, consistent
with observations by others who attribute the shift to carbonate- and
hydroxyl-related SIMS matrix effects. This offset has implications for
paleo-temperature model estimates, in which a 4 °C temperature
change corresponds to a 1‰ shift in δ18O. Although this uncertainty
precludes precise paleo-temperature reconstructions by SIMS, it is valuable
for identifying spatial and stratigraphic trends in temperature that might
not have been previously possible with bulk approaches.
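The quoted relation (a 1‰ shift in δ18O corresponding to a 4 °C temperature change) makes the size of the reported SIMS-vs-bulk offset easy to translate into an apparent temperature bias. A minimal sketch; the linear conversion and function name are our assumptions for illustration, not the authors' calibration:

```python
# Illustrative only: converts a d18O offset into an apparent temperature
# bias using the ~4 degC per 1 permil relation cited in the abstract.
# The linear form and function name are assumptions for this sketch.

def d18o_offset_to_temp_bias(offset_permil, deg_c_per_permil=4.0):
    """Apparent temperature bias implied by a d18O offset (per mil)."""
    return offset_permil * deg_c_per_permil

# The ~1.7 permil average SIMS-vs-bulk offset reported in the study would,
# if left uncorrected, map to roughly a 6.8 degC apparent temperature shift.
bias = d18o_offset_to_temp_bias(1.7)
print(f"{bias:.1f} degC")  # -> 6.8 degC
```

By this relation, the uncorrected matrix-effect offset would imply a temperature shift of several degrees, which is why the study treats SIMS values as useful for relative trends rather than precise paleo-temperatures.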

View article:

https://pubs.geoscienceworld.org/gsa/gsabulletin/article-abstract/doi/10.1130/B35891.1/596655/Oxygen-isotope-18O-trends-measured-from-Ordovician

Credit: 
Geological Society of America

How retroviruses become infectious

video: The proteins of the virus capsid, which contains the genetic information, are much more flexible in their shape than previously thought. The small IP6 molecules (0:38) stabilize the protein hexamers (grey) and pentamers (orange).

Image: 
Marti Obr, IST Austria

Viruses are perfect molecular machines. Their only goal is to insert their genetic material into healthy cells and thus multiply. With deadly precision, they can thereby cause diseases that cost millions of lives and keep the world on edge. One example of such a virus, although currently less discussed, is HIV, which causes the ongoing global AIDS epidemic. Despite the progress made in recent years, 690,000 people died in 2019 alone as a result of infection with the virus. "If you want to know the enemy, you have to know all its friends," says Martin Obr, postdoc in the Schur group at IST Austria. Together with his colleagues, he therefore studies a virus belonging to the same family as HIV: the Rous sarcoma virus, which causes cancer in poultry. With its help, he has now gained new insights into the important role a small molecule plays in the assembly of this type of virus.

Protecting the virus blueprint

In their study, published in the journal Nature Communications, the team, together with collaborators at Cornell University and the University of Missouri, focused on the late phase of retrovirus replication. "It is a long way from an infected cell to the mature virus particle that can infect another cell," explains first author Martin Obr. A new particle buds from the cell in an immature, non-infectious state. It then forms a protective shell, a so-called capsid, around its genetic information and becomes infectious. This protective shell consists of a protein that is organized into hexamers and a few pentamers. The team discovered that a small molecule called IP6 plays a major role in stabilizing the protein shell within the Rous sarcoma virus.

"If the protective shell is not stable, the genetic information of the virus could be released prematurely and will be destroyed, but if it's too stable the genome can't exit at all and, therefore, becomes useless," says Assistant Professor Florian Schur. In a previous study, he and his colleagues were able to show IP6 is important in the assembly of HIV. Now, the team proved it to be as important in other retroviruses showing just how essential the small molecule is in the virus life cycle. "When building a car, you have all these big metal parts, like the hood, the roof and the doors - the screws are connecting everything. In our case, the big parts are the capsid proteins and the IP6 molecules are the screws," says Obr.

Unexpected flexibility

Further developing cryo-electron tomography, a technique that allows scientists to look at extremely small samples in their natural state, the team was able to see how variable the shapes formed by capsid proteins are. "Now we ask ourselves: Why does the virus change the shape of its capsid? What is it adapting to?" says postdoc Martin Obr. Different capsid shapes within the same type of virus could point to differences in the infectivity of virus particles. "Whatever happens, happens for a reason but there is no clear answer yet," says Florian Schur. Further developing the technology to get to the bottom of these highly optimized pathogens remains a challenging and fascinating task for the scientists.

Credit: 
Institute of Science and Technology Austria

Data from 45 million mobile users further shows poorer people were less able to stay at home under COVID rules

People living in deprived, less affluent neighborhoods spent less time indoors at home during lockdown, according to a study that tracked data from millions of mobile phone users across the United States.

The study, published in the journal Annals of the American Association of Geographers, adds to growing evidence that low earners are less likely to comply with stay-at-home orders, either because they simply can't afford to, or because they work in professions in which working from home is not possible.

The finding is concerning given that vulnerable groups are already at greater risk from COVID.

In March 2020, the US, like many countries around the world, entered a state of lockdown, with its citizens advised to stay at home to curb the spread of the coronavirus. Non-essential businesses closed, and people were asked to work from home.

To investigate levels of compliance with these orders, researchers analyzed anonymous tracking data from 45 million mobile phone users across the United States. The authors calculated how much time residents of New York, Los Angeles, Chicago, Dallas, Houston, Washington D.C., Miami, Philadelphia, Atlanta, Phoenix, Boston, and San Francisco spent at home in the period from 1 January to 31 August 2020.

They then compared this with demographic information about the neighborhoods in which people lived, collected through the American Community Survey (ACS), a demographics survey program conducted by the U.S. Census Bureau.

The findings revealed that people living in areas with a higher percentage of wealthy residents and a higher average household income tended to spend more time at home under the stay-at-home orders than people living in poorer communities. This finding held across all of the cities the researchers examined.

The study also showed that education was correlated with compliance, as people who lived in neighborhoods with a high percentage of postgraduates tended to spend longer at home.
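The comparison at the heart of the study (neighborhood affluence versus time spent at home) can be sketched with toy data. The variable names and numbers below are invented for illustration and are not the study's data, which came from 45 million anonymized mobility traces plus ACS demographics:

```python
# Hypothetical sketch: neighborhood-level median household income vs.
# average share of time spent at home. All values are invented toy data.
from math import sqrt

median_income_usd = [28_000, 41_000, 55_000, 72_000, 98_000, 130_000]
pct_time_at_home = [54.0, 58.5, 61.0, 66.5, 70.0, 74.5]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A positive value here corresponds to the study's finding: richer
# neighborhoods spent a larger share of time at home during lockdown.
r = pearson(median_income_usd, pct_time_at_home)
print(f"Pearson r = {r:.2f}")  # strongly positive for this toy data
```

The real analysis controlled for far more than a single correlation, but the sign of this coefficient is the headline result.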

"Our study reveals the luxury nature of stay-at-home orders, which lower-income groups cannot afford to comply with," says author Xiao Huang, Assistant Professor of Geosciences at the University of Arkansas.

"This disparity exacerbates long-standing social inequality issues present in the United States, potentially causing unequal exposure to a virus that disproportionately affects vulnerable populations."

In the UK, too, it has been well-documented that those in more deprived and ethnically diverse communities are at greater risk from the virus.

Data from the Office for National Statistics (ONS) shows that those living in the most deprived neighbourhoods have been more than twice as likely to die from COVID as those in the least deprived. One of the reasons for this is thought to be that low-income workers typically have jobs that cannot be done from home, placing them at greater risk of contracting COVID-19.

They are also more likely to have insecure 'zero hours' contracts, making them worry that if they do not go into work they might not have a job to return to.

Previous research by SAGE has also shown that people who earn less than £20,000, or who have savings of less than £100 are three times less likely to self-isolate.

The authors of the study argue that more needs to be done to protect vulnerable groups from the effects of COVID.

"We must confront systemic social inequality and call for a high-priority assessment of the long-term impact of COVID-19 on geographically and socially disadvantaged groups," says Xiao Huang.

Credit: 
Taylor & Francis Group

Blood test detects childhood tumors based on their epigenetic profiles

image: Extracting tumor epigenetics from blood

Image: 
Tatjana Hirschmugl

A new study exploits the characteristic epigenetic signatures of childhood tumors to detect, classify and monitor the disease. The scientists analyzed short fragments of tumor DNA that are circulating in the blood. These "liquid biopsy" analyses exploit the unique epigenetic landscape of bone tumors and do not depend on any genetic alterations, which are rare in childhood cancers. This approach promises to improve personalized diagnostics and, possibly, future therapies of childhood tumors such as Ewing sarcoma. The study has been published in Nature Communications.

A study led by scientists from St. Anna Children's Cancer Research Institute (St. Anna CCRI) in collaboration with CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences provides an innovative method for "liquid biopsy" analysis of childhood tumors. This method exploits the fragmentation patterns of the small DNA fragments that tumors leak into the blood stream, which reflect the unique epigenetic signature of many childhood cancers. Focusing on Ewing sarcoma, a bone tumor of children and young adults with unmet clinical need, the team led by Eleni Tomazou, PhD, St. Anna CCRI, demonstrates the method's utility for tumor classification and monitoring, which permits close surveillance of cancer therapy without highly invasive tumor biopsies.

In tumors, cancer cells constantly divide, with some of the cancer cells dying in the process. These cells often release their DNA into the blood stream, where it circulates and can be analyzed using genomic methods such as high-throughput DNA sequencing. Such "liquid biopsy" analyses provide a minimally invasive alternative to conventional tumor biopsies, which often require surgery, holding great promise for personalized therapies. For example, it becomes possible to check frequently for molecular changes in the tumor. However, the use of liquid biopsy for childhood cancers has so far been hampered by the fact that many childhood tumors have few genetic alterations that are detectable in DNA isolated from blood plasma.

Exploiting tumor-specific epigenetic patterns

Cell-free DNA from dying tumor cells circulates in the blood in the form of small fragments. Their size is neither random nor determined solely by the DNA sequence. Rather, it reflects how the DNA is packaged inside the cancer cells, and it is influenced by the chromatin (i.e., the complex of DNA, protein, and RNA) structure and epigenetic profiles of these cells. This is very relevant because epigenetic patterns - sometimes referred to as the "second code" of the genome - are characteristically different for different cell types in the human body. Epigenetic mechanisms lead to changes in gene function that are not based on changes in the DNA sequence but are passed on to daughter cells. The analysis of cell-free DNA fragmentation patterns provides a unique opportunity to learn about the epigenetic regulation inside the tumor without having to surgically remove tumor cells or even know whether and where in the body a tumor exists.

"We previously identified unique epigenetic signatures of Ewing sarcoma. We reasoned that these characteristic epigenetic signatures should be preserved in the fragmentation patterns of tumor-derived DNA circulating in the blood. This would provide us with a much-needed marker for early diagnosis and tumor classification using the liquid biopsy concept", explains Dr. Tomazou, Principal Investigator of the Epigenome-based precision medicine group at St. Anna CCRI.

Machine learning increases sensitivity

The new study benchmarks various metrics for analyzing cell-free DNA fragmentation, and it introduces the LIQUORICE algorithm for detecting circulating tumor DNA based on cancer-specific chromatin signatures. The scientists used machine-learning classifiers to distinguish between patients with cancer and healthy individuals, and between different types of pediatric sarcomas. "By feeding these machine learning algorithms with our extensive whole genome sequencing data of tumor-derived DNA in the blood stream, the analysis becomes highly sensitive and in many instances outperforms conventional genetic analyses", says Dr. Tomazou.
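As a purely illustrative sketch of the classification idea (not the LIQUORICE algorithm itself, which works on genome-wide coverage and fragmentation signatures), one can imagine a single invented feature, mean cell-free DNA fragment length, and a nearest-centroid rule; all numbers below are made up:

```python
# Toy illustration only -- NOT the LIQUORICE algorithm. We fake one
# feature (mean cfDNA fragment length in bp, which tends to be shorter
# for tumor-derived DNA) and classify by the nearest class centroid.
from statistics import mean

healthy_train = [167.0, 166.5, 168.2, 167.8]  # invented training values
cancer_train = [152.1, 154.0, 150.7, 153.3]

centroids = {"healthy": mean(healthy_train), "cancer": mean(cancer_train)}

def classify(sample_mean_len: float) -> str:
    # Assign the sample to the class whose centroid is closest.
    return min(centroids, key=lambda c: abs(centroids[c] - sample_mean_len))

print(classify(151.9))  # -> cancer
print(classify(166.9))  # -> healthy
```

The actual classifiers in the study are trained on whole-genome sequencing data and many features, which is what gives them their reported sensitivity.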

When asked about potential applications, she explains: "Our assay works well, and we are very excited. However, further validation will be needed before it can become part of routine clinical diagnostics." According to the scientists, their approach could be used not only for minimally invasive diagnosis but also as a prognostic marker, monitoring which patients respond to therapy. Additionally, it might serve as a predictive marker during neoadjuvant therapy (i.e., chemotherapy before surgery), potentially enabling dose adjustments according to treatment response. "Right now, most patients receive very high doses of chemotherapy, while some patients may already be cured with a less severe therapy, which would reduce their risk of developing other cancers later in life. There is a real medical need for adaptive clinical trials and personalized treatment of bone tumors in children."

Credit: 
St. Anna Children's Cancer Research Institute

When to release free and paid apps for maximal revenue

Researchers from Tulane University and University of Maryland published a new paper in the Journal of Marketing that examines the dynamic interplay between free and paid versions of an app over its lifetime and suggests a possible remedy for the failure of apps.

The study is titled "Managing the Versioning Decision over an App's Lifetime" and is authored by Seoungwoo Lee, Jie Zhang, and Michel Wedel.

Is it really over for paid mobile apps? The mobile app industry is unique in that free apps are far more prevalent than paid apps in most app categories, contrary to many other product markets, where free products primarily play a supporting role to paid products. The industry has trended toward free apps over the past decade: by July 2020, 96% of apps on the Google Play platform were free. However, 63% of free apps had fewer than a thousand downloads per month, and 60% of app publishers generated less than $500 per month in 2015.

Are there ways for paid apps to make free apps more profitable? And how can app publishers improve profitability by strategically deploying or eliminating the paid and free versions of an app over its lifetime? To answer these questions, the research team investigated app publishers' decisions to offer the free version, the paid version, or both versions of an app by considering the dynamic interplay between the free and paid versions. The findings offer valuable insights for app publishers on how to manage the versioning decision over an app's lifetime.

First, the researchers demonstrate how the free and paid versions influence each other's current demand, future demand, and in-app revenues. They find that either version's cumulative user base stimulates future demand for both versions via social influence, yet simultaneously offering both versions reduces the demand for each in the current period. Also, the presence of a paid version reduces the in-app purchase rate and active user base, and therefore the in-app purchase and advertising revenues, of a free app, whereas the presence of a free version appears to have little negative impact on the paid version. Therefore, app publishers should be mindful of the paid version's negative impact on the free version. In general, simultaneously offering both versions helps a publisher achieve cost savings via economies of scale, but it reduces revenues from each version compared to offering either version alone.
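The trade-off described above can be illustrated with a toy model. All numbers, parameter names, and the `profit` function below are invented for illustration and do not come from the study; the sketch only shows the structural logic that offering both versions cannibalizes each version's revenue while sharing costs.

```python
# Toy sketch of the versioning trade-off: offering both versions shares
# development costs but cannibalizes some of each version's revenue.
# All figures are hypothetical, not values from the study.

def profit(strategy, base_free_rev=100.0, base_paid_rev=80.0,
           cannibalization=0.25, shared_cost_saving=30.0):
    """Illustrative per-period profit for 'free', 'paid', or 'both'."""
    if strategy == "free":
        return base_free_rev
    if strategy == "paid":
        return base_paid_rev
    # Both versions: each version's revenue shrinks, but costs are shared.
    both_rev = (base_free_rev + base_paid_rev) * (1 - cannibalization)
    return both_rev + shared_cost_saving

# Pick the strategy with the highest illustrative profit.
best = max(["free", "paid", "both"], key=profit)
```

Under these made-up parameters the cost savings outweigh cannibalization, so "both" wins; with a higher cannibalization rate the ranking flips, which is the kind of category- and age-dependent variation the study documents.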

Second, analyses show that the most common optimal launch strategy is to offer the paid version first. Paid apps can generate download revenues from the first day of sales, while in-app revenues from either version rely on a sizeable user base, which takes time to build. So, publishers can rely on paid apps to generate operating capital and recoup development and launch costs much more quickly. Nonetheless, there are variations across app categories, which are related to differences in apps' abilities to monetize from different revenue sources. For example, the percentage of utility apps that should launch a paid app is particularly high because they have a lower ability to monetize the free app through in-app purchase items and advertising. In contrast, entertainment apps should mostly launch a free version because they have high availability of in-app ad networks and in-app purchase items.

Third, the optimal versioning decisions and their evolutionary patterns change as an app ages and vary by app category. The evolutionary patterns of optimal versioning decisions show that, for most apps, the relative profitability of the free version tends to increase with app age while that of the paid version tends to decline. Therefore, the profitability of simultaneously offering both versions tends to increase with app age until a certain point, after which free-only takes over as the most common optimal versioning decision; on average, this occurs about 1.5 years after launch for the (relatively more successful) apps in the data. Also, there is substantial cross-category variation in the versioning evolution patterns. For example, unlike the other categories examined, the optimal versioning decision for most utility apps in the data is to stay with the paid-only option throughout an app's lifetime.

This research reveals the dynamic interplay between free and paid versions of an app over its lifetime and suggests a possible remedy for the failure of apps. As the researchers explain, "Many apps that start out with a free version fail because they cannot generate enough revenue to sustain early-stage operations. We urge app publishers to pay close attention to the interplay between free and paid app versions and to improve the profitability of free apps by strategically deploying or eliminating their paid version counterparts over an app's lifetime."

Credit: 
American Marketing Association

Helping doctors manage COVID-19

image: Chest x-rays used in the COVID-Net study show differing infection extent and opacity in the lungs of COVID-19 patients.

Image: 
University of Waterloo

Artificial intelligence (AI) technology developed by researchers at the University of Waterloo is capable of assessing the severity of COVID-19 cases with a promising degree of accuracy.

The study, part of the COVID-Net open-source initiative launched more than a year ago, involved researchers from Waterloo and spin-off startup DarwinAI, as well as radiologists at the Stony Brook School of Medicine and Montefiore Medical Center in New York.

Deep-learning AI was trained to analyze the extent and opacity of infection in the lungs of COVID-19 patients based on chest x-rays. Its scores were then compared to assessments of the same x-rays by expert radiologists.

For both extent and opacity, important indicators of the severity of infections, predictions made by the AI software were in good alignment with scores provided by the human experts.
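One common way to quantify this kind of AI-expert alignment is a correlation coefficient between the two sets of scores. The sketch below is illustrative only: the function name and the example scores are invented, not taken from the COVID-Net study, which reported its own agreement statistics.

```python
# Hypothetical sketch: measuring agreement between AI-predicted and
# radiologist-assigned severity scores via Pearson correlation.
# The scores below are invented for illustration.

def score_agreement(ai_scores, expert_scores):
    """Pearson correlation between AI and expert severity scores."""
    n = len(ai_scores)
    mean_a = sum(ai_scores) / n
    mean_e = sum(expert_scores) / n
    cov = sum((a - mean_a) * (e - mean_e)
              for a, e in zip(ai_scores, expert_scores))
    var_a = sum((a - mean_a) ** 2 for a in ai_scores)
    var_e = sum((e - mean_e) ** 2 for e in expert_scores)
    return cov / (var_a * var_e) ** 0.5

# Illustrative geographic-extent scores for six chest x-rays
ai = [2.1, 5.8, 7.2, 1.0, 4.4, 6.5]
experts = [2.0, 6.0, 7.0, 1.5, 4.0, 6.8]
r = score_agreement(ai, experts)  # close to 1.0 means strong agreement
```

A correlation near 1.0 would indicate the model ranks case severity much as the radiologists do; the study evaluated extent and opacity scores separately in this spirit.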

Alexander Wong, a systems design engineering professor and co-founder of DarwinAI, said the technology could give doctors an important tool to help them manage cases.

"Assessing the severity of a patient with COVID-19 is a critical step in the clinical workflow for determining the best course of action for treatment and care, be it admitting the patient to ICU, giving a patient oxygen therapy, or putting a patient on a mechanical ventilator," Wong said.

"The promising results in this study show that artificial intelligence has a strong potential to be an effective tool for supporting frontline healthcare workers in their decisions and improving clinical efficiency, which is especially important given how much stress the ongoing pandemic has placed on healthcare systems around the world."

A paper on the research, "Towards computer-aided severity assessment via deep neural networks for geographic and opacity extent scoring of SARS-CoV-2 chest X-rays," appears in the journal Scientific Reports.

Credit: 
University of Waterloo

Next-gen electric vehicle batteries: These are the questions we still need to answer

The next generation of electric vehicle batteries, with greater range and improved safety, could be emerging in the form of lithium metal, solid-state technology.

But key questions about this promising power supply need to be answered before it can make the jump from the laboratory to manufacturing facilities, according to University of Michigan researchers. And with efforts to bring electric vehicles to a larger part of the population, they say, those questions need answering quickly.

Jeff Sakamoto and Neil Dasgupta, U-M associate professors of mechanical engineering, have been leading researchers on lithium metal, solid-state batteries over the past decade. In a perspective piece in the journal Joule, Sakamoto and Dasgupta lay out the main questions facing the technology. To develop the questions, they worked in close collaboration with leaders in the auto industry.

Major automakers are going all-in on electric vehicles this year, with many announcing plans to phase out internal-combustion engine cars in the coming years. Lithium-ion batteries enabled the earliest EVs and they remain the most common power supply for the latest models coming off assembly lines.

Those lithium-ion batteries are approaching their peak performance in terms of the EV range on a single charge. And they come with the need for a heavy and bulky battery management system--without which there is risk of onboard fires. By utilizing lithium metal for the battery anode along with a ceramic for the electrolyte, researchers have demonstrated the potential for doubling EV range for the same size battery while dramatically reducing the potential for fires.

"Tremendous progress in advancing lithium metal solid-state batteries was made over the last decade," Sakamoto said. "However, several challenges remain on the path to commercializing the technology, especially for EVs."

Questions that need to be answered to capitalize on that potential include:

How can we produce ceramics, which are brittle, in the massive, paper-thin sheets lithium metal batteries require?

Does lithium metal batteries' use of ceramics, which must be heated to more than 2,000 degrees Fahrenheit during manufacturing, offset their environmental benefits in electric vehicles?

Can both the ceramics and the process used to manufacture them be adapted to account for defects, such as cracking, in a way that does not force battery manufacturers and automakers to drastically revamp their operations?

A lithium metal solid-state battery would not require the heavy and bulky battery management system that lithium-ion batteries need to maintain durability and reduce the risk of fire. How will the reduction in mass and volume of the battery management system--or its removal altogether--affect performance and durability in a solid-state battery?

The lithium metal needs to be in constant contact with the ceramic electrolyte, meaning additional hardware is needed to apply pressure to maintain contact. What will the added hardware mean for battery pack performance?

Sakamoto, who has his own startup company focused on lithium metal solid-state batteries, says the technology is having a moment right now. But the enthusiasm driving the moment, he says, must not get ahead of itself.

Credit: 
University of Michigan

Researchers create machine learning model to predict treatment with dialysis or death for hospitalized COVID-19 patients

Paper Title: Predictive Approaches for Acute Dialysis Requirement and Death in COVID-19

Journal: The Clinical Journal of the American Society of Nephrology (published online May 24, 2021)

Authors: Girish Nadkarni, MD, Associate Professor in the Department of Medicine (Nephrology), Clinical Director of the Hasso Plattner Institute for Digital Health, and Co-Chair of the Mount Sinai Clinical Intelligence Center at the Icahn School of Medicine at Mount Sinai; Lili Chan, MD, Assistant Professor in the Department of Medicine (Nephrology) at the Icahn School of Medicine at Mount Sinai; Akhil Vaid, MD, postdoctoral fellow in the Department of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai, and member of the Mount Sinai Clinical Intelligence Center and the Hasso Plattner Institute for Digital Health at Mount Sinai; and other coauthors.

Bottom Line: SARS-CoV-2, the virus that causes COVID-19, has infected more than 103 million people worldwide. Acute kidney injury (AKI) treated with dialysis was a common complication in patients who were hospitalized with COVID-19. Acute kidney injury is associated with increased risks for morbidity and mortality. Early prediction of which patients will need dialysis or experience critical illness leading to mortality during hospital care can enhance appropriate monitoring, and better inform conversations with patients and their caretakers.

Results: The Mount Sinai team developed and tested five different algorithms to predict which patients would require treatment with dialysis or experience critical illness leading to death on days 1, 3, 5, and 7 of the hospital stay, using data from the first 12 hours of admission to the Mount Sinai Health System. Of the five models, XGBoost without imputation outperformed all others, with higher precision and recall.

Why the Research Is Interesting: While the Mount Sinai model requires further external review, such machine learning models can potentially be deployed throughout healthcare systems to help determine which COVID-19 patients are most at risk for adverse outcomes of the coronavirus. Early recognition of at-risk patients can enhance closer monitoring of patients and prompt earlier discussions regarding goals of care.

Who: More than 6,000 adults with COVID-19 admitted to five hospitals within the Mount Sinai Health System.

When: COVID-19 patients admitted from March 10 to December 26, 2020.

What: The study uses a machine learning model to identify the COVID-19 patients most at risk of requiring dialysis or of critical illness leading to death.

How: The team used data on adults hospitalized with COVID-19 throughout the Mount Sinai Health System to develop and validate five models for predicting treatment with dialysis or death at various time points (1, 3, 5, and 7 days) following hospital admission. Patients admitted to Mount Sinai Hospital in Manhattan were used for internal validation, while the other four hospital locations formed the external validation cohort. Assessed features included demographics, comorbidities, laboratory results, and vital signs within 12 hours of hospital admission.

The five models created and tested were logistic regression, LASSO, random forest, and XGBoost with and without imputation. Of these, XGBoost without imputation had the highest area under the receiver operating characteristic curve and area under the precision-recall curve on internal validation at all time points. This model also had the highest test parameters on external validation across all time windows. Features including red cell distribution width, creatinine, and blood urea nitrogen were major drivers of model prediction.
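The model-selection step described above, ranking candidate classifiers by area under the ROC curve, can be sketched in miniature. Everything below is illustrative: the toy labels, the predicted risks, and the model names are invented, and the AUROC is computed with the standard rank-sum (Mann-Whitney U) formulation rather than any library the study actually used.

```python
# Hedged sketch: ranking candidate models by AUROC, in the spirit of the
# study's internal validation. Labels and scores are toy values.

def auroc(y_true, y_score):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formula."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    # A model "wins" when it scores a positive case above a negative one;
    # ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy outcome labels (1 = dialysis or death) and predicted risks
y = [0, 0, 1, 1, 0, 1]
scores = {
    "logistic_regression":   [0.2, 0.6, 0.5, 0.4, 0.3, 0.7],
    "xgboost_no_imputation": [0.1, 0.3, 0.8, 0.7, 0.2, 0.9],
}
ranked = sorted(scores, key=lambda m: auroc(y, scores[m]), reverse=True)
best = ranked[0]  # the model with the highest AUROC
```

In the real study this comparison was run per time window on both internal and external validation cohorts, with precision-recall curves evaluated alongside ROC curves because the outcome classes were imbalanced.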

Study Conclusions: Mount Sinai researchers have developed and validated a machine learning model to identify hospitalized COVID-19 patients at risk of acute kidney injury and death. The XGBoost model without imputation had the best performance compared to standard and other machine learning models. Widespread use of electronic health records makes the deployment of prediction models, such as this one, possible.

Said Mount Sinai's Dr. Girish Nadkarni of the research:
The near universal use of electronic health records has created a tremendous amount of data, which has enabled us to generate prediction models that can directly aid in the care of patients. A version of this model is currently deployed at Mount Sinai Hospital in patients who are admitted with COVID-19.

Said Mount Sinai's Dr. Lili Chan of the research:
As a nephrologist, we were overwhelmed with the increase in patients who had AKI during the initial surge of the COVID-19 pandemic. Prediction models like this enable us to identify, early on in the hospital course, those at risk of severe AKI (those that required dialysis) and death. This information will facilitate clinical care of patients and inform discussions with patients and their families.

Said Mount Sinai's Dr. Akhil Vaid of the research:
Machine learning allows us to discern complex patterns in large amounts of data. For COVID-19 inpatients, this means being able to more easily identify incoming at-risk patients, while pinpointing the underlying factors that are making them better or worse. The underlying algorithm, XGBoost, excels in accuracy, speed, and other under-the-hood features that allow for easier deployment and understanding of model predictions.

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Solving a double murder arouses international interest

image: Andreas Tillmar, docent and adjunct senior lecturer in forensic genetics at the Department of Biomedical and Clinical Sciences, Linköping University.

Image: 
Edis Portori

The technology using DNA-based genealogy that solved a double murder in Linköping opens completely new possibilities in investigating serious crime. LiU researchers are now involved in spreading new knowledge about the technology, which brings hope to police forces and has aroused major international interest.

"We want to tell others about the problems that we faced when working with this pilot case, and how we dealt with them. We can prevent others reinventing the wheel, and make sure that the knowledge available is extended and improved", says Andreas Tillmar.

He is a forensic geneticist at the National Board of Forensic Medicine, and adjunct senior lecturer in the Department of Biomedical and Clinical Sciences at Linköping University. His research is focused on developing methods to obtain genetic information from low-quality DNA samples, such that they yield sufficient information, for example, to allow searches in genealogical databases. His methods contributed to solving the 2004 Linköping murders.

Together with colleagues from the Swedish Police Authority and the National Forensic Centre, among others, Andreas Tillmar has published an article in the prestigious scientific journal Forensic Science International: Genetics. The article is a case study of a double murder in October 2004, in which an eight-year-old boy and a 56-year-old woman were stabbed to death. The case was finally solved in June 2020.

The Police Authority conducted a legal inquiry early in 2019, which concluded that the double murder could be used as a pilot case to test the DNA-based genealogy method. It then took 1.5 years of collaboration between researchers and authorities before the murderer could be arrested. The successful resolution resulted from the police being able to use commercial genealogy databases, giving them access to a significantly larger pool of people to search.

The article describes the technical, legal, and ethical issues that had to be resolved during the work, and how the joint effort finally produced a solution.

When the Swedish pilot case started in 2019, the technology of DNA-based genealogical research had been used only to a very limited extent. The first known case, from 2018, resulted in a serial killer known as the Golden State Killer being arrested and convicted in the US.

"This case aroused much attention in the media, but the knowledge behind the arrest was never published, since the technology had been managed by a private company. It wanted to keep the knowledge to itself for commercial reasons. It's different in our case: we have knowledge that is in strong demand and we want to spread it", says Andreas Tillmar.

The article describes not only the painstaking work that resulted in improved DNA-based methods: it also gives examples of legal and ethical questions.

The legal questions concern such matters as the current legislation on personal privacy. It is not obvious that detectives are to be able to use genetic information from commercial DNA-based genealogy databases. "It's a grey area. Technology is often one step ahead of the law."

Ethical dilemmas that arise with this type of DNA analysis include the fact that the police obtain the DNA information of individuals and in this way insight into their private lives. This includes kinship relationships, and their risk of developing certain genetic diseases.

"Thus, there is a risk of conflict between two important principles: the individual's right to privacy and society's aspiration to solve serious crime", says Andreas Tillmar.

He points out that the solution to the double murder has aroused considerable international interest.

"As far as we know, we are the first outside the US to use the technology. We hope that others can benefit from our work, and that we can improve these DNA-based methods through, for example, international collaboration," says Andreas Tillmar.

Credit: 
Linköping University

Tiniest of moments proves key for baby's healthy brain

image: Noelle D. Dwyer, PhD, and her team have made new discoveries about brain development.

Image: 
Courtesy Dwyer Lab

University of Virginia School of Medicine researchers have shed new light on how our brains develop, revealing that the very last step in cell division is crucial for the brain to reach its proper size and function.

The new findings identify a potential contributor to microcephaly, a birth defect in which the head is underdeveloped and abnormally small. That's because the head grows as the brain grows. The federal Centers for Disease Control estimates that microcephaly affects from 1 in 800 children to 1 in 5,000 children in the United States each year. The condition is associated with learning disabilities, developmental delays, vision and hearing loss, movement impairment and other problems.

"By understanding the genetic causes of microcephaly, even though they are rare, we can also help to understand how some viral infections, such as Zika virus or cytomegalovirus, can cause microcephaly," said researcher Noelle D. Dwyer, PhD, of UVA's Department of Cell Biology.

Understanding Brain Development

Dwyer and her team aim to understand how small changes in individual cells can lead to dramatic changes in the brain. In this case, they have identified an important role for abscission, the final step in cell division. During abscission, a new, or "daughter," cell severs its connection to its "mother" cell. Think of it like cutting the cord when a new baby arrives in the world.

Scientists have suspected that a particular cellular protein, Cep55, is essential for proper abscission. Dwyer wanted to investigate that, to determine what would happen if the protein were absent. She and her colleagues were surprised to find that abscission could still occur in their lab mice. However, the process took longer than usual, and the failure rate went up substantially.

Notably, the neural stem cells that failed abscission signaled that they needed to be removed from the brain, the researchers report. That led to massive numbers of cells dying and being removed. That's in contrast to cells elsewhere in the body, which don't call for their own removal when abscission fails.

"Neural stem cells in the prenatal brain seem to have tighter 'quality control' than cells in other parts of the body. If their DNA or organelles are damaged, they have this hair-trigger response to sacrifice themselves, so that they don't make abnormal brain cells that might cause brain malfunction, or brain tumors," Dwyer said. "The brain can still function. Other tissues seem to have a higher tolerance for damaged cells and don't activate this cell-death response."

Blocking the neural stem cells' signal for removal helped the brains of lab mice grow larger, Dwyer found, but this restored only part of the brain's normal size. Further, normal brain organization and function remained disrupted. This shows the importance of proper abscission in healthy brain development, the researchers say.

Dwyer noted that blocking the cell death signal with drugs or gene therapy could help restore brain growth in certain types of microcephaly, but it also might make brain function worse. "That's why it's important to test these ideas in animal models and cell-culture models," she said.

UVA's new findings align with what scientists have known about the gene that makes the Cep55 protein. People who have mutations in the Cep55 gene suffer severe defects in their brain and central nervous system, while the rest of their bodies are relatively spared. Dwyer's new research helps explain why that is.

The new findings also benefit the battle against cancer. "Cep55 mutations are also found associated with many human cancers, so understanding the normal function of Cep55 in dividing cells in the brain helps inform cancer researchers how its altered function could lead to abnormal cell division that can initiate or fuel tumor growth," Dwyer said.

Dwyer noted the important contributions of Jessica Little and Katrina McNeely, who recently completed their PhDs in Dwyer's lab. Little is an MD-PhD student in UVA's Cell & Developmental Biology program who graduated this spring; McNeely was a Neuroscience graduate student who defended her dissertation last year.

Credit: 
University of Virginia Health System