Earth

Recent Atlantic ocean warming unprecedented in nearly 3,000 years

image: Climate scientists Nicholas Balascio and Francois Lapointe using an ice auger to drill through the 3.5-meter-thick ice at South Sawtooth Lake, Ellesmere Island, Nunavut Territory, Canada. An ice auger extension is needed because the ice is so thick; coring lake sediments can only begin after this tedious work, Lapointe notes.

Image: 
Mark B. Abbott

AMHERST, Mass. - Taking advantage of unique properties of sediments from the bottom of Sawtooth Lake in the Canadian High Arctic, climate scientists have extended the record of Atlantic sea-surface temperature from about 100 years to 2,900 years, and it shows that the warmest interval over this period has been the past 10 years.

A team led by Francois Lapointe and Raymond Bradley in the Climate System Research Center of the University of Massachusetts Amherst and Pierre Francus at the University of Québec-INRS analyzed "perfectly preserved" annual layers of sediment that accumulated in the lake on northern Ellesmere Island, Nunavut, which contain titanium left over from centuries of rock weathering. By measuring the titanium concentration in the different layers, scientists can estimate the relative temperature and atmospheric pressure over time.

The newly extended record shows that the coldest temperatures were found between about 1400 and 1600 A.D., and the warmest interval occurred during just the past decade, the authors report. Francus adds, "Our unique data set constitutes the first reconstruction of Atlantic sea surface temperatures spanning the last 3,000 years and this will allow climatologists to better understand the mechanisms behind long-term changes in the behavior of the Atlantic Ocean."

When temperatures are cool over the North Atlantic, a relatively low atmospheric pressure pattern is found over much of the Canadian High Arctic and Greenland. This is associated with slower snow melt in that region and higher titanium levels in the sediments. The opposite is true when the ocean is warmer - atmospheric pressure is higher, snow melt is rapid and the concentration of titanium decreases.

Lapointe says, "Using these strong links, it was possible to reconstruct how Atlantic sea surface temperatures have varied over the past 2,900 years, making it the longest record that is currently available." Details appear this week in Proceedings of the National Academy of Sciences.
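
As a rough illustration of how such a proxy calibration works, the sketch below regresses instrumental-era SST onto titanium concentration over their period of overlap and applies the fit to the full record. All arrays and the simple linear model are stand-ins for illustration; the published reconstruction uses more sophisticated statistics.

```python
# Minimal proxy-calibration sketch with synthetic data (not the paper's method).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in data: 2,900 annual Ti concentrations (arbitrary units) and ~100
# years of overlapping instrumental SST (deg C). Higher Ti = cooler conditions.
ti = rng.normal(1.0, 0.1, 2900)
sst_obs = 20.0 - 5.0 * ti[-100:] + rng.normal(0, 0.2, 100)

# Fit the Ti-SST relationship over the instrumental era...
fit = stats.linregress(ti[-100:], sst_obs)
print(f"slope={fit.slope:.2f}, r={fit.rvalue:.2f}, p={fit.pvalue:.3g}")

# ...then apply it to the whole Ti series to reconstruct SST back in time.
sst_reconstructed = fit.intercept + fit.slope * ti
```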

The researchers report that their newly reconstructed record is significantly correlated with several other independent sediment records from the Atlantic Ocean ranging from north of Iceland to offshore Venezuela, confirming its reliability as a proxy for the long-term variability of ocean temperatures across a broad swath of the Atlantic. The record is also similar to European temperatures over the past 2,000 years, they point out.

Fluctuations in sea surface temperatures, known as the Atlantic Multidecadal Oscillation (AMO), are also linked to other major climatic upheavals such as droughts in North America and the severity of hurricanes. However, because measurements of sea surface temperatures only go back a century or so, the exact length and variability of the AMO cycle have been poorly understood.

Climate warming in the Arctic is now two to three times faster than in the rest of the planet because of greenhouse gas emissions from burning fossil fuels. This warming can be amplified or dampened by natural climate variability, such as changes in the surface temperature of the North Atlantic, which appear to vary over cycles of about 60-80 years.

Lapointe, who has carried out extensive fieldwork in the Canadian Arctic over the past decade, notes that "It has been common in recent summers for atmospheric high-pressure systems - clear-sky conditions - to prevail over the region. Maximum temperatures often reached 20 degrees Celsius, 68 degrees Fahrenheit, for many successive days or even weeks, as in 2019. This has had irreversible impacts on snow cover, glaciers and ice caps, and permafrost."

Bradley adds that, "The surface waters of the Atlantic have been consistently warm since about 1995. We don't know if conditions will shift towards a cooler phase any time soon, which would give some relief for the accelerated Arctic warming. But if the Atlantic warming continues, atmospheric conditions favoring more severe melting of Canadian Arctic ice caps and the Greenland ice sheet can be expected in the coming decades."

In 2019, the Greenland Ice Sheet lost more than 500 billion tons of mass, a record, and this loss was associated with unprecedented, persistent high-pressure atmospheric conditions.

Lapointe notes, "Conditions like this are currently not properly captured by global climate models, which therefore underestimate the potential impacts of future warming in Arctic regions."

Credit: 
University of Massachusetts Amherst

Deep neural networks show promise for predicting future self-harm based on clinical notes

image: Artificial neural network with a chip.

Image: 
Mikemacmarketing. Licensed under CC BY 2.0

According to the American Foundation for Suicide Prevention, suicide is the 10th leading cause of death in the U.S., with over 1.4 million suicide attempts recorded in 2018. Although effective treatments are available for those at risk, clinicians do not have a reliable way of predicting which patients are likely to make a suicide attempt.

Researchers at the Medical University of South Carolina and University of South Florida report in JMIR Medical Informatics that they have taken important steps toward addressing the problem by creating an artificial intelligence algorithm that can automatically identify patients at high risk of intentional self-harm, based on the information in the clinical notes in the electronic health record.

The study was led by Jihad Obeid, M.D., co-director of the MUSC Biomedical Informatics Center, and Brian Bunnell, Ph.D., formerly at MUSC and currently an assistant professor in the Department of Psychiatry and Behavioral Neurosciences at the University of South Florida.

The team used complex artificial neural networks, a form of artificial intelligence also known as deep learning, to analyze unstructured, textual data in the electronic health record. Deep learning methods use successive layers of artificial neurons to progressively extract higher-level information from raw input data. The team showed that these models, once trained, could identify patients at risk of intentional self-harm.
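
To make the general idea concrete, here is a minimal, self-contained sketch of a deep text classifier of this kind. The toy notes, labels, bag-of-words features and tiny network are all invented for illustration and are far simpler than the architectures evaluated in the study.

```python
# Toy deep-learning text classifier (illustrative only; not the study's model).
import torch
import torch.nn as nn

notes = ["patient reports low mood and hopelessness",
         "routine follow-up for knee pain"]
labels = torch.tensor([1.0, 0.0])  # hypothetical: 1 = self-harm ICD code assigned

# Bag-of-words featurization; real systems use learned word embeddings.
vocab = {w: i for i, w in enumerate(sorted({w for n in notes for w in n.split()}))}
X = torch.zeros(len(notes), len(vocab))
for row, note in enumerate(notes):
    for w in note.split():
        X[row, vocab[w]] += 1.0

# Successive layers transform raw word counts into higher-level features.
model = nn.Sequential(nn.Linear(len(vocab), 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(100):  # tiny training loop
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), labels)
    loss.backward()
    opt.step()

print(torch.sigmoid(model(X)).detach())  # per-note risk scores
```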

"This kind of work is important because it leverages the latest technologies to address an important problem like suicide and identifies patients at risk so that they can be referred to appropriate management," said Obeid.

Thus far, researchers have primarily relied on structured data in the electronic health record for the identification and prediction of patients at risk. Structured data refers to tabulated information that has been entered into designated fields in the electronic health record as part of clinical care. For example, when physicians diagnose patients and assign International Classification of Diseases (ICD) codes, they are creating structured data. This sort of tabulated, structured data is easy for computer programs to analyze.

However, 80% to 90% of the pertinent information in the electronic health record is trapped in text format. In other words, the clinical notes, progress reports, plan-of-care notes and other narrative texts in the electronic health record represent an enormous untapped resource for research. Obeid's study is unique because it uses deep neural networks to "read" clinical notes in the electronic health record and identify and predict patients at risk for self-harm.

After regulatory ethics review and approval of the proposed research by the Institutional Review Board at MUSC, Obeid began by identifying patient records associated with ICD codes indicative of intentional self-harm in MUSC's research data warehouse. That warehouse, which was created with support from the South Carolina Clinical & Translational Research Institute, provides MUSC researchers access to patient electronic health record data, provided they have obtained the necessary permissions.

In order to simulate a real-world scenario, Obeid and his team divided the clinical records into two time periods: records from 2012 to 2017, which were used for training the models, and records from 2018 to 2019, which were used for testing the trained models. First, they looked at the clinical notes taken during the hospital visit in which the ICD code was assigned. Using that as the training data set, the models "learned" which patterns of language in the clinical notes of the patients' electronic medical records were associated with the assignment of an ICD code of intentional self-harm. Once the models were trained, they could identify those patients based solely on their analysis of text in the clinical notes, with an accuracy of 98.5%. Experts manually reviewed a subset of records to confirm the model's accuracy.
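
The year-based split is the key design choice here, because it simulates training on the past and predicting the future. A minimal sketch with hypothetical column names and toy records:

```python
# Temporal train/test split, as described above (toy data, invented columns).
import pandas as pd

records = pd.DataFrame({
    "year": [2013, 2015, 2017, 2018, 2019],
    "note": ["...", "...", "...", "...", "..."],
    "self_harm_icd": [0, 1, 0, 1, 0],
})

train = records[records["year"].between(2012, 2017)]  # used only for fitting
test = records[records["year"].between(2018, 2019)]   # held-out "future" data

# Splitting by time, rather than randomly, prevents future information from
# leaking into training and better mimics prospective clinical deployment.
print(len(train), "training records;", len(test), "test records")
```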

Next, the team tested whether the most accurate of the models could use clinical notes in the electronic health record to predict future self-harm. To this end, Obeid's team identified the records of patients who had presented with intentional self-harm and trained the model using their clinical notes between six months to one month prior to the intentional self-harm hospital visit. They then tested whether the trained models could correctly predict if these patients would later present with intentional self-harm.

Predicting future self-harm based solely on clinical notes proved to be more challenging than identifying current at-risk patients due to the extra "noise" that is introduced when vast amounts of patient history are included in the model. Historical clinical notes tend to be varied and not always relevant. For example, if a patient was seen for depression or other mental health issues six months prior to his or her hospital visit for intentional self-harm, then the clinical notes were likely to include relevant information. However, if the patient came in for a condition unrelated to mental health, then the notes were less likely to include relevant information.

While the inclusion of irrelevant information introduces a lot of noise into the data analysis, all of this information must be included across patients in the models to predict outcomes. As a result, the model was less accurate at predicting which patients would later present for intentional self-harm than simply classifying current patients for suicide risk. Nonetheless, the predictive accuracy of this model was very competitive with that previously reported for models that relied on structured data, reaching an accuracy of almost 80% with relatively high sensitivity and precision.

Obeid's team has shown the feasibility of using deep learning models to identify patients at risk of intentional self-harm based on clinical notes alone. The study also showed that models can be used to predict, with fairly good fidelity, which patients will present in the future for intentional self-harm based on clinical notes in their electronic health record.

These early results are promising and could have large impacts at the clinical level. If deep learning models can be used to predict which patients are at high risk for suicide based on clinical notes, then clinicians can refer high-risk patients early for appropriate treatment. Using these models to classify patients as at risk for self-harm could also facilitate enrollment into clinical studies and trials of potential new treatments relevant to suicide.

In future studies, Obeid aims to evaluate changes in the predictive time window for his models, for example, looking at records one year before a patient's presentation for intentional self-harm instead of six months. The team also intends to examine other outcomes such as suicide or suicidal ideation. And while the models work well at MUSC, Obeid must now show that they can be generalized to other institutions.

"Can the models be trained in one location and transferred to another location and still work?" asked Obeid. "If the answer is yes, then this saves critical resources because other institutions will not have to perform expensive and time-consuming manual chart reviews to confirm that the models are getting it right during the training periods."

Credit: 
Medical University of South Carolina

Mapping out rest stops for migrating birds

image: Migratory bird populations face rapid declines due to many interacting factors including light pollution, climate change, and habitat loss and degradation. Researchers hope that the stopover-to-passage ratio can offer additional insight and renewed interest in understanding stopover sites. Pictured here: Orchard Oriole.

Image: 
Colorado State University/ Kyle Horton

Each spring, billions of land birds -- thrushes, warblers, orioles, tanagers, and more -- migrate through the night, navigating the coast of the Gulf of Mexico. Even greater numbers migrate in the fall. During the day, these birds stop to rest, recover and refuel for the next leg of their journey. These two phases of migration -- passage (flight) and stopover (rest) -- are well understood in ornithology but had previously only been studied independently.

Recently published research in the journal Ecology Letters combines these components into a new metric called the stopover-to-passage ratio. This study is the result of a collaboration between researchers at the Smithsonian Migratory Bird Center, Cornell Lab of Ornithology, University of Maryland Center for Environmental Science, Colorado State University, Georgetown University, University of Massachusetts and University of Delaware.

"The stopover-to-passage ratio is an indicator of the number of migrants that stop to rest during migration and those that continue heading north or south, depending on the season. The ratio varies from site to site," said co-author Kyle Horton, assistant professor at Colorado State University and an alumnus of UD's College of Agriculture and Natural Resources. "It's highly useful, from a conservation standpoint, to know if the majority of birds fly over a site or predominantly stop at a site to refuel or rest. The answer to this question can have important implications for what action is ultimately done on-the-ground to help migratory birds."

"Characterization of stopover habitat use relative to passage represents a fundamental gap in our knowledge," said Emily Cohen, lead author and assistant professor at the University of Maryland Center for Environmental Science, Appalachian Laboratory. "This gap primarily exists because a methodology to collect broad-scale information about distributions of birds in terrestrial habitats during the day and in the airspace at night has only recently become possible with weather surveillance radar."

Archived since the mid-1990s but only freely available since 2004, weather radar data collected by NOAA, the National Oceanic and Atmospheric Administration, now enables researchers to map the nocturnal habits of migratory bird populations. It is a herculean effort to process and synthesize these vast data sets; scientists must distinguish bird movement from precipitation data on the radar based on density, speed and knowledge of the natural history of bird behavior. Calculating both the traffic patterns of the birds in flight and their activity in stopover sites, the research team created migration maps and calculated the stopover-to-passage ratio along the entire U.S. Gulf Coast.
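
The metric itself is a simple quotient. As a toy illustration with invented per-site totals and site labels (the study derives both quantities from the processed radar data):

```python
# Stopover-to-passage ratio from hypothetical seasonal totals per radar site.
import numpy as np

passage = np.array([1_000_000, 250_000, 600_000])  # migrants detected aloft at night
stopover = np.array([400_000, 200_000, 90_000])    # migrants on the ground by day

ratio = stopover / passage  # fraction of passing migrants that stop
for site, r in zip(["coastal Louisiana", "Florida panhandle", "Texas coast"], ratio):
    print(f"{site}: stopover-to-passage ratio = {r:.2f}")

# A high ratio flags sites where a large share of passing migrants must stop,
# even when the absolute traffic through the site is modest.
```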

"Our findings were not what we expected," said Jeff Buler, University of Delaware associate professor of wildlife ecology and senior author on the paper. "We understand the phenology of migration quite well, so we know the absolute number of birds moving through an area at the peak of migration. The density of birds on the ground also peaks around the same time. When looking at stopover-to-passage ratio, we thought that we would see more birds stopping during the peak of migration but we actually found the opposite."

Even though fewer birds migrate outside of the peak window, a larger percentage of that bird population stops at particular resting and foraging sites, indicating that those lands are of critical importance at that time.

"We saw a high stopover-to-passage ratio in the panhandle of Florida, which was unexpected because in the spring there aren't as many birds moving through that area," said Buler. "What that tells us is that the birds that are moving through that area need to stop, and it actually is indirect evidence that these are likely migrants that are coming from South America. They're flying over the Caribbean and the Atlantic Ocean, so they're making a farther journey than those that are just crossing the Gulf of Mexico. That first place to land in Florida is really important to them and most of them have to stop because they've run out of gas. From a conservation perspective, this really opens up a question of whether we need to rethink how we prioritize conserving stopover areas."

Currently, breeding ground habitat receives far more conservation attention and protection than migratory stopover habitat. However, with migratory bird populations facing rapid declines due to many interacting factors including light pollution, climate change, and habitat loss and degradation, researchers hope that the stopover-to-passage ratio can offer additional insight and renewed interest in often overlooked stopover sites.

"These results show the critical importance of the habitats around the U.S. coast of the Gulf of Mexico and Florida for sustaining North America's migratory birds. We show for the first time that over half of the birds migrating through these coastlines stop there," said Cohen. "Further, disparities in disproportionate selection and absolute abundance at stopover sites revealed potential migratory bottlenecks where geography or restricted habitat may disproportionately concentrate birds along migration routes, highlighting that density of use alone is not a comprehensive measure of the conservation value of a stopover site for migrating birds, a topic that has not been addressed during migration. The areas where the stopover-to-passage ratio is high are potentially more important for migrating birds than was previously thought."

"Linking aerial and terrestrial habitats with this new metric provides a unique opportunity to understand how migrating birds, in this case very large numbers of them, use a region where we know drastic and rapid changes are occurring," said co-author Andrew Farnsworth of the Cornell Lab of Ornithology. "Whether for prioritization of critical areas or for developing dynamic conservation planning, this kind of quantitative science is invaluable for supporting decision-making that can safeguard this incredible region and the spectacular movements of birds that occur here annually."

Credit: 
University of Delaware

Thinning and prescribed fire treatments reduce tree mortality

ALBANY, Calif. -- To date in 2020, 1,217 wildfires have burned 1,473,522 acres of National Forest System lands in California; 8,486 wildfires have burned over 4 million acres across all jurisdictions in California. This current fire activity comes after forests in the region experienced an extreme drought accompanied by warmer than normal temperatures from 2012 to 2015, resulting in the deaths of over 147 million trees, mostly from bark beetles. These dead trees are now adding more fuel to this summer's wildfires, especially in the southern and central Sierra Nevada, where tree mortality was the heaviest.

Frequent fire once kept forests throughout the western US relatively open and prevented excess litter and downed wood from accumulating on the forest floor. After more than a century of fire suppression, many forests became far denser than they once were and more prone to disturbances such as uncharacteristically severe wildfire and drought. A recently released study by USDA Forest Service Pacific Southwest Research Station research ecologists Eric Knapp and Malcolm North and research entomologist Chris Fettig, along with co-authors Alexis Bernal and Jeffrey Kane (Humboldt State University), suggests that if forests had been closer to their historic densities, tree mortality would likely not have been as severe. Published in the journal Forest Ecology and Management, the study found that between 2014 and 2018, 34% of trees in unthinned areas died compared with only 11% of trees in thinned areas.

This study compared two different types of thinning: a 'High Variability' method that restored a structure with more of the trees clustered in groups, intermixed with small gaps - similar to what the fire-shaped forests of California once looked like - and a 'Low Variability' method with relatively evenly spaced individual trees. One goal of the 'High Variability' method was to create and accentuate habitat for forest-dwelling plant and animal species. Half of the study units were also treated with prescribed fire. The study found both thinning methods were equally effective at reducing tree mortality and increasing tree growth.

"Our findings show that thinning and prescribed fire can produce a diverse forest that not only provides greater variety of habitats but is also resilient to extreme drought," said Dr. Knapp. "Because of these potential habitat benefits and just a more natural look, high variability approaches to forest thinning also tend to have broader support in the community".

The large number of dead trees is adding to a pre-existing fuel problem by falling to the ground and creating more material to burn in areas that are already prone to uncharacteristically severe wildfire. Prescribed fire is one tool for reducing these fuels. "Results from our study demonstrated that trees in areas treated with prescribed fire were less likely to die if the forest was thinned first," said Dr. Knapp.

Credit: 
USDA Forest Service - Pacific Southwest Research Station

UMaine researcher: How leaves reflect light reveals evolutionary history of seed plants

The way leaves reflect light can illuminate the evolutionary history of seed plants, according to an international team of scientists led by a University of Maine researcher. 

Plant reflectance spectra, or the light profile leaves reflect across different wavelengths, capture the change and diversification of seed plants as a result of evolution, according to Dudu Meireles. The UMaine assistant professor of plant evolution and systematics and colleagues from the United States, Canada, Switzerland and England explored how spectra have evolved and diversified over the last 350 million years of plant evolution.   

Researchers found that by measuring the light spectrum reflected by a leaf, they can identify the plant, learn about its chemistry and evolution, and pinpoint its place in the tree of life, Meireles says. Spectra also can be used to "provide breakthrough assessments of leaf evolution and plant phylogenetic diversity at global scales," the group wrote in its report for the study. Meireles says he hopes to eventually perform these measurements remotely using unmanned aerial vehicles, airplanes, or satellites.

"We know little about how plant traits and chemistry evolved because collecting the data is difficult and slow, but spectra enables us to gather those data at unprecedented rates." Meireles says.  

New Phytologist, an international plant science research journal, published the group's findings in its October 2020 issue and promoted the study on the cover. The cover also features art by Adriana Cavalcanti of Orono, an Intermedia MFA student. According to the journal, the leaf art created from hole punches of autumn foliage evokes "how light reflectance spectra capture leaf chemical diversity and reveal the evolutionary history of plants." 

Cavalcanti says she created the artwork, titled "Biomimicry," last fall by hole punching a variety of leaves and gluing the different circles to reclaimed wood to form a multicolored leaf. The ways technology intertwines with science and nature inspired the piece, she says, adding how nature observations have influenced several technological advancements. 

While working on his plant reflectance spectra research, Meireles encouraged Cavalcanti, who is also a botanist, to submit "Biomimicry" to New Phytologist, she says. The journal, which accepts original artwork that reflects the primary topic of each issue, selected her piece to represent Meireles' findings.

"When I heard that my work was selected for the cover, it was a big surprise. First, because I don't usually see art pieces on the cover of scientific journals. Secondly, because It wasn't created with that in mind," Cavalcanti says. "For me, it is just an example of how open minded we need to be about our own art creations; the importance of leaving space for people to get their own perception about art."

The research team conducted the study using a dataset of more than 16,000 leaf-level reflectance spectra, ranging from visible to infrared light, from 544 seed plant species in the tropical and temperate latitudes of the Americas and Europe. They measured leaf spectra using two full-range field spectroradiometers, leaf clips and artificial light sources.

While spectra highlight the phylogenetic history of seed plants, the location of the signal presenting that information in spectra can vary among plant lineages, according to researchers. They found, for example, that the signal yielding the evolutionary record for the monocot lineage of plants is located in near-infrared light reflected from their leaves, but the signal for the gymnosperm lineage resides in short-wave infrared light reflected from leaves. To monitor plant diversity, Meireles says scientists have to measure the full spectrum of light reflected from leaves rather than a handful of bands.
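
Why the full spectrum matters can be shown with entirely synthetic data: if two hypothetical lineages carry their distinguishing signal in different spectral regions, a classifier that sees every wavelength separates them, while any single band may miss one lineage's signal. Nothing below is the study's data or method.

```python
# Synthetic demonstration: lineage signals placed in different spectral regions.
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 2400, 200)  # visible to short-wave infrared, nm

def make_spectrum(lineage):
    base = 0.3 + 0.1 * np.sin(wavelengths / 300.0)
    if lineage == "monocot":   # signal placed in the near-infrared
        base = base + 0.05 * ((wavelengths > 700) & (wavelengths < 1300))
    else:                      # "gymnosperm": signal in the short-wave infrared
        base = base + 0.05 * (wavelengths > 1500)
    return base + rng.normal(0, 0.01, wavelengths.size)

samples = [(make_spectrum(l), l) for l in ("monocot", "gymnosperm") for _ in range(20)]

# Nearest-centroid classification using every wavelength at once.
centroids = {l: np.mean([s for s, sl in samples if sl == l], axis=0)
             for l in ("monocot", "gymnosperm")}
correct = sum(min(centroids, key=lambda c: np.linalg.norm(s - centroids[c])) == sl
              for s, sl in samples)
print(f"full-spectrum accuracy: {correct / len(samples):.2f}")
```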

The team created a model that can help simulate how different evolutionary dynamics, such as convergent adaptation to shade, affect spectra. Their framework also revealed that evolution constrains the variation of spectra in seed plants to different extents, particularly for the visible region associated with pigments such as chlorophyll and carotenoids. 

Meireles and his colleagues hope that increasing availability of high-resolution spectral data not only for leaves, but also at the canopy- and landscape-levels will help improve how scientists monitor plant biodiversity. 

"Ecosystem function, and by extension human wellbeing, depend on biodiversity. We must monitor diversity to understand, manage and preserve it, and reflectance spectra is one of the best tools we have to do that job efficiently." Meireles says. 

Credit: 
University of Maine

Oncotarget: Genomic markers of midostaurin drug sensitivity in leukemia patients

image: RGL4 expression correlates with response to midostaurin in FLT3-ITD positive samples. (A) Distribution of midostaurin AUC for FLT3-ITD positive samples with breakpoints for most and least sensitive set at the 20th and 80th AUC percentile, respectively (N = 41). (B) Violin plots of RGL4 expression in midostaurin sensitive and least sensitive samples. (C) Positive correlation (Spearman rho = 0.36) between midostaurin AUC and RGL4 expression across all FLT3-ITD positive samples (N = 41).

Image: 
Correspondence to - Mara W. Rosenberg - rosenberg.mara@gmail.com

Oncotarget Volume 11, Issue 29 reported that acute myeloid leukemia is a heterogeneous malignancy with the most common genomic alterations in NPM1, DNMT3A, and FLT3. Midostaurin was the first FLT3 inhibitor to receive FDA approval for AML and is standard of care for FLT3-mutant patients undergoing induction chemotherapy.

As there is a spectrum of response, we hypothesized that biological factors beyond FLT3 could play a role in drug sensitivity and that select FLT3-ITD negative samples may also demonstrate sensitivity.

The Oncotarget authors performed an ex vivo drug sensitivity screen on primary and relapsed AML samples with corresponding targeted sequencing and RNA sequencing. They observed a correlation between FLT3-ITD mutations and midostaurin sensitivity as expected and observed KRAS and TP53 mutations correlating with midostaurin resistance in FLT3-ITD negative samples.

Further, they identified genes differentially expressed in sensitive vs. resistant samples independent of FLT3-ITD status.

Within FLT3-ITD mutant samples, over-expression of RGL4, an oncogene and regulator of the Ras-Raf-MEK-ERK cascade, distinguished resistant from sensitive samples.
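
The statistic behind this observation is a rank correlation between expression and drug response. A minimal sketch with simulated values (the paper reports Spearman rho = 0.36 across 41 FLT3-ITD positive samples; nothing below is the actual data):

```python
# Spearman correlation between gene expression and ex vivo drug AUC (simulated).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 41
rgl4_expr = rng.normal(5.0, 1.0, n)                # stand-in expression levels
auc = 100 + 10 * rgl4_expr + rng.normal(0, 25, n)  # higher AUC = more resistant

rho, p = stats.spearmanr(rgl4_expr, auc)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```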

"Within FLT3-ITD mutant samples, over-expression of RGL4, oncogene and regulator of the Ras-Raf-MEK-ERK cascade, distinguished resistant from sensitive samples"

Dr. Mara W. Rosenberg from Oregon Health & Science University said, "Acute Myeloid Leukemia (AML) is a heterogeneous malignancy, most commonly affecting individuals ≥60 years of age."

FLT3 mutations occur in approximately 30% of de novo AML cases, of which 25% are ITD mutations and 5% are tyrosine kinase domain point mutations.

Per the 2017 European Leukemia Network Guidelines, a complete diagnostic work-up should include screening for the presence of FLT3 mutations, as well as mutant-to-wild-type allelic ratios.

There are many past and present clinical trials examining the activity of tyrosine kinase inhibitors against FLT3 mutant AML, including sunitinib, midostaurin, lestaurtinib, sorafenib, ponatinib, crenolanib, gilteritinib, and quizartinib.

Factors predictive of the development of resistance include the initial presence of multiple leukemic clones, low FLT3-mutant allelic ratio, or additional primary mutations in the FLT3 kinase domain.

The authors hypothesized that there are additional genomic alterations and gene expression changes outside of FLT3-ITD mutations that can influence AML sample resistance or sensitivity to midostaurin and aimed to further characterize these factors.

The Rosenberg Research Team concluded in their Oncotarget Research Paper, "we identify genomic alterations that correlate with midostaurin response independent of FLT3-ITD status, propose that Ras-Raf-MEK-ERK inhibition in combination therapy could limit resistance to midostaurin, and suggest that within the overall AML population there may be therapeutic benefit of midostaurin in patients with certain expression profiles."

DOI - https://doi.org/10.18632/oncotarget.27656

Full text - https://www.oncotarget.com/article/27656/text/

Correspondence to - Mara W. Rosenberg - rosenberg.mara@gmail.com

Keywords -
acute myeloid leukemia,
drug sensitivity,
midostaurin,
drug resistance,
FLT3

About Oncotarget

Oncotarget is a weekly, peer-reviewed, open access biomedical journal covering research on all aspects of oncology.

To learn more about Oncotarget, please visit https://www.oncotarget.com.

Oncotarget is published by Impact Journals, LLC. Please visit http://www.ImpactJournals.com or connect with @ImpactJrnls.

Credit: 
Impact Journals LLC

DNA-peptide interactions create complex behaviours which may have helped shape biology

image: Short double-stranded DNA (oligo dsDNA) co-assembles in an end-to-end stacked fashion with a cationic peptide (poly-L-lysine) to form rigid bundles, resulting in the formation of liquid crystal coacervate droplets.

Image: 
Tommaso P Fraccia

Deoxyribonucleic acid (DNA)-protein interactions are extremely important in biology. For example, each human cell contains about 2 meters worth of DNA, but this is packaged into a space about one million times smaller. The information in this DNA allows the cell to copy itself. This extreme packaging is mainly accomplished in cells by wrapping the DNA around proteins. Thus, how DNA and proteins interact is of extreme interest to scientists trying to understand how biology organises itself. New research by scientists at the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology and the Institut Pierre-Gilles de Gennes, ESPCI Paris, Université PSL suggests that the interactions of DNA and proteins have deep-seated propensities to form higher-ordered structures such as those which allow the extreme packaging of DNA in cells.

Modern living cells are principally composed of a few classes of large molecules. DNA gets the lion's share of attention as it is the repository of the information cells use to build themselves generation after generation. This information-rich DNA is normally present as a double-stranded caduceus of two polymers wrapped around each other, with much of the information it contains hidden from the external environment because the information-bearing parts of the molecules are engaged with their complementary strand. When DNA is copied into ribonucleic acid (RNA), its strands are pulled apart so that its more complex surfaces can interact, enabling it to be copied into single-stranded RNA polymers. These RNA polymers are finally read out by biological processes into proteins, which are polymers of a variety of amino acids with extremely complicated surface properties. Thus, DNA and RNA are somewhat predictable in terms of their chemical behaviour as polymers, while proteins are not.

Polymeric molecules, those composed of repeated types of subunits, can display complex behaviours when mixed with other chemicals, especially when dissolved in a solvent like water. Chemists have developed a complex vocabulary for how compounds behave when they are mixed. For example, the proteins in cow's milk form a colloidal suspension in water (a homogeneous, noncrystalline suspended mixture which does not settle and cannot be separated by physical means). When lemon juice is added to milk, the suspended proteins reorganise themselves to produce the visible self-organisation of curds, which do separate into a new phase.

There are other types of this phenomenon chemists have discovered over the years, for example, liquid crystals (LCs). LCs form when molecules have an elongated shape or a tendency to make linear aggregates (like stacks of molecules one on top of the other). The resulting material mixes the properties of a crystal and a liquid: it has a certain degree of order, like a solid (for example, parallel orientation of the molecules), but still retains its fluidity (molecules can easily slip past one another). We all experience liquid crystals in the "LCDs," or liquid crystal displays, we interact with every day, which use these variable properties to make the images we see on our device screens.

In their work, Fraccia and Jia showed that double-stranded DNA and peptides can generate many different LC phases in a very peculiar way: the LCs actually form in membraneless droplets, called coacervates, in which DNA and peptides spontaneously co-assemble and order themselves. This process brings DNA and peptides to very high concentrations, comparable to that of a cell's nucleus, 100-1,000 times greater than that of the very dilute initial solution (which is likely the maximum concentration that could be achieved on the early Earth). Such spontaneous behaviour could thus, in principle, favour the formation of the first cell-like structures on the early Earth, which would take advantage of the ordered but fluid LC matrix to gain stability and functionality, and to favour the growth and evolution of primitive biomolecules.

The cut-off at which these higher-order properties begin to present themselves is not always clear. When molecules interact at the molecular level, they often "self-organise." One can think of the process of adding sand to a sandpile: as one sprinkles more and more sand onto a pile, it tends to form a "low energy" final state - a pile. Though the addition of sand grains may cause some new structures to form locally, at some point the addition of one more grain causes a landslide in the pile which reinforces the conical shape of the pile.

Though we all benefit from the existence of these phenomena, the scientific community may be missing important aspects of the implications of this type of self-organisation, Jia and Fraccia argue. The combination of these collective material self-organising effects may be relevant at many scales of biology and may be important for biomolecular structure transitions in cell physiology and disease. In particular, the researchers discovered that various liquid crystalline structures could be accessed continuously through changes in environmental conditions as simple as salinity or temperature; given the numerous unexplored conditions, this work suggests many more novel self-organised LC mesophases with potential biological function could be discovered in the near future.

This new understanding of biopolymeric self-organisation may also be important for understanding how life self-organised to become living in the first place. Understanding how primitive collections of molecules could have structured themselves into collectively behaving aggregates is a significant avenue of future research.

"When the general public hears about liquid crystals, they might think of TV screens and engineering applications. However, very few would immediately think of basic science. Most researchers would not even make the connection between LCs and the origins of life. We hope this work will help increase the public's understanding of LCs in the context of the origins of life," says co-author Jia.

Finally, this work may also be relevant to disease. For example, recent discoveries regarding diseases including Alzheimer's, Parkinson's, Huntington's Disease, and ALS (Lou Gehrig's Disease) have pointed to intracellular phase transitions and separation leading to membraneless droplets as potential major causes.

The researchers noted that though their work was heavily impacted by the pandemic, they did their best to keep working under the global shutdowns and travel restrictions.

Credit: 
Tokyo Institute of Technology

Climate change undermines the safety of buildings and infrastructure in Europe

image: Mean temperature anomaly for DJF (December-January-February) and JJA (June-July-August) seasons under the concentration scenarios RCP4.5 (first row) and RCP8.5 (second row); 2056-2085 vs 1971-2000. Data processing by DATACLIME....

Image: 
We acknowledge the World Climate Research Programme's Working Group on Regional Climate, and the Working Group on Coupled Modelling. We thank the climate modelling groups. We acknowledge the Earth System...

Buildings and infrastructure also need to adapt to the changing climate. Updating structural design standards is crucial to improving European climate resilience and ensuring the safety of constructions, which are expected to suffer from changes in atmospheric variables and from more frequent and intense extreme weather events.

In 2017, the Joint Research Centre (JRC) - the European Commission's science and knowledge service - established the scientific network on adaptation of structural design to climate change. This network of experts, which includes the CMCC Foundation, is dedicated to studying how research can help decision-makers take predicted changes in the climate system into account when amending the Eurocodes, the European standards for structural design.

The role of expected increases in temperature in Europe over the coming decades is at the centre of two new reports produced by the network, the first focused on thermal actions on structures (Thermal design of structures and the changing climate) and the other on corrosion in a changing climate (Expected implications of climate change on the corrosion of structures).

In their contribution to these publications, CMCC researchers from the REMHI division - Regional Models and geo-Hydrological Impacts - analyzed temperature variations and other atmospheric variables expected over the next 50 years, a period that usually represents the service life of a structure built today. The study used the results of the projections included in the EURO-CORDEX ensemble.

The first study considered the "worst-case" scenario (RCP8.5) - the high-emissions scenario, which assumes greenhouse gas emissions continue growing at current rates - as a reference to investigate the case study of Italy, finding a relevant temperature increase for the entire country by 2070.

"Taking as a reference the maximum and minimum temperature levels that are expected to occur at least once in 50 years, we found a significant increase in both the maximum values of the maximum temperature - which in some areas of Italy can reach +6°C - and the maximum values of the minimum temperature - with variations up to +8°C in the mountain ranges," explains Guido Rianna, CMCC researcher and one of the authors of the study. "The increase in minimum temperature may not be that relevant for buildings, as it implies that constructions will be exposed to less rigid temperatures than today, and therefore less stress. Instead, the increase in the maximum expected temperature could lead to the need for a revision of building standards to ensure the safety of constructions: linear structures such as bridges and viaducts, for example, are subject to thermal expansion."

The second publication is about a study - conducted on a European scale - on the expected variation in air temperatures and relative humidity in 2070 due to climate change, aimed at understanding to what extent these atmospheric variables may affect the corrosion of buildings in the future. Indeed, increasing temperature and relative humidity can accelerate the corrosion process of steel structures or bars embedded in reinforced concrete, undermining their resistance and therefore threatening the safety of buildings.

"Climate simulations tell us that temperatures in the next 50 years are increasing significantly throughout Europe, albeit with regional differences," continues Rianna. "The extent of this increase is between 3 and 5°C on average and depends on the climate change mitigation measures that will be implemented." Here too, the authors explain, an amendment of the Eurocodes may be necessary, in order to take into account the acceleration of the corrosion process in buildings induced by climate change and provide for measures to limit it. Future changes in relative humidity, the study explains, are not significant. Indicating that the real engine of corrosion processes of structures on a European scale will be represented by increases in temperature, rather than humidity.

"These publications are the result of a series of studies aimed at supporting the definition and revision of the European standards for structural design most suited to the world of the future," says Paola Mercogliano, director of the REMHI division of the CMCC Foundation. "After having analyzed, in the past, the impact of snowpack and in these recent studies, thermal impact, the next step will be to study the impact of wind. Our ultimate goal is to support policy-makers and builders with sound services and information for the update of current structural design standards, considering the various atmospheric phenomena and the different types of constructions, in order to allow for the implementation of effective policies and adaptation actions."

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

Crayfish 'trapping' fails to control invasive species

image: Despite being championed by a host of celebrity chefs, crayfish 'trapping' is not helping to control invasive American signal crayfish, according to new research by UCL and King's College London.

Image: 
Eleri Pritchard

Despite being championed by a host of celebrity chefs, crayfish 'trapping' is not helping to control invasive American signal crayfish, according to new research by UCL and King's College London.

There have been grave concerns within the science community and amongst conservationists that American signal crayfish are wiping out other species of crayfish across Europe - including Britain's only native crayfish, the endangered white-clawed crayfish.

In their new study, published in the Journal of Applied Ecology, the researchers find that trapping is ineffective in determining and controlling signal crayfish numbers, as the vast majority of individuals are too small to catch using standard baited traps.

'Trapping to eat' American signal crayfish has been promoted as a potential control measure in recent decades, but scientists warn that this method may exacerbate the problem, as it inadvertently incentivises members of the public to spread the species to new habitats and greatly increases the risk of accidental catches of the strictly protected native species.

Co-author and PhD researcher, Eleri Pritchard (UCL Geography), said: "Invasive signal crayfish from the US were introduced to England in the 1970s. Since then they have spread rapidly, displacing native crayfish, impacting fish and damaging ecosystems."

"While celebrity chefs and conservation charities have, with good intentions, promoted trapping and foraging as a way to control American signal crayfish, our research shows trapping to be ineffective. We are also concerned that trapping risks spreading the fungal pathogen, called crayfish plague, which is lethal to native European crayfish."

The study took place in an upland stream in North Yorkshire, UK, and compared the effectiveness of three methods used for surveying population numbers: baited funnel trapping, handsearching, and a novel 'triple drawdown technique', which involves draining a short section of stream in a carefully controlled way and counting the number of crayfish present, including infants.

To measure effectiveness, scientists compared all methods for determining population size, and for understanding the prevalence of crayfish invasion and the ecological impact.

The triple drawdown technique proved significantly more accurate in determining crayfish populations. Through this method, the researchers identified signal crayfish densities up to 110 per square metre, far exceeding previous estimates for similar streams made by trap sampling.

The triple drawdown method also showed that most of the crayfish caught were tiny (smaller than a 1 pence piece), with only 2.3% of the population large enough to be caught in standard traps. Crucially, signal crayfish can become sexually mature before reaching a 'trappable' size, allowing populations to reproduce despite best efforts to control them through trapping.
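
The arithmetic behind the "trappable fraction" is easy to sketch: given a size distribution for the population and a minimum size retained by traps, count the share above the threshold. Both the distribution parameters and the 50 mm threshold below are assumptions for illustration, not the study's measurements.

```python
# Share of a simulated crayfish population large enough to be trapped.
import numpy as np

rng = np.random.default_rng(4)
size_mm = rng.lognormal(mean=2.8, sigma=0.6, size=1000)  # assumed size distribution

TRAP_THRESHOLD_MM = 50  # assumed minimum size retained by a standard baited trap

trappable = np.mean(size_mm >= TRAP_THRESHOLD_MM)
print(f"trappable fraction: {trappable:.1%}")
# If individuals mature below the threshold, the untrapped majority keeps
# reproducing, which is why trapping alone cannot control the population.
```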

Co-author and recognised UK crayfish expert, Paul Bradley (PBA Ecology), said: "We now have strong data to show that trapping does not help to control invasive crayfish. On the contrary, there is growing evidence that trapping furthers the spread.

"In the short-term, conservation efforts should re-focus on promotion of aquatic biosecurity, and in the longer-term we need further research to better understand the invasion biology of American signal crayfish. This will help to inform more effective and sustainable approaches to the management and control of this problematic invasive species".

The scientists strongly recommend that recreational crayfish trapping be curtailed to prevent the further spread of these invasive species. They believe more focus should be placed on initiatives like the UK's national 'Check Clean Dry' campaign, aimed at educating regular water users on the way that non-native invasive aquatic plants and animals can unwittingly be transported between waterbodies on contaminated clothes and equipment, as well as by fish stocking and water transfer schemes.

Dr Emily Smith, Environment Manager at Angling Trust, said: "Signal crayfish have a devastating impact on our waters, harming fish populations, increasing economic costs for fisheries and causing a nuisance for anglers.

"These findings present a new chapter in management practices, providing invaluable intelligence for fisheries and angling clubs, supporting them in refining practical solutions to controlling this highly aggressive species."

The research was carried out with PBA Applied Ecology and funded by the Natural Environment Research Council through the London NERC DTP.

Credit: 
University College London

The deep sea is slowly warming

WASHINGTON--New research reveals temperatures in the deep sea fluctuate more than scientists previously thought and a warming trend is now detectable at the bottom of the ocean.

In a new study in AGU's journal Geophysical Research Letters, researchers analyzed a decade of hourly temperature recordings from moorings anchored at four depths in the Atlantic Ocean's Argentine Basin off the coast of Uruguay. The depths represent a range around the average ocean depth of 3,682 meters (12,080 feet), with the shallowest at 1,360 meters (4,460 feet) and the deepest at 4,757 meters (15,600 feet).

They found all sites exhibited a warming trend of 0.02 to 0.04 degrees Celsius per decade between 2009 and 2019 - a significant warming trend in the deep sea where temperature fluctuations are typically measured in thousandths of a degree. According to the study authors, this increase is consistent with warming trends in the shallow ocean associated with anthropogenic climate change, but more research is needed to understand what is driving rising temperatures in the deep ocean.

"In years past, everybody used to assume the deep ocean was quiescent. There was no motion. There were no changes," said Chris Meinen, an oceanographer at the NOAA Atlantic Oceanographic and Meteorological Laboratory and lead author of the new study. "But each time we go look we find that the ocean is more complex than we thought."

The challenge of measuring the deep

Researchers today are monitoring the top 2,000 meters (6,560 feet) of the ocean more closely than ever before, in large part due to an international program called the Global Ocean Observing System. Devices called Argo floats that sink and rise in the upper ocean, bobbing along in ocean currents, provide a rich trove of continuous data on temperature and salinity.

The deep sea, however, is notoriously difficult to access and expensive to study. Scientists typically take its temperature using ships that lower an instrument to the seafloor just once every ten years. This means scientists' understanding of the day-to-day changes in the bottom half of the ocean lags far behind their knowledge of the surface.

Meinen is part of a team at NOAA carrying out a rare long-term study at the bottom of the ocean, but until recently, the team thought the four devices they had moored at the bottom of the Argentine Basin were just collecting information on ocean currents. Then Meinen saw a study by the University of Rhode Island showcasing a feature of the device he had been completely unaware of. A temperature sensor was built into the instrument's pressure sensor used to study currents and had been incidentally collecting temperature data for the entirety of their study. All they had to do was analyze the data they already had.

"So we went back and we calibrated all of our hourly data from these instruments and put together what is essentially a continuous 10-year-long hourly record of temperature one meter off the seafloor," Meinen said.

Dynamic depths

The researchers found at the two shallower depths of 1,360 and 3,535 meters (4,460 feet and 11,600 feet), temperatures fluctuated roughly monthly by up to a degree Celsius. At depths below 4,500 meters (14,760 feet), temperature fluctuations were more minute, but changes followed an annual pattern, indicating seasons still have a measurable impact far below the ocean surface. The average temperature at all four locations went up over the course of the decade, but the increase of about 0.02 degrees Celsius per decade was only statistically significant at depths of over 4,500 meters.
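
At its core, extracting such a trend is a regression of temperature on time. The sketch below fits a straight line to a synthetic ten-year hourly series and expresses the slope in degrees per decade; the real analysis must also test significance against noisy, autocorrelated records, which is why only the deepest sites' trends passed that bar.

```python
# Linear warming trend from a synthetic ten-year hourly temperature record.
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(10 * 365 * 24)  # ten years of hourly samples
temps = (1.5                                  # mean bottom temperature, deg C
         + 0.02 * hours / hours[-1]           # imposed 0.02 deg C/decade trend
         + rng.normal(0, 0.005, hours.size))  # millidegree-scale noise

slope_per_hour, intercept = np.polyfit(hours, temps, 1)
print(f"trend: {slope_per_hour * 10 * 365 * 24:.3f} deg C per decade")
```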

According to the authors, these results demonstrate that scientists need to take the temperature of the deep ocean at least once a year to account for these fluctuations and pick up on meaningful long-term trends. In the meantime, others around the world who have used the same moorings to study deep sea ocean currents could analyze their own data and compare the temperature trends of other ocean basins.

"There are a number of studies around the globe where this kind of data has been collected, but it's never been looked at," Meinen said. "I'm hoping that this is going to lead to a reanalysis of a number of these historical datasets to try and see what we can say about deep ocean temperature variability."

A better understanding of temperature in the deep sea could have implications that reach beyond the ocean. Because the world's oceans absorb so much of the world's heat, learning about the ocean's temperature trends can help researchers better understand temperature fluctuations in the atmosphere as well.

"We're trying to build a better Global Ocean Observing System so that in the future, we're able to do better weather predictions," Meinen said. "At the moment we can't give really accurate seasonal forecasts, but hopefully as we get better predictive capabilities, we'll be able to say to farmers in the Midwest that it's going to be a wet spring and you may want to plant your crops accordingly."

Credit: 
American Geophysical Union

Researchers discover a cell type responsible for cardiac repair after infarction

image: Adrián Ruiz-Villalba is the first author of an international study that has identified the heart cells in charge of repairing the damage caused to this organ after infarction

Image: 
University of Malaga

Adrián Ruiz-Villalba, a researcher at the Faculty of Science of the University of Malaga (UMA) who is also a member of the Andalusian Center for Nanomedicine and Biotechnology (BIONAND) and the Biomedical Research Institute of Malaga (IBIMA), is the first author of an international study that has identified the heart cells in charge of repairing the damage caused to this organ after infarction. The study was recently published in the American Heart Association journal Circulation, the world's leading journal dedicated to cardiovascular research.

These "reparative" cells consist in a population of cardiac fibroblasts that play a crucial role in generating the collagenous scar required to prevent ventricular wall rupture. This research, carried out with scientists from the Center for Applied Medical Research of Pamplona (CIMA) and the University Clinic of Navarra, also reveals the molecular mechanisms that activate these cells and regulate their function.

This discovery, according to the researchers, will enable the identification of new therapeutic targets and the development of targeted therapies to control the cardiac repair process following infarction.

Characterization of reparative cardiac fibroblasts

Cardiac fibroblasts are one of the essential components of the heart. These cells play a key role in maintaining the structure and mechanics of this vital organ. "Recent studies have shown that fibroblasts do not respond homogeneously to heart injury, so the purpose of our study was to define their heterogeneity during ventricular remodeling and understand the mechanisms that regulate the function of these cells", explains Felipe Prósper, researcher of CIMA and the University Clinic of Navarra.

"Through single-cell transcriptomics (single-cell RNA-seq analysis) we identified a subpopulation within cardiac fibroblasts that we named Reparative Cardiac Fibroblasts (RCF) for their role after cardiac injury. We verified that, when a patient suffers a heart attack, these RCF activate to provide a fibrotic response that generates a collagenous scar that prevents heart tissue rupture", says the researcher, who is also member of the Cell Therapy Network (TerCel) and the Health Research Institute of Navarra (IdiSNA).

CTHRC1, a protein related to the collagen that is essential for the healing process

At a molecular level, these scientists have discovered that RCF are characterized by a unique transcriptional profile, that is, a specific information pattern for the expression of the genes involved in their cardiac function. "Among the main differential markers of the transcriptome of these cells we identified the CTHRC1 protein (Collagen Triple Helix Repeat Containing 1), a molecule that is essential in the fibrotic response after myocardial infarction. Specifically, this protein participates in the collagen synthesis of the cardiac extracellular matrix and is crucial to the ventricular remodeling process", says the researcher Adrián Ruiz-Villalba.

According to Ruiz-Villalba, these results suggest that RCF drive the scarring of cardiac injuries by secreting the CTHRC1 protein. This molecule could therefore be considered a biomarker of the physiological state of a damaged heart and a potential therapeutic target for patients who have had a myocardial infarction or who suffer from dilated cardiomyopathy.

This study falls within the line of research on cardiac fibrosis of the "Cardiovascular Development and Angiogenesis" group led by José María Pérez Pomares, professor in the Department of Animal Biology of the Faculty of Science of the UMA. The group studies the cellular and molecular mechanisms underlying these pathologies using different animal models, in collaboration with national and international basic and clinical teams.

'Single-Cell RNA-seq Analysis Reveals a Crucial Role for Collagen Triple Helix Repeat Containing 1 (CTHRC1) Cardiac Fibroblasts after Myocardial Infarction' is the result of five years of work by more than 30 authors. Other R&D&I centers also participated in the study, including NavarraBiomed (Pamplona), the Maine Medical Center Research Institute (USA), the CeMM Research Centre for Molecular Medicine of the Austrian Academy of Sciences (Vienna) and KU Leuven (Belgium).

Credit: 
University of Malaga

Perovskite materials: Neutrons show twinning in halide perovskites

image: Dr. Michael Tovar working at FALCON at the neutron source BER II.

Image: 
HZB

A good ten years ago, research teams discovered the class of semi-organic halide perovskites, which have since risen rapidly to prominence as new materials for solar cells. Within a few years, these mixed organic-inorganic semiconductors achieved efficiencies of over 25 percent. They take their name from their basic structure, which is very similar to that of the mineral perovskite (CaTiO3) but contains other components: halide anions, lead cations and organic molecular cations.

In the most important compound of the class, methylammonium lead iodide, CH3NH3PbI3 (usually abbreviated MAPI), which was also studied here, the molecular cations are methylammonium cations and the anions are iodide anions. Although more than 4,000 publications on halide perovskites appeared in 2019 alone, their structure is not yet fully understood. For MAPI this was attributed, among other things, to the fact that the material is produced as polycrystalline films at elevated temperature, and it was assumed that twinning occurs when the films are cooled to room temperature.

The formation of twins is complex and can significantly change the material properties, which makes the process worth investigating more closely. "We have now crystallised MAPI at room temperature and analysed the crystals thus formed with the Laue camera FALCON at BER II," says Dr. Joachim Breternitz, HZB. Together with his colleagues Prof. Susan Schorr and Dr. Michael Tovar, he was able to determine from the data that crystals grown at room temperature also form twins. This provides new insight into the crystallisation and growth process of MAPI. "Our results indicate that the crystallisation nuclei have a higher symmetry than the bulk crystals," explains Breternitz.

With these insights, the synthesis of these technologically important thin films can be optimised in a targeted way.

The neutron source BER II provided neutrons for research until its scheduled shutdown in December 2019. "This was one of our last experiments at FALCON on BER II, and I hope that we were able to make useful contributions right up to the end," says Breternitz.

Credit: 
Helmholtz-Zentrum Berlin für Materialien und Energie

A study indicates that hair loss might be prevented by regulating stem cell metabolism

Hair follicle stem cells, which promote hair growth, can prolong their life by switching their metabolic state. In experiments conducted with mice, a research group active in Helsinki and Cologne, Germany, has demonstrated that a protein called Rictor holds a key role in the process.

The study was published in the Cell Metabolism journal.

New information on mechanisms that regulate stem cells

Ultraviolet radiation and other environmental factors damage our skin and other tissues every day, and the body continuously removes and renews the damaged tissue. On average, humans shed 500 million cells and 1.5 grams of hair every day.

The dead material is replaced by specialised stem cells that promote tissue growth. Tissue function is dependent on the activity and health of these stem cells, as impaired activity results in the ageing of the tissues.

"Although the critical role of stem cells in ageing is established, little is known about the mechanisms that regulate the long-term maintenance of these important cells. The hair follicle with its well understood functions and clearly identifiable stem cells was a perfect model system to study this important question," says Sara Wickstrom.

Reduced metabolic flexibility in stem cells underlies hair loss

At the end of the hair follicle's regenerative cycle, the moment a new hair is created, stem cells return to their specific location and resume a quiescent state. The key finding of the new study is that this return to the stem cell state requires a change in the cells' metabolic state: they switch from glutamine-based metabolism and cellular respiration to glycolysis, a shift triggered by signalling through a protein called Rictor in response to the low oxygen concentration in the tissue. Correspondingly, the study demonstrated that the absence of the Rictor protein impaired this return to the stem cell state, initiating a slow exhaustion of the stem cells and age-related hair loss.

The research group created a genetic mouse model to study the function of the Rictor protein, observing that hair follicle regeneration and cycling were significantly delayed in mice lacking the protein. Ageing mice with Rictor deficiency showed a gradual decrease in their stem cells, resulting in hair loss.

Groundwork for developing hair loss drug therapies

Further research will now be conducted to investigate how these preclinical findings could be utilised in human stem cell biology and potentially also in drug therapies that would protect hair follicles from ageing. In other words, the mechanisms identified in the study could possibly be utilised in preventing hair loss.

"We are particularly excited about the observation that the application of a glutaminase inhibitor was able to restore stem cell function in the Rictor-deficient mice, proving the principle that modifying metabolic pathways could be a powerful way to boost the regenerative capacity of our tissues," Wickstrom explains.

Credit: 
University of Helsinki

Cancer-killing T cells 'swarm' to tumors, attracting others to the fight

When immune system T cells find and recognise a target, they release chemicals that attract more T cells, which then swarm to help subdue the threat, shows a new study published today in eLife.

The discovery of this swarming behaviour, and the chemical attractants that immune cells use to direct swarms towards tumours, could one day help scientists develop new cancer therapies that boost the immune system. This is particularly important for solid tumours, which so far have been less responsive to current immunotherapies than cancers affecting blood cells.

"Scientists have previously thought that cancer-killing T cells identified tumours by randomly searching for them or by following the chemical trails laid by other intermediary immune cells," says lead author Jorge Luis Galeano Niño, a PhD graduate at UNSW Sydney. "We wanted to investigate this further to see if it's true, or whether T cells locate tumours via another mechanism."

Using 3D tumour models grown in the laboratory and in mouse models, the team showed that cancer-killing T cells can home in on tumour cells independently of intermediary immune cells. When the T cells find and recognise a tumour, they release chemical signals that attract more T cells, which sense the signals through a receptor called CCR5 and form a swarm. "These cells coordinate their migration in a process reminiscent of the swarming observed in some insects and in another type of immune cell called neutrophils, which help the body respond to injury and pathogens," Galeano Niño says.

After confirming their results using computer modelling, the team genetically engineered human cells called chimeric antigen receptor (CAR)-T cells and showed they also swarm toward a 3D glioblastoma tumour grown in the laboratory.
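To give a flavour of what such a computer model can look like, here is a deliberately simple agent-based sketch in Python: T cells perform a random walk biased up a chemoattractant gradient, and every cell that reaches the tumour adds to the attractant, recruiting the rest. All names and parameter values are illustrative assumptions, not the authors' model.

import numpy as np

rng = np.random.default_rng(0)
n_cells, n_steps = 200, 500
tumour = np.array([0.0, 0.0])                  # tumour fixed at the origin
cells = rng.uniform(-50, 50, size=(n_cells, 2))
arrived = np.zeros(n_cells, dtype=bool)
attractant = 1.0                               # baseline signal from the tumour

for _ in range(n_steps):
    # Vector from each cell to the tumour and its length
    delta = tumour - cells
    dist = np.linalg.norm(delta, axis=1, keepdims=True) + 1e-9
    # Drift up the chemoattractant gradient: speed grows with signal level,
    # falls off with distance, and is capped to keep the walk stable
    drift = (delta / dist) * np.minimum(attractant / dist, 2.0)
    cells += drift + rng.normal(scale=1.0, size=cells.shape)
    # Cells reaching the tumour 'activate' and secrete more attractant,
    # the positive feedback that produces a swarm (cf. CCR5 signalling)
    newly = (dist[:, 0] < 2.0) & ~arrived
    arrived |= newly
    attractant += 0.5 * newly.sum()

print(f"{arrived.sum()} of {n_cells} cells reached the tumour")

In a toy run like this, the first arrivals sharply raise the attractant level, after which the remaining cells converge much faster, the qualitative signature of swarming rather than independent random search.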

CAR-T cells are currently being used to treat certain types of blood cancer. But the new findings suggest that it might also be possible to train these cells to attack solid tumours.

"Although this is fundamental research and at an early stage, the swarming mechanism could be exploited in the future to target CAR-T cells to solid tumours, potentially leading to enhanced immunotherapies that are more effective at infiltrating and destroying these types of tumours," says senior author Maté Biro, EMBL Australia Group Leader at the Single Molecule Science node, UNSW.

"It will also be important to determine whether silencing the swarming mechanism could be beneficial in dampening overzealous T-cell responses following transplant surgery, in autoimmune conditions, or associated with viral infections," he adds.

Credit: 
eLife

Scientists shed new light on mechanisms of malaria parasite motility

image: New insight on the molecular mechanisms that allow malaria parasites to move and spread disease within their hosts has been published today in the open-access eLife journal.

Image: 
Public domain (Pixabay)

New insight on the molecular mechanisms that allow malaria parasites to move and spread disease within their hosts has been published today in the open-access eLife journal.

The movement and infectivity of the parasite Plasmodium falciparum, and ultimately its ability to spread malaria among humans, rely on a large molecular complex called the glideosome. The new findings provide a blueprint for the design of future antimalarial treatments that target both the glideosome motor and the elements that regulate it.

Parasites from the genus Plasmodium, including the deadliest species Plasmodium falciparum, are responsible for half a million deaths from malaria each year. As these parasites are becoming resistant to current artemisinin-based therapies, there are significant efforts to develop new vaccines and preventive treatments.

"This is especially crucial since climate change threatens to increase the reach of the Anopheles mosquitoes that carry the parasites," says lead author Dihia Moussaoui, a PhD student at Institut Curie, Sorbonne University, CNRS, Paris, France. "We wanted to take a deeper look into the molecular mechanisms that enable these parasites to move among the cells of their hosts in order to identify potential new targets for interventions."

The core of the glideosome in Plasmodium parasites is an essential Myosin A motor (PfMyoA), a primary target for current drugs against malaria. PfMyoA is a critical molecule in the parasite life cycle, partly because it powers the fast motility needed for the parasite's motile, spore-like stage. The molecule has a conserved globular motor domain and a lever arm that binds two molecular 'light chains', PfELC and MTIP.

In their study, Moussaoui and the Institut Curie team, in collaboration with the Trybus laboratory at the University of Vermont, US, captured the first X-ray structures of the full-length PfMyoA motor in two states of its motor cycle in Plasmodium falciparum. Their work revealed that a unique priming of the PfMyoA lever arm results from specific lever arm/motor domain interactions, allowing for a larger powerstroke to enhance speed of movement.

The lever arm typically contains amino acid sequences called IQ motifs that bind molecular light chains. In PfMyoA, both the first IQ motif and the PfELC that binds to it are so degenerate in their sequence that the existence of an essential light chain has only been recognised in recent studies.

Further analysis of the X-ray structures by the team showed that PfELC is essential for the invasion of red blood cells by Plasmodium falciparum and is a weak link in the assembly of a fully functional glideosome, providing a second novel target for antimalarials.

"The structures described here provide a precise blueprint for designing drugs that could target PfELC binding or PfMyoA full-length motor activity," concludes senior author Anne Houdusse, Team Leader at Institut Curie. "Such treatments would diminish glideosome function, hindering the motility of Plasmodium parasites at the most infectious stage of their life cycle and thereby preventing the development of disease."

Credit: 
eLife