Text me about cervical cancer

Image: Ford learned that women want to receive text messages about their health care, but presently can't because of federal medical privacy and telecommunication laws. (Photo by National Cancer Institute on Unsplash)

An estimated 14,480 new cases of invasive cervical cancer will be diagnosed in the United States this year, according to the American Cancer Society. Many of these cases could be prevented or cured with better education, from screening to treatment, grounded in improved provider-patient communication, says a Michigan State University researcher.

The issue is particularly acute for Black women, said Sabrina Ford, an associate professor in the Department of Obstetrics, Gynecology and Reproductive Biology within MSU's College of Human Medicine. Ford's research was published online Feb. 1 in the journal Gynecologic Oncology.

"More Black women were being screened for cervical cancer (compared to white women) but they were still dying from cervical cancer at twice the rate," she said. "It didn't make sense."

When Black women were told they had an abnormal cervical cancer finding on their Pap test screening, they often failed to follow up with their medical provider. The reason for this is complicated and two-pronged. One prong is about education and information. Providing clear clinical information in an easily accessible form is key to patient engagement.

"Culture does come into play because Black women do get their information from family, friends and personal experience," she said. "Sometimes there is medical mistrust, shame or fear and so, some women delay or don't follow up."

The other prong is about communication, specifically how medical providers communicate with their patients. A doctor trying not to unnecessarily alarm a patient might not tell the patient enough, for instance that the Pap test screens for cancer. And a one-page flyer about cervical cancer handed to a patient may easily get lost, or never be read or understood, depending on the patient's health literacy.

Ford learned that there need to be improvements on both prongs in order to bridge the disparity gap. Women want to receive text messages about their health care, Ford said, but presently can't because of federal medical privacy and telecommunication laws. With regulatory changes, patients could consent to receive provider text messages when they fill out initial office paperwork.

Black women also reported using their online patient portal, which provides another opportunity to educate and advise patients so they can make informed decisions.

Another gap is consistency of messaging: medical providers should give patients uniform information on all fronts - in the office, on the patient portal, and in flyers, pamphlets or text messages.

"We can't blame the patient. We can't blame the doctor either when the communication isn't clear," she said. "I want to move the needle forward on cervical cancer and HPV. They are highly preventable, curable and could be eradicated."

Credit: 
Michigan State University

Global warming helps invasive species flourish - study models likely combined effects on ecosystems

Increased global temperatures help invasive species establish themselves in ecosystems, new research led by a Swansea University bioscientist has shown.

The study, published by the Royal Society, gives an insight into the probable combined effects of species invasions, which are becoming more common, and global warming.

Climate warming and biological invasions result in the loss of species. They also alter the structure of ecosystems and the ways in which species interact.

While there is already extensive research on how climate change and invasions affect species and ecosystems, we know surprisingly little about their combined effect, acting together in synergy.

This is where the new study marks an important step forward. The work, funded by the EU Horizon programme, involved Dr Miguel Lurgi from the College of Science working with colleagues from the Institut national de recherche pour l'agriculture, l'alimentation et l'environnement (INRAE) and the Centre National de la Recherche Scientifique (CNRS) in France.

The team used mathematical simulations to investigate how temperature influences invasions in complex food webs comprising 30 species. They paid particular attention to the combined - synergistic - effects.

The aim was to provide a theoretical model for how ecological communities are likely to respond to the joint effects of warming and invasions.

The model accounted for factors such as reproduction and death rates, average species body size, and interactions between species - such as predators attacking prey.

The team simulated what happens when an alien species is introduced into an ecosystem. They then ran the simulation forward in time using 40 different temperature values from 0 to 40 degrees Celsius.

This allowed them to model the combined effects on the ecosystem of temperature rises and of the new species being introduced.

They analysed the simulation results to assess the effects of temperature on food web properties before invasion, invasion success, and the effects of invasions on community structure and stability.
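
The press release does not reproduce the model equations, but the design it describes (a 30-species community, an invader introduced at low abundance, and 40 temperature values between 0 and 40 degrees Celsius) can be sketched in a few lines. The Python below is a toy stand-in, not the authors' bioenergetic food-web model: it uses a generalized Lotka-Volterra community with Boltzmann-Arrhenius temperature scaling, a common choice in this literature, and every rate and interaction strength is invented.

```python
# Toy sketch of the study design: one random 30-species community, one rare
# invader, simulated across 40 temperatures. Illustrative parameters only.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
S = 30  # resident species, as in the study

# Draw one community so that only temperature varies across runs.
r0 = rng.normal(0.5, 0.1, S + 1)                  # intrinsic growth rates
A0 = -rng.exponential(0.05, (S + 1, S + 1))       # pairwise interaction strengths
np.fill_diagonal(A0, -1.0)                        # self-limitation
x0 = np.concatenate([rng.uniform(0.1, 1.0, S), [1e-3]])  # invader starts rare

def arrhenius(T_c, E=0.65, k=8.617e-5, Tref_c=20.0):
    """Boltzmann-Arrhenius scaling of biological rates with temperature."""
    T, Tref = T_c + 273.15, Tref_c + 273.15
    return np.exp(E * (T - Tref) / (k * T * Tref))

def simulate(T_c, t_end=500.0):
    scale = arrhenius(T_c)
    r, A = r0 * scale, A0 * scale
    def dxdt(t, x):
        x = np.maximum(x, 0.0)                    # clip numerical negatives
        return x * (r + A @ x)
    final = solve_ivp(dxdt, (0.0, t_end), x0, rtol=1e-8).y[:, -1]
    return (final[:S] > 1e-6).sum(), bool(final[S] > 1e-6)

for T in np.linspace(0, 40, 40):                  # the 40-temperature sweep
    survivors, invaded = simulate(T)
    print(f"T={T:5.1f} C  residents surviving={survivors:2d}  invader established={invaded}")
```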

They found:

Warmer temperatures modified community structure and dynamics that in turn facilitated invasions.

Warmer temperatures mostly amplified the impacts of invasions on communities when compared with their colder counterparts.

Temperature effects on invasions are mostly indirect and mediated by changes in community structure and stability.

Dr Miguel Lurgi of Swansea University, lead researcher, said:

"Warming and invasions are driving major changes to our ecosystems, and it's essential that we understand their combined effects.

Our study provides a first step in that direction, analysing the synergistic effects of temperature and invasions on communities.

Overall, we found that temperature and invasion act synergistically to increase the rate of species loss, creating smaller and more connected networks.

We have seen with COVID-19 how mathematical modelling has been crucial in understanding the likely spread and impact of the virus.

Similarly, our work provides theoretical expectations for the likely response of ecological communities to the joint effects of warming and invasions".

Credit: 
Swansea University

Mobility data used to respond to COVID-19 can leave out older and non-white people

Information on individuals' mobility--where they go as measured by their smartphones--has been used widely in devising and evaluating ways to respond to COVID-19, including how to target public health resources. Yet little attention has been paid to how reliable these data are and what sorts of demographic bias they possess. A new study tested the reliability and bias of widely used mobility data, finding that older and non-White voters are less likely to be captured by these data. Allocating public health resources based on such information could cause disproportionate harms to high-risk elderly and minority groups.

The study, by researchers at Carnegie Mellon University (CMU) and Stanford University, appears in the Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, a publication of the Association for Computing Machinery.

"Older age is a major risk factor for COVID-19-related mortality, and African-American, Native-American, and Latinx communities bear a disproportionately high burden of COVID-19 cases and deaths," explains Amanda Coston, a doctoral student at CMU's Heinz College and Machine Learning Department, who led the study as a summer research fellow at Stanford University's Regulation, Evaluation, and Governance Lab. "If these demographic groups are not well represented in data that are used to inform policymaking, we risk enacting policies that fail to help those at greatest risk and further exacerbating serious disparities in the health care response to the pandemic."

During the COVID-19 pandemic, mobility data have been used to analyze the effectiveness of social distancing policies, illustrate how people's travel affects transmission of the virus, and probe how different sectors of the economy have been affected by social distancing. Yet despite the high-stakes settings in which this information has been used, independent assessments of the data's reliability are lacking.

In this study, the first independent audit of demographic bias of a smartphone-based mobility dataset used in the response to COVID-19, researchers assessed the validity of SafeGraph data. This widely used mobility dataset contains information from approximately 47 million mobile devices in the United States. The data come from mobile applications, such as navigation, weather, and social media apps, where users have opted in to location tracking.

When COVID-19 began, SafeGraph released much of its data for free as part of the COVID-19 Data Consortium to enable researchers, nonprofits, and governments to gain insight and inform responses. As a result, SafeGraph's mobility data have been used widely in pandemic research, including by the Centers for Disease Control and Prevention, and to inform public health orders and guidelines issued by governors' offices, large cities, and counties. Researchers in this study sought to determine whether SafeGraph data accurately represent the broader population.

SafeGraph has reported publicly on the representativeness of its data. But the researchers suggest that because the company's analysis examined demographic bias only at Census-aggregated levels and did not address the question of demographic bias for inferences specific to places of interest (e.g. voting places), an independent audit was necessary.

A major challenge in conducting such an audit is the lack of demographic information--SafeGraph data do not contain demographics such as age and race. In this study, researchers showed how administrative data can provide the demographic information necessary for a bias audit, supplementing the information gathered by SafeGraph. They used North Carolina voter registration and turnout records, which typically include information on age, gender, and race, as well as voters' travel to a polling location on Election Day. Their data came from a private voter file vendor that combines publicly available voter records. In all, the study included 539,000 voters from North Carolina who voted at 558 locations during the 2018 general election. The researchers deemed this sample highly representative of all voters in that state.

The study identified a sampling bias in the SafeGraph data that under-represents two high-risk groups, which the authors called particularly concerning in the context of the COVID-19 pandemic. Specifically, older and minority voters were less likely to be captured by the mobility data. This could lead jurisdictions to under-allocate important health resources, such as pop-up testing sites and masks, to vulnerable populations.

"While SafeGraph information may help people make policy decisions, auxiliary information, including prior knowledge about local populations, should also be used to make policy decisions about allocating resources," suggests Alexandra Chouldechova, assistant professor of statistics and public policy at CMU, who coauthored the study.

The authors also call for more work to determine how mobility data can be made more representative, including asking firms that provide this kind of data to be more transparent about its sources (e.g., identifying which smartphone applications were used to access the information).

Among the study's limitations, the authors note that in the United States, voters tend to be older and include more White people than the general population, so the study's results may underestimate the sampling bias in the general population. Additionally, since SafeGraph provides researchers with an aggregated version of the data for privacy reasons, researchers could not test for bias at the individual voter level. Instead, the authors tested for bias at physical places of interest, finding evidence that SafeGraph is more likely to capture traffic to places frequented by younger, largely White visitors than to places frequented by older, largely non-White visitors.
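
As a concrete picture of that place-level test, the sketch below uses hypothetical counts and column names (not the study's data or code): it computes, for each polling place, the fraction of actual voters the mobility dataset captured, then checks whether that capture rate falls with voter age or the non-White share. An unbiased dataset would show no such correlation.

```python
# Hypothetical illustration of a place-level capture-rate bias check.
import pandas as pd

polls = pd.DataFrame({
    "poll_id":          [1, 2, 3, 4],
    "turnout":          [950, 1200, 800, 1100],  # ground truth: voters who showed up
    "observed_visits":  [190, 300, 120, 260],    # visits seen in the mobility data
    "median_voter_age": [38, 44, 61, 57],
    "share_nonwhite":   [0.15, 0.22, 0.48, 0.55],
})

# Capture rate: fraction of actual voters the mobility dataset "sees".
polls["capture_rate"] = polls["observed_visits"] / polls["turnout"]

# Under unbiased sampling, the capture rate is uncorrelated with demographics.
print(polls[["capture_rate", "median_voter_age", "share_nonwhite"]]
      .corr().loc["capture_rate"])
```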

More generally, the study shows how administrative data can be used to overcome the lack of demographic information, which is a common hurdle in conducting bias audits.

Credit: 
Carnegie Mellon University

Oncotarget: Genomic and neoantigen evolution in head and neck squamous cell carcinoma

Image: Properties of patients who have neoantigens in shared genes. (A) The total number of neoantigens was graphed for primary tumors with neoantigens in Ryr3 (n = 4), DNAH7 (n = 5), TTN (n = 5), or no neoantigens (Pri neoAg-, n = 13), or relapse tumors with neoantigens in TTN (n = 4), PIK3CA (n = 5), USH2A (n = 5), or no neoantigens (Rel neoAg-, n = 12). The numbers under the X axis are the mean number of neoantigens. (Correspondence to - Brian A. Van Tine - bvantine@wustl.edu)

Oncotarget published "Genomic and neoantigen evolution from primary tumor to first metastases in head and neck squamous cell carcinoma" which reported that prior work has characterized changes in the mutation burden between primary and recurrent tumors; however, little work has characterized the changes in neoantigen evolution.

These authors characterized genomic and neoantigen changes between 23 paired primary and recurrent head and neck squamous cell carcinoma (HNSCC) tumors.

Within these tumors, they identified 6 genes which have predicted neoantigens in 4 or more patients.

Within the HNSCC tumors examined in this Oncotarget research paper, a subset of patients harbor neoantigens in shared genes.

The presence of neoantigens in these shared genes may promote an anti-tumor immune response which controls tumor progression.

Dr. Brian A. Van Tine from Washington University in St. Louis, St. Louis Children's Hospital and the Siteman Cancer Center said, "Head and neck cancers are a group of heterogeneous tumors with an estimated 644,000 new cases per year worldwide."

The infiltration of immune cells, including T cells, into tumors is associated with improved outcomes and longer survival in HNSCC.

The infiltrating T cells release granules containing perforin and granzyme A and B which directly kill tumor cells or release other cytokines and chemokines that promote the anti-tumor immune response and alter the tumor microenvironment.

For example, infiltrating T cells release interferon gamma which increases expression of PD-L1 and CTLA-4, which may increase the efficacy of immune checkpoint therapy.

While multiple studies have characterized changes in mutation burden in HNSCC when comparing primary and metastatic tumors, no studies have characterized the shifting neoantigen burden between primary and metastatic tumors within HNSCC.

In this Oncotarget study, the authors characterized the mutational and neoantigen burden between primary and first recurrence tumors in 23 patients with HNSCC.

The Van Tine research team concluded in their Oncotarget report that there is a shifting neoantigen burden: primary tumors contain unique neoantigens, and the recurrent/metastatic tumors contain different unique neoantigens.

Patients who have neoantigens in these shared genes tend to have higher total numbers of neoantigens.

What is clear is that patients with neoantigens in these shared genes also tend to have increased duration of survival with disease.

The increase in neoantigens and duration of survival with disease tends to be associated with increased CD3/CD8 T-cell density in the tumor and increased CD8A expression.

This suggests that these shared neoantigens are associated with increased CD8 T-cell infiltration and increased cytotoxic activity, which may extend patients' lives.

Credit: 
Impact Journals LLC

Study explores how environmental exposures before conception may impact fetal development

GRAND RAPIDS, Mich. (March 16, 2021) -- Older age at the time of conception and alcohol consumption during pregnancy have long been known to impact fetal development.

Now, a new report published in Proceedings of the National Academy of Sciences suggests older age and alcohol consumption in the year leading up to conception also may have an impact by epigenetically altering a specific gene during development of human eggs, or oocytes.

Although the study did not determine the ultimate physical effects of this change, it provides important insights into the intricate relationship between environmental exposures, genetic regulation and human development.

"While the outcome of the change isn't clear, our findings give us a valuable look into how environmental factors affect gene regulation through epigenetics and imprinting," said Peter A. Jones, Ph.D., D.Sc. (hon), Van Andel Institute chief scientific officer and the study's senior author. "A better understanding of these complex processes further our understanding of health and disease and -- one day -- may be the foundation for new disease prevention measures."

Today's study centers on a gene called nc886, which is one of about 100 "imprinted" genes that pass from the mother to the fetus. Imprinted genes retain important chemical tags applied by either the mother or the father before conception. The result is an "epigenetic memory" through which non-genetic information, such as maternal age, may flow directly from parent to offspring. To date, nc886 is the only known imprinted gene that exhibits variation in the likelihood of imprinting based on maternal factors.

Using data from 1,100 mother-child pairs from South Africa, Jones and colleagues found the imprinting of nc886 was increased in older mothers but decreased in mothers who drank alcohol the year before conception. The team also investigated cigarette smoking but found no impact on imprinting of nc886.

A 2018 study published by Jones and his collaborators demonstrated that failure to imprint nc886 was associated with higher body mass in children at five years of age. Research by other groups has also linked failure to imprint nc886 with increased survival in people with acute myeloid leukemia, an aggressive type of blood cancer. Most recently, a group in Taiwan found that lack of imprinting of nc886 may reduce response to an anti-diabetic drug.

Credit: 
Van Andel Research Institute

Examining the value of lumbar spine surgery

PHILADELPHIA - Since the 1990s, the rate of spinal fusion to treat lower back pain has been on the rise. A new prospective clinical study published in the journal Neurosurgery, the official journal of the Congress of Neurological Surgeons, found that lumbar fusions were three times more likely to be effective and to achieve better patient outcomes when guidelines for fusion were followed. The results suggest that when surgeons operate outside of what the evidence-based literature supports, patients may not have significant improvements in their quality of life and could have increased pain or other limitations.

"Unfortunately, we don't know how many lumbar fusion surgeries are not based on evidence-based best practice, or how these patients do clinically," says neurosurgeon James Harrop, MD, MSHQS professor and chief of the Spine and Peripheral Nerve Surgery division at the Vickie and Jack Farber Institute for Neuroscience - Jefferson Health. "The study goal was to explore what drove the best clinical outcomes for lumbar fusion, specifically the outcomes that patients valued as important to them. Our results indicate that alignment with clinical guidelines was the best predictor of positive outcomes over all other factors we evaluated."

The researchers assessed 325 lumbar fusion cases for whether they conformed to the North American Spine Society's (NASS) lumbar fusion guidelines. Assessments were done in a blinded fashion and did not influence the decisions of a patient's surgical team. The researchers then followed the patients for six months after surgery and had them fill out a validated survey tool (the Oswestry Disability Index - ODI), which assessed patient-reported outcome measures (PROMs). Rather than examining surgical success, the ODI examines patient-centric outcomes including pain, walking, lifting, sleep, social life and sex life.

Dr. Harrop and his colleagues found that, of all the variables they examined - concordance with NASS guidelines, type of surgeon, whether it was the first back surgery or a revision - following guidelines was most strongly associated with positive patient ODI outcomes. The odds of a patient meeting the criteria for a successful surgery were three times higher in guideline-concordant cases, highlighting how effective lumbar fusions can be for the right patient. "This study shows that the majority of patients did well with a lumbar fusion," says Dr. Harrop. "But for the wrong patients, lumbar fusion can at best do nothing and at worst, create other problems."
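
To make the headline number concrete, the toy calculation below shows how an association like "three times more likely" is typically expressed as an odds ratio from a two-by-two table. The counts are invented, not the study's data.

```python
# Invented counts chosen to illustrate an odds ratio of 3, not the study's data.
concordant    = {"success": 150, "failure": 75}  # fusion followed NASS guidelines
nonconcordant = {"success": 40,  "failure": 60}  # fusion did not

odds_c  = concordant["success"] / concordant["failure"]
odds_nc = nonconcordant["success"] / nonconcordant["failure"]
print(f"odds ratio = {odds_c / odds_nc:.1f}")    # prints: odds ratio = 3.0
```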

The evidence-based guidelines, published by NASS, describe nine criteria, including trauma, deformity of the spine, certain kinds of axial back pain, tumor and infection. However, there is still some debate in the field as to when patients fit the criteria.

"For example, after a trauma with a "broken" back, where we know the spine is unstable - we also know a fusion can help," says Dr. Harrop. "That is a minority of the problems we see in practice. For our most common patient, one with degenerative diseases, spinal stability and instability has not been defined and understood as well as it should be. The NASS guidelines certainly help, but we need more research to understand what qualifies as normal range of movement, when is something pathologic and is immobilization through fusion the best option."

The research was initiated and funded by Jefferson as part of an effort to improve patient care and outcomes through rigorous study. "What we really need is support from insurance companies and other agencies to fund and promote research on best practices and evidence-based care. Without that, we cannot debate value," says Dr. Harrop.

Credit: 
Thomas Jefferson University

Boosting insect diversity may provide more consistent crop pollination services

Image: Honeybee on a cotton plant (Deepa Senapathi / University of Reading)

Fields and farms with a greater variety of insect pollinator species provide more stable pollination services to nearby crops year on year, according to the first study of its kind.

An international team of scientists led by the University of Reading carried out the first ever study of pollinator species stability over multiple years across locations all around the world, to investigate how to reduce fluctuations in crop pollination over time.

They found areas with diverse communities of pollinators, and areas with stable populations of dominant species, suffered fewer year-to-year fluctuations in pollinator numbers and species richness.
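
One standard way to quantify this kind of temporal stability is the inverse coefficient of variation of pollinator counts across survey years at a site. The sketch below uses invented numbers, not the study's data or exact method, to show the calculation for a species-rich site and a species-poor, boom-and-bust site.

```python
# Invented yearly counts illustrating an inverse-CV stability metric.
import numpy as np

# rows = sites, columns = pollinator abundance in successive survey years
abundance = np.array([
    [120, 130, 125, 128],   # diverse site: small year-to-year swings
    [200,  40, 180,  60],   # species-poor site: boom-and-bust years
])
richness = np.array([18, 4])  # species observed at each site (hypothetical)

# Temporal stability = mean / standard deviation across years.
stability = abundance.mean(axis=1) / abundance.std(axis=1, ddof=1)
for r, s in zip(richness, stability):
    print(f"richness={r:2d}  temporal stability={s:.2f}")
```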

The findings could influence how agricultural land is managed, as they highlight the need for land managers and farmers to consider interventions that support pollinator diversity on their land to provide long-term benefits to food production.

Dr Deepa Senapathi, an ecologist at the University of Reading who led the study, said: "Most previous research into pollinator stability has focused on space, not time. However, year-to-year variations in pollination services cause boom and bust cycles in crop harvests, which can have a damaging impact on agriculture and livelihoods globally.

"Stable and consistent pollination services are therefore important in underpinning businesses and livelihoods, as well as providing a reliable supply of food for retailers and consumers.

"This study has revealed that the secret to consistent crop harvests could be to encourage pollinator diversity on or near farmland. If we want pollinators to help us, first we need to help them, through land management decisions that preserve and increase the number of insect pollinator species."

Pollination by insects supports the reproduction of at least 78% of wild plants, while contributing to the pollination of 75% of major crops globally. However, wild insect pollinators are declining in areas of north-west Europe and North America where these crops are widely grown, making understanding how their populations change over time, and the impacts of this, increasingly important.

In the new study, the researchers collected wild pollinator data from hundreds of field sites in 12 different countries across six continents over multiple years.

They studied the populations of pollinating insects, such as bees, hoverflies, butterflies, and beetles, in the vicinity of 21 different crop species to explore influences on their stability over time. Their findings are published in the journal Proceedings of the Royal Society B.

Previous studies have shown high pollinator diversity increases fruit and seeds in wild plants, while low diversity affects population stability across landscapes and between seasons. However, the new study is the first to consider how this affects stability over multiple years in locations around the world.

In intensive farming systems where there may be a reduced diversity of pollinators, the research showed that protecting the dominant species there - potentially using managed pollinators to some extent - was also effective in providing long-term stability in pollination.

Land management techniques currently used to boost biodiversity include wildflower areas in and near arable habitats, and crop rotations that allow different species to thrive. Further research will be required to determine which methods work best in different locations.

Credit: 
University of Reading

Scientists shrink pancreatic tumors by starving their cellular 'neighbors'

Image: The scientists found elevated levels of "cell drinking," or macropinocytosis (green), in stromal cells exposed to a low-nutrient environment (right), compared to the normal nutrient levels that surround healthy tissue (left). (Sanford Burnham Prebys Medical Discovery Institute)

Scientists at Sanford Burnham Prebys Medical Discovery Institute demonstrated for the first time that blocking "cell drinking," or macropinocytosis, in the thick tissue surrounding a pancreatic tumor slowed tumor growth--providing more evidence that macropinocytosis is a driver of pancreatic cancer growth and is an important therapeutic target. The study was published in Cancer Discovery, a journal of the American Association for Cancer Research.

"Now that we know that macropinocytosis is 'revved up' in both pancreatic cancer cells and the surrounding fibrotic tissue, blocking the process might provide a 'double whammy' to pancreatic tumors," says Cosimo Commisso, Ph.D., associate professor and co-director of the Cell and Molecular Biology of Cancer Program at Sanford Burnham Prebys and senior author of the study. "Our lab is investigating several drug candidates that inhibit macropinocytosis, and this study provides the rationale that they should be advanced as quickly as possible."

Pancreatic cancer remains one of the deadliest cancers. Only one in ten people survive longer than five years, according to the American Cancer Society, and its incidence is on the rise. Pancreatic cancer is predicted to become the second-leading cause of cancer-related deaths in the U.S. by 2030.

"If we want to create a world in which all people diagnosed with pancreatic cancer will thrive, we first need to understand the key drivers of tumor growth," says Lynn Matrisian, Ph.D., chief science officer at the Pancreatic Cancer Action Network (PanCAN), who wasn't involved in the study. "This study suggests that macropinocytosis is an important target for drug development, and that progressing this novel treatment approach may help more people survive pancreatic cancer."

Starving the stroma

Pancreatic tumors are surrounded by an unusually thick layer of stroma, or glue-like connective tissue that holds cells together. This stromal barrier makes it difficult for treatments to reach the tumor, and fuels tumor growth by providing the tumor with nutrients. Commisso's previous research showed that rapidly growing pancreatic tumors obtain nutrients through macropinocytosis, an alternative route that normal cells don't use--and he wondered if macropinocytosis in the stroma may also fuel tumor growth.

To test this hypothesis, Commisso and his team blocked macropinocytosis in cells that surround and nourish pancreatic tumors, called pancreatic cancer-associated fibroblasts (CAFs), and co-transplanted the modified cells with pancreatic tumor cells into mice. The scientists found that tumor growth slowed in these mice--compared to control groups in which macropinocytosis remained active in the stroma--suggesting that the approach holds promise as a way to treat pancreatic cancer.

"We are excited about this approach because instead of removing the stroma, which can cause the tumor to spread throughout the body, we simply block the process that is driving tumor growth," says Yijuan Zhang, Ph.D., postdoctoral researcher in the Commisso lab and first author of the study. "We also deciphered the molecular signals that drive macropinocytosis in the stroma, providing new therapeutic avenues for pancreatic cancer researchers to explore."

Promising drug targets identified

Based on their ongoing macropinocytosis research, the scientists have identified many druggable targets that may inhibit the process. Bolstered by this study's findings, they will continue to investigate the promise of drug candidates that inhibit macropinocytosis as potential pancreatic cancer treatments.

"We already knew that macropinocytosis was a very important growth driver for pancreatic cancer, as well as lung, prostate, bladder and colon tumors," says Commisso. "This study further spurs our efforts to advance a drug that targets macropinocytosis, which may be the breakthrough we need to finally put an end to many deadly and devastating cancers."

Credit: 
Sanford Burnham Prebys

Stimulating the immune system to fight cancer

Image: Elisabeth Hennes searching for potential IDO1 inhibitors with the newly developed cell-based assay. (MPI of Molecular Physiology)

Our immune system is very successful when it comes to warding off viruses and bacteria. It also recognizes cancer cells as potential enemies and fights them. However, cancer cells have developed strategies to evade surveillance by the immune system and to prevent an immune response.

In recent years, fighting cancer with the help of the immune system has entered clinical practice and gained increasing importance as a therapeutic approach. Current therapies apply so-called immune checkpoint inhibitors. Immune checkpoints are located on the surface of cancer cells and slow down the immune response; targeting these checkpoints can release this tumour-induced brake. Another strategy developed by cancer cells to escape the immune response is the production of the enzyme indoleamine-2,3-dioxygenase (IDO1), which metabolizes tryptophan into kynurenine and thereby interferes with the immune response in two ways: on the one hand, the depletion of tryptophan negatively impacts the growth of T cells, a central component of the immune response, which seek out and block cancer cells; on the other hand, the kynurenine produced inhibits T cells in the immediate environment of the cancer cells.

New Inhibitors against IDO1 - The Quest is on

IDO1 is a focus of pharmaceutical research because of its cancer-driving effect. However, the search for IDO1 inhibitors has so far been only moderately successful, and the first clinically tested IDO1 inhibitor, epacadostat, showed hardly any effect in clinical trials. It has not yet been possible to prove whether the inhibitor really blocks IDO1 in the tumour and whether the dose used is sufficient.

In drug discovery, experimental test procedures, so-called assays, are employed to search large libraries of thousands of compounds for new disease modulators. For this purpose, mostly biochemical assays are applied, in which a biochemical reaction is impaired if a substance shows an inhibitory effect. However, this method has certain disadvantages and limitations, as the test takes place in a test tube and not in the natural, cellular environment of the enzyme. For instance, enzymes like IDO1 are less stable and more reactive outside the protective shell of the cell. In addition, cell-free assays cannot detect indirect inhibitors of the enzyme that, for example, interfere with its production or with essential co-factors.

Novel Cell-Based Assay Discovers IDO1 Inhibitors with Different Mechanisms of Action

Scientists led by Herbert Waldmann and Slava Ziegler have now developed a cell-based assay for the discovery of new IDO1 inhibitors that overcomes the limitations of cell-free assays. Elisabeth Hennes, a PhD student at the MPI and first author of the study, employed a sensor that measures the conversion of the IDO1 substrate tryptophan into the metabolic product kynurenine in cell culture and thereby detects IDO1 activity. Based on this test strategy, several highly potent inhibitors with different mechanisms of action were identified from a library of more than 150,000 chemical substances: these include substances that directly switch off IDO1 as well as indirect inhibitors that prevent the production of IDO1 itself or that of its important cofactor heme.
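
The release does not detail how hits were scored, but cell-based screens of this kind are commonly normalized against controls. The sketch below is a generic, hypothetical illustration rather than the MPI assay code: each compound's kynurenine readout is converted to percent IDO1 inhibition relative to uninhibited and fully inhibited controls, and strong inhibitors are flagged as hits.

```python
# Generic hit-scoring sketch for a 150,000-compound cell-based screen.
import numpy as np

rng = np.random.default_rng(1)
neg_ctrl = 1000.0   # kynurenine signal with IDO1 uninhibited (arbitrary units)
pos_ctrl = 50.0     # signal with IDO1 fully blocked

signals = rng.uniform(40.0, 1100.0, size=150_000)   # one readout per compound

# Percent inhibition: 0% at the negative control, 100% at the positive control.
inhibition = 100.0 * (neg_ctrl - signals) / (neg_ctrl - pos_ctrl)
hits = np.flatnonzero(inhibition > 80.0)            # flag >80% inhibition
print(f"{hits.size} primary hits out of {signals.size} compounds")
```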

"Unfortunately, previous attempts to find a compound that effectively stops the cancer-promoting activity of IDO1 in tumours have met with little success. However, the development of new compounds that can switch off IDO1 via different mechanisms of action could be a promising approach for immunotherapies in the fight against cancer. We hope that our newly developed cell-based assay could contribute to this area of research", says Slava Ziegler.

Credit: 
Max Planck Institute of Molecular Physiology

Modeling the probability of methane hydrate deposits on the seafloor

Image: Using Sandia National Laboratories' longstanding expertise in probabilistic modeling and machine learning algorithms from the U.S. Naval Research Laboratory, the researchers determined the probability of finding methane hydrate off the coast of North Carolina's Outer Banks. (Image courtesy of William Eymold/Sandia National Laboratories)

RALEIGH, N.C. -- Methane hydrate, an ice-like material made of compressed natural gas, burns when lit and can be found in some regions of the seafloor and in Arctic permafrost.

Thought to be the world's largest source of natural gas, methane hydrate is a potential fuel source, and if it "melts" and releases methane gas into the atmosphere, that methane is a potent greenhouse gas. For these reasons, knowing where methane hydrate might be located, and how much is likely there, is important.

A team of researchers from Sandia National Laboratories and the U.S. Naval Research Laboratory has developed a new system to model the likelihood of finding methane hydrate and methane gas, which was tested in a region of seafloor off the coast of North Carolina.

While methane hydrate deposits have been found in a variety of locations, there are significant unknowns in terms of how much methane hydrate exists on the seafloor and where. It is challenging to collect samples from the seafloor to find methane hydrate deposits. This is where Sandia's computer modeling expertise comes in.

"This is the first time someone has been able to approach methane hydrate distribution in the same way we approach weather forecasting," said Jennifer Frederick, a computational geoscientist and lead researcher on the project. "When you hear a weather forecast for a 60% chance of two inches of rain, you don't necessarily expect exactly two inches. You understand that there is uncertainty in that forecast, but it is still quite useful. In most places on the seafloor we don't have enough information to produce an exact answer, but we still need to know something about methane and its distribution. By using a probabilistic approach, similar to modern weather forecasting, we can provide useful answers."

The new system combines Sandia's longstanding expertise in probabilistic modeling with machine learning algorithms from the Naval Research Laboratory. The system was tested and refined by modeling the area around Blake Ridge, a hill on the seafloor 90 to 230 miles southeast of North Carolina's Outer Banks with known deposits of methane hydrate and methane gas.

The team shared their model for Blake Ridge and compared it with previous empirical data in a paper published on March 14 in the scientific journal Geochemistry, Geophysics, Geosystems.

'Forecasting' methane by combining uncertainty modeling with machine learning

The Naval Research Laboratory's Global Predictive Seafloor Model provides site-specific details on seafloor properties, such as temperature, overall carbon concentration and pressure. If data is missing for a certain region, the Naval Research Laboratory's model uses advanced machine-learning algorithms to estimate the missing value based on information about another area that may be geographically distant but similar geologically.

The research team imported the data from the Naval Research Laboratory's model into Sandia software that specializes in statistical sampling and analysis, called Dakota. Using Dakota, they determined the most likely value for influential seafloor properties, as well as the natural variation for the values. Then, in a statistical manner, they inserted a value from this expected range for each property into PFLOTRAN, another software maintained and developed at Sandia. PFLOTRAN models how chemicals react and materials move underground or under the seafloor. The team conducted thousands of methane production simulations of the Blake Ridge region. All the software involved in the system is open source and will be available for other oceanographic researchers to use.
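
The physics lives in PFLOTRAN and the sampling logic in Dakota, but the shape of the workflow, drawing many samples of uncertain seafloor properties and pushing each through a forward model, can be sketched generically. The Python below is a toy Monte Carlo stand-in, not Sandia's code: the phase boundary and all parameter ranges are placeholders chosen only to show how a probability of hydrate stability emerges from sampled uncertainty.

```python
# Toy Monte Carlo stand-in for the Dakota/PFLOTRAN workflow described above.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # samples per site

def hydrate_stable(depth_m, bottom_T_C):
    """Placeholder forward model: hydrate is stable when the local temperature
    sits below an illustrative pressure-dependent dissociation curve."""
    pressure_MPa = 1.025 * 9.81 * depth_m / 1000.0      # hydrostatic pressure
    dissociation_T = 8.9 * np.log(pressure_MPa) - 12.0  # illustrative curve
    return bottom_T_C < dissociation_T

for depth in (300, 500, 1000, 2500):
    # Uncertain bottom-water temperature at this depth (placeholder ranges).
    T = rng.normal(loc=max(2.0, 15.0 - depth / 200.0), scale=1.5, size=N)
    p = hydrate_stable(depth, T).mean()
    print(f"depth {depth:5d} m: P(hydrate stable) = {p:.2f}")
```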

"One of the biggest things we found is that there is almost no formation of methane hydrates shallower than 500 meters, which is to be expected given the temperature and pressure needed to form methane hydrate," said William Eymold, a postdoctoral fellow at Sandia and primary author of the paper. Solid methane hydrate is known to form in low-temperature, high-pressure environments where molecules of methane are trapped within well-organized water molecules.

The team also found methane gas formed closer to shore. They were able to compare their model to methane hydrate values calculated by past studies and samples collected a few decades ago by the National Science Foundation's Ocean Drilling Program, he said. For example, methane hydrate was detected in a seafloor sample collected from a hole drilled on Blake Ridge called Site 997.

"The fact that we predicted methane hydrate formation in similar amounts to past studies and observations really showed that the system appears to be working pretty well, and we will be able to apply it to other geographic locations that may have less data," Eymold said.

Importance of methane to the Navy and next steps

The location of methane hydrate deposits and methane gas near the seafloor is important to the Navy.

"Understanding how sound interacts with the seafloor is really important for any kind of naval operation," said Frederick. "Methane gas affects the acoustics dramatically. Even if only 1% or 2% of the pore space in the seafloor sediment is filled with a gas bubble, the speed of sound decreases a hundredfold, or more. This is a very large effect, and if you don't account for it properly, then you're not going to get precise acoustics."

Frederick compared a submarine using sonar to the early arcade game Breakout, where a player moves a paddle horizontally in order to keep a ball bouncing to destroy a wall of bricks. In this analogy, the seafloor serves as the "paddle" to reflect or refract sound waves, or the "ball," in order to get a complete view of obstacles in the ocean. If the paddle started to bounce the ball differently -- or held on to the ball for varying lengths of times -- depending on where the paddle was located, the game would become far more challenging.

So far, the team has used their system to create models of a region of the Norwegian Sea between Greenland and Norway and the shallow waters of the Arctic Ocean offshore of the North Slope of Alaska, two areas of interest to the Navy.

Frederick has also worked with a large team of international experts to assess the amount of methane and carbon dioxide stored in the shallow Arctic seafloor, and how sensitive those deposits would be to rising temperatures.

The team has also created a much coarser model of the whole globe and has started looking at the mid-Atlantic, where methane gas was spotted bubbling out of the seafloor a few years ago.

"It will be interesting to see if our model is able to predict these regions of methane seeps on the seafloor," Frederick said. "We'd like to see if we can predict the distribution of these methane seeps and whether they are consistent with the thermodynamic properties of methane-hydrate stability. When you see a seep, that means that there is a lot of gas beneath the seafloor. That will significantly impact how sound travels through the seafloor, and thus sonar. Also, these deposits could be a source of natural gas for energy production, will impact the ocean ecology and nutrient cycles, and if that gas reaches the atmosphere, it will have climate change implications."

Credit: 
DOE/Sandia National Laboratories

It's snowing plastic

The snow may be melting, but it is leaving pollution behind in the form of micro- and nano-plastics, according to a McGill study that was recently published in Environmental Pollution. The pollution is largely due to the relatively soluble plastics found in antifreeze products (polyethylene glycols), which can become airborne and be picked up by the snow.

The researchers used a new technique that they have developed to analyze snow samples collected in April 2019 in Montreal for both micro- and nano-sized particles of various plastics. The McGill technique is orders of magnitude more sensitive than any of the other current methods used for tracing plastic in the environment. It allows scientists to detect ultra-trace quantities of many of the most common soluble and insoluble plastics in snow, water, rainfall, and even in soil samples once they have been separated - down to the level of a picogram (or one trillionth of a gram). It is based on nano-structured mass spectrometry and, unlike other techniques currently in use, it is both recyclable and based on sustainable practices.

"It is important to be able to detect even trace quantities of plastics in the environment," says senior author, Parisa Ariya, from McGill's Departments of Chemistry and Atmospheric and Oceanic Sciences. "Though these plastics may be harmless in themselves, they can pick up toxic organic matter and heavy metals from the environment, which can damage human cells and organs."

The first author, Zi Wang, a PhD candidate at McGill, adds, "Our hope is that this new technique can be used by scientists in different domains to gain key information about the quantity of micro- and nano-plastics in urban environments in order to better address their impacts on the ecosystem and on human health."

Credit: 
McGill University

'We marry disorder with order'

Image: Professor Rustem Valiullin with a nuclear magnetic resonance spectrometer. (Photo: Swen Reichhold, Leipzig University)

Professor Rustem Valiullin and his research group at Leipzig University have found a way to more precisely determine the properties of mesoporous materials, because they can better account for the underlying disorder. Their article has been designated "ACS Editors' Choice" by the editors of the American Chemical Society journals, who recognise the "importance to the global scientific community" of the Leipzig researchers' work and see it as a breakthrough in the accurate description of phase transition phenomena in disordered porous materials.

In mesoporous materials, the pore openings are far smaller than in a normal sponge: their diameters range from 2 to 50 nanometres, far too small to see with the naked eye. Nevertheless, these materials have a number of interesting properties, including with regard to separating substances, which occurs as a function of molecule and pore size, for example.

Until now, scientific experiments have only been able to approximate the desired properties of these materials. "So it is more down to experience whether you can determine which of the structures can be used for which applications," says the physicist. The problem is that these materials are mostly disordered, which means that pores of different sizes in the material form a complex network structure.

Researchers at Leipzig University developed a model that determines the features that can be observed in such complex pore networks. Professor Valiullin describes the approach as follows: "We can statistically describe how the individual pores in these networks are coupled to each other. We marry disorder with order." This makes it possible to determine the physical phenomena that need to be understood in gas-liquid and solid-liquid phase transitions, for example. And not only in theory: using specially designed mesoporous model materials, it was possible to prove with the aid of modern nuclear magnetic resonance methods that the theoretical results can also be directly applied in practice.

This should make it easier to use such materials in the future, for example to help release drugs into the human body over an extended period - precisely when necessary and desired. Other potential applications for such materials include sensor technology or energy storage and conversion.

Credit: 
Universität Leipzig

Fermented wool is the answer

Why are the red, yellow, and blue colours used in the world's oldest knotted-pile carpet still so vivid and bright, even after almost two and a half thousand years? Researchers at Friedrich-Alexander-Universität Erlangen-Nürnberg have now been able to uncover the secrets behind the so-called Pazyryk carpet using high-resolution x-ray fluorescence microscopy. Their findings have been published in the journal Scientific Reports.

The Pazyryk carpet is the world's oldest example of a knotted-pile carpet and is kept at the State Hermitage Museum in St. Petersburg, Russia. The carpet, which was made from new wool in around 400 BC, is one of the most exciting examples of central Asian craftsmanship from the Iron Age. Ever since the carpet was discovered in 1947 by Russian archaeologists in a kurgan tomb in the Altai mountains, experts in traditional dyeing techniques have been puzzled by the vivid red, yellow and blue colours of the carpet, which lay buried in extreme conditions for almost two thousand five hundred years.

Red fibres under the microscope

Prof. Dr. Karl Meßlinger from the Institute of Physiology and Pathophysiology at FAU, and x-ray microscopy experts Dr. Andreas Späth and Prof. Dr. Rainer Fink from the Chair of Physical Chemistry II at FAU have now shed some light on this secret. Together, they came up with the idea of imaging the distribution of pigments across the cross section of individual fibres of wool using high-resolution x-ray fluorescence microscopy (μ-XRF). Dr. Späth and Prof. Fink conducted the experiments using the PHOENIX x-ray microscope at the Paul Scherrer Institute in Villigen, Switzerland. With a spatial resolution of three to five micrometres, the microscope is precise enough for individual fibres while offering high sensitivity for characteristic chemical elements.

Image: Fermenting sheep's wool before it is dyed increases the brilliance and longevity of the colour. Fermented wool can be identified by the raised position of the cuticle layers along the fibres or by the characteristic distribution of pigments across the cross-section of the fibres; the latter is shown in the x-ray fluorescence images (left). The cuticle layer has fallen off the fibre samples from the Pazyryk carpet (right), but the influence of the fermentation process is still visible when the fluorescence images (bottom) are compared with those of recently dyed samples.

The study focused mainly on red wool fibres, as the pigment Turkey red has been in use almost exclusively for centuries in Central Asia and in the Far East to create a characteristic shade of red. Turkey red is a metal organic complex made of alizarin, which is derived from the roots of the rose madder, and aluminium. 'μ-XRF imaging shows the characteristic distribution of the aluminium along the cross section of fermented wool fibres,' explains Dr. Andreas Späth. 'We found the same pattern in fibres from the Pazyryk carpet.' This is by far the earliest example of the fermentation technique and provides an insight into the already highly-developed techniques used by textile craftsmen and women in the Iron Age. The results also show the high potential of x-ray microscopy for analysing samples of textiles from archaeological sites. Up to now, research in this field has used scanning electron microscopy (SEM).

Fermented wool does not fade

Prof. Dr. Karl Meßlinger received a sample of some knots from the Pazyryk carpet 30 years ago, in 1991, for analysis with a scanning electron microscope. Together with Dr. Manfred Bieber, an expert in oriental textile dyeing techniques, he previously discovered that SEM imaging can identify wool fibres that have been treated with a special dyeing technique based on prior fermentation of the wool. The fermentation process increases the diffusion of the pigments towards the centre of the wool fibres, resulting in significantly more brilliant and permanent colours. Fermented wool can be identified by SEM imaging by means of the characteristic raised position of the outermost layers of the cuticle. 'Traditional Anatolian textile craftspeople are familiar with a less costly yet reliable technique,' says Meßlinger. 'They spread the dyed wool out on a field for several weeks in direct sunlight, then put it in a barn as bedding for their animals before rinsing it out in a stream or river. Only fermented wool retains its colour without any significant bleaching.'

Prof. Meßlinger and Dr. Bieber were able to trace the origins of this traditional dyeing technique back to the 17th century. However, the more the treated textile is used or the more it is exposed to the elements, the less remains of the cuticle layers. Most of the cuticle layers of the world-famous Pazyryk carpet were also missing. The researchers succeeded in proving the effect of fermentation by comparing the fluorescent images with those of samples of wool they fermented and dyed themselves.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

Death enables complexity in chemical evolution

Image: Sijbren Otto, Professor of Systems Chemistry at the University of Groningen, the Netherlands. His work on chemical evolution shows how complexity can evolve in artificial replicators. (Sylvia Germes)

Simple systems can reproduce faster than complex ones. So, how can the complexity of life have arisen from simple chemical beginnings? Starting with a simple system of self-replicating fibres, chemists at the University of Groningen have discovered that upon introducing a molecule that attacks the replicators, the more complex structures have an advantage. This system shows the way forward in elucidating how life can originate from lifeless matter. The results were published on 10 March in the journal Angewandte Chemie.

The road to answering the question of how life originated is guarded by Spiegelman's monster, named after the American molecular biologist Sol Spiegelman, who some 55 years ago described the tendency of replicators to become smaller when they were allowed to evolve. 'Complexity is a disadvantage during replication, so how did the complexity of life evolve?' asked Sijbren Otto, Professor of Systems Chemistry at the University of Groningen. He previously developed a self-replicating system in which self-replication produces fibres from simple building blocks and, now, he has found a way to beat the monster.

Death

'To achieve this, we introduced death into our system,' Otto explains. His fibres are made up of stacked rings that are self-assembled from single building blocks. The number of building blocks in a ring can vary, but stacks always contain rings of the same size. Otto and his team tweaked the system in such a way that rings of two different sizes were created, containing either three or six building blocks.

Under normal circumstances, fibres that are made up of small rings will outgrow the fibres with larger rings. 'However, when we added a compound that breaks up rings inside the fibres, we found that the bigger rings were more resistant. This means that the more complex fibres will dominate, despite the smaller rings replicating faster. Fibres that are made from small rings are more easily "killed".'
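
The logic of that result can be captured in a toy competition model: a fast-replicating "simple" species and a slower "complex" species share a finite pool of building blocks, and a destruction term hits the simple species harder. The rate constants below are invented and this is not the authors' kinetic model; it only illustrates how selective destruction flips the winner.

```python
# Toy replicator competition: destruction that favours complexity.
from scipy.integrate import solve_ivp

k_rep  = {"simple": 1.0, "complex": 0.6}   # simple fibres replicate faster
k_kill = {"simple": 0.8, "complex": 0.2}   # but are destroyed more easily

def dynamics(t, y):
    simple, cplx = y
    room = max(0.0, 1.0 - (simple + cplx))  # shared, finite building blocks
    return [
        k_rep["simple"]  * simple * room - k_kill["simple"]  * simple,
        k_rep["complex"] * cplx   * room - k_kill["complex"] * cplx,
    ]

sol = solve_ivp(dynamics, (0.0, 200.0), [0.1, 0.1])
print(f"final abundances: simple={sol.y[0, -1]:.3f}, complex={sol.y[1, -1]:.3f}")
# With these rates the complex replicator dominates despite replicating slower.
```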

Experiments

Otto acknowledges that the difference in complexity between the two types of fibres is small. 'We did find that the fibres from the larger rings were better catalysts for the benchmark retro-aldol reaction than the simpler fibres that are made from rings with three building blocks. But then again, this reaction doesn't benefit the fibres.' However, the added complexity protects the fibres from destruction, probably by shielding the sulphur-sulphur bonds that link the building blocks into rings.

'All in all, we have now shown that it is possible to beat Spiegelman's monster,' says Otto. 'We did this in a particular way, by introducing chemical destruction, but there may be other routes. For us, the next step is to find out how much complexity we can create in this manner.' His team is now working on a way to automate the reaction, which depends on a delicate balance between the processes of replication and destruction. 'At the moment, it needs constant supervision and this limits the time that we can run it.'

Variants

The new system is the first of its kind and opens a route to more complex chemical evolution. 'In order to achieve real Darwinian evolution that leads to new things, we will need more complex systems with more than one building block,' says Otto. The trick will be to design a system that allows for the right amount of variation. 'When you have unlimited variation, the system won't go anywhere, it will just produce small amounts of all kinds of variants.' In contrast, if there is very little variation, nothing really new will appear.

The results that were presented in the latest paper show that, starting from simple precursors, complexity can increase in the course of evolution. 'This means that we can now see a way forward. But the journey to producing artificial life through chemical evolution is still a long one,' says Otto. However, he has beaten the monster guarding the road to his destination.

Simple Science Summary

One of the big questions in science is how life can originate from lifeless matter. Chemists at the University of Groningen have developed a system in which self-replicating fibres evolve. However, self-replicating systems generally favour simpler replicators, since they replicate faster. Through 'survival of the simplest', such systems will never produce the complexity that is necessary for life. This problem was solved by adding a substance that can break up the replicators: it 'kills' them. It turned out that more complex replicators are protected against this destruction, which means that in the presence of death, complex replicators can outcompete simpler ones.

Credit: 
University of Groningen

Electromagnetic fields hinder spread of breast cancer, study shows

COLUMBUS, Ohio - Electricity may slow - and in some cases, stop - the speed at which breast cancer cells spread through the body, a new study indicates.

The research also found that electromagnetic fields might reduce the number of breast cancer cells that spread. The findings, published recently in the journal Bioelectricity, suggest that electromagnetic fields might be a useful tool in fighting cancers that are highly metastatic, meaning they are likely to spread to other parts of the body, the authors said.

"We think we can hinder metastasis by applying these fields, but we also think it may be possible to even destroy tumors using this approach," said Vish Subramaniam, senior author of the paper and former professor of mechanical and aerospace engineering at The Ohio State University. Subramaniam retired from Ohio State in December.

"That is unclear at this stage, but we are working on understanding that - how big should the electromagnetic field be, how close should it be to the tumor? Those are the next questions we hope to answer."

The study is among the first to show that electromagnetic fields could slow or stop certain processes of a cancer cell's metabolism, impairing its ability to spread. The electromagnetic fields did not have a similar effect on normal breast cells.

Travis Jones, lead author of the paper and a researcher at Ohio State, compared the effects to what might happen if something interfered with a group running together down a path.

The effect, Subramaniam said, is that some of the cancer cells slow down when confronted with electromagnetic fields.

"It makes some of them stop for a little while before they start to move, slowly, again," he said. "As a group, they appear to have split up. So how quickly the whole group is moving and for how long they are moving becomes affected."

The electromagnetic fields are applied to cancerous cells without touching them, said Jonathan Song, co-author of the paper, associate professor of mechanical and aerospace engineering at Ohio State and co-director of Ohio State's Center for Cancer Engineering.

Song compared the cancer cells with cars. Each cell's metabolism acts as fuel to move the cells around the body, similar to the way gasoline moves vehicles.

"Take away the fuel, and the car cannot move anymore," Song said.

The work was performed on isolated human breast cancer cells in a lab and has not been tested clinically.

The electromagnetic fields appear to work to slow cancer cells' metabolism selectively by changing the electrical fields inside an individual cell. Accessing the internal workings of the cell, without having to actually touch the cell via surgery or another more invasive procedure, is new to the study of how cancer metastasizes, Subramaniam said.

"Now that we know this, we can start to answer other questions, too," Subramaniam said. "How do we affect the metabolism to the point that we not only make it not move but we choke it, we completely starve it. Or can we slow it down to the point where it will always remain weak?"

Credit: 
Ohio State University