Study uncovers unexpected connection between gliomas, neurodegenerative diseases

A protein typically associated with neurodegenerative diseases like Alzheimer's might help scientists explore how gliomas, a type of cancerous brain tumor, become so aggressive.

The new study, conducted in mouse models and human brain tumor tissue and published in Science Translational Medicine, found significant expression of the protein TAU in glioma cells, especially in patients with better prognoses.

Patients with glioma are given a better prognosis when their tumor carries a mutation in a gene called isocitrate dehydrogenase 1 (IDH1). In this international collaborative study led by the Instituto de Salud Carlos III-UFIEC in Madrid, Spain, those IDH1 mutations stimulated the expression of TAU. The presence of TAU then acted as a brake on the formation of new blood vessels, which are necessary for the aggressive behavior of the tumors.

"We report that the levels of microtubule-associated protein TAU, which have been associated with neurodegenerative diseases, are epigenetically controlled by the balance between normal and mutant IDH1/2 in mouse and human gliomas," says co-author Maria G. Castro, Ph.D., a professor of neurosurgery and cell and developmental biology at Michigan Medicine (University of Michigan). "In IDH1/2 mutant tumors, we found that expression levels of TAU decreased with tumor progression."

That means levels of TAU could be used as a biomarker for tumor progression in mutant IDH1/2 gliomas, Castro says.

Credit: 
Michigan Medicine - University of Michigan

Groups publish statements on CT contrast use in patients with kidney disease

OAK BROOK, Ill. - The risk of administering modern intravenous iodinated contrast media in patients with reduced kidney function has been overstated, according to new consensus statements from the American College of Radiology (ACR) and the National Kidney Foundation (NKF), published in the journal Radiology.

Intravenous iodinated contrast media are commonly used with computed tomography (CT) to evaluate disease and to determine treatment response. Although patients have benefited from their use, iodinated contrast media have been withheld or delayed in patients with reduced kidney function because of the perceived risk of contrast-induced acute kidney injury. This practice can hinder a timely and accurate diagnosis in these patients.

"The historical fears of kidney injury from contrast-enhanced CT have led to unmeasured harms related to diagnostic error and diagnostic delay," said lead author Matthew S. Davenport, M.D., associate professor of radiology and urology at the University of Michigan in Ann Arbor, Michigan. "Modern data clarify that this perceived risk has been overstated. Our intent is to provide multi-disciplinary guidance regarding the true risk to patients and how to apply a consideration of that risk to modern clinical practice."

These consensus statements were developed to improve and standardize the care of patients with impaired kidney function who may need to undergo exams that require intravenous iodinated contrast media to provide the clearest images and allow for the most informed diagnosis.

In clinical practice, many factors are used to determine whether intravenous contrast media should be administered. These include probability of an accurate diagnosis, alternative methods of diagnosis, risks of misdiagnosis, expectations about kidney function recovery, and risk of allergic reaction. Decisions are rarely based on a single consideration, such as risk of an adverse event specifically related to kidney impairment. Consequently, the authors advise that these statements be considered in the context of the entire clinical scenario.

Importantly, the report outlines the key differences between contrast-induced acute kidney injury (CI-AKI) and contrast-associated acute kidney injury (CA-AKI). In CI-AKI, a causal relationship exists between contrast media and kidney injury, whereas in CA-AKI, a direct causal relationship has not been demonstrated. The authors suggest that studies that have not properly distinguished the two have contributed to the overstatement of risk.

"A primary explanation for the exaggerated perceived nephrotoxic risk of contrast-enhanced CT is nomenclature," Dr. Davenport said. "'Contrast-induced' acute kidney injury implies a causal relationship. However, in many circumstances, the diagnosis of CI-AKI in clinical care and in research is made in a way that prevents causal attribution. Disentangling contrast-induced AKI (causal AKI) from contrast-associated AKI (correlated AKI) is a critical step forward in improving understanding of the true risk to patients."

The statements answer key questions and provide recommendations for use of intravenous contrast media in treating patients with varying degrees of impaired kidney function.

Although the true risk of CI-AKI remains unknown, the authors recommend prophylaxis with intravenous normal saline for patients who have acute kidney injury or an estimated glomerular filtration rate (eGFR) below 30 mL/min per 1.73 m², who are not undergoing maintenance dialysis, and who have no contraindication such as heart failure. In individual, unusually high-risk circumstances (patients with multiple comorbid risk factors), prophylaxis may be considered for patients with an eGFR of 30-44 mL/min per 1.73 m² at the discretion of the ordering clinician.
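
Read as a decision rule, that recommendation can be sketched in a few lines of code. The sketch below is our own illustration of the thresholds quoted above; the function and argument names are hypothetical, and it is not clinical guidance:

    # A minimal sketch of the decision rule described above, with
    # hypothetical function and argument names; thresholds follow the
    # consensus statement, but this is illustrative, not clinical guidance.

    def saline_prophylaxis_advice(egfr: float, has_aki: bool,
                                  on_dialysis: bool, contraindicated: bool,
                                  high_risk: bool) -> str:
        """Rough prophylaxis logic for IV iodinated contrast."""
        if on_dialysis or contraindicated:
            return "no saline prophylaxis"
        if has_aki or egfr < 30:
            return "IV normal saline recommended"
        if 30 <= egfr <= 44 and high_risk:
            return "prophylaxis at ordering clinician's discretion"
        return "no prophylaxis indicated"

    print(saline_prophylaxis_advice(egfr=25, has_aki=False, on_dialysis=False,
                                    contraindicated=False, high_risk=False))
    # -> IV normal saline recommended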

The presence of a solitary kidney should not independently influence decision making regarding the risk of CI-AKI. Lowering the contrast media dose below a known diagnostic threshold should be avoided because it reduces diagnostic accuracy. Also, when feasible, medications that are toxic to the kidneys should be withheld by the referring clinician in patients at high risk. However, renal replacement therapy should not be initiated or altered solely because contrast media were administered.

The authors emphasize that prospective controlled data are needed in adult and pediatric populations to clarify the risk of CI-AKI.

Credit: 
Radiological Society of North America

Visits to pediatricians on the decline

image: A new study led by the University of Pittsburgh and UPMC Children's Hospital finds that commercially insured children are visiting the pediatrician less often.

Image: 
UPMC

PITTSBURGH, Jan. 21, 2020 - Commercially insured children in the U.S. are seeing pediatricians less often than they did a decade ago, according to a new analysis led by a pediatrician-scientist at the University of Pittsburgh and UPMC Children's Hospital of Pittsburgh.

But whether that's good or bad is unclear, the researchers say in the study, published today in JAMA Pediatrics.

"There's something big going on here that we need to be paying attention to," said lead author Kristin Ray, M.D., M.S., assistant professor of pediatrics in Pitt's School of Medicine. "The trend is likely a combination of both positive and negative changes. For example, if families avoid bringing their kids in because of worry about high co-pays and deductibles, that's very concerning. But if this is the result of better preventive care keeping kids healthier or perhaps more physician offices providing advice over the phone to support parents caring for kids at home when they've got minor colds or stomach bugs, that's a good thing."

Ray and her colleagues examined insurance claims data from 2008 through 2016 for children 17 years old and younger. The data came from a large commercial health plan that covers millions of children across all 50 states with a range of benefit options.

In that time span, primary care visits for any reason decreased by 14%.

Preventive care, or "well child" visits, increased by nearly 10%. This change occurred during the years when the Affordable Care Act eliminated co-pays for such visits. But that increase was eclipsed by a much larger decrease in problem-based visits for things such as illness or injury, with these visits declining by 24%. Among problem-based visits, decreases were seen for all types of diagnoses, except for psychiatric and behavioral health visits, which increased by 42%.

"This means that children and their families are visiting their pediatrician less throughout the year, presumably resulting in fewer opportunities for the pediatrician to connect with families on preventive care and healthy behaviors, like vaccinations and good nutrition," said Ray, also a pediatrician and director of health system improvements at UPMC Children's Community Pediatrics. "The question is: Why? We don't have the definitive answer, but our data give us some clues."

One possible explanation is that children are getting care elsewhere. Visits to urgent care, retail clinics and telemedicine consults for problem-based care increased during the study period. But that increase accounted for only about half of the decrease in visits to primary care pediatricians.

Higher out-of-pocket costs probably also explain why some parents aren't taking their children to the pediatrician for medical concerns, Ray said. During the time period studied, out-of-pocket costs for problem-based visits increased 42%, while inflation-adjusted median household income rose by only 5%. Previous studies have found that even $1-$10 increases in copayments are associated with fewer visits.

Other factors also could be at play, the research team noted. With more parents working, some may find it difficult to bring children in for care. And there may be less need for some visits. Vaccination has dramatically reduced rates of ear infections and hospitalizations. Pediatricians are being more careful with prescribing antibiotics, and this could be causing parents to watch children with cold symptoms for longer before seeking care. Recent research showing that children with ear or urinary tract infections do not always need to come back for rechecks also may have cut down on the number of visits for problems. And parents have ever-increasing amounts of information available to them online as they are deciding whether to seek care.

The drop in visits is not isolated to children. "This decline among children is echoed in other studies among younger and older adults," added senior author Ateev Mehrotra, M.D., M.P.H., associate professor of health care policy at Harvard Medical School. "Due to a variety of forces, Americans are not as connected with their primary care providers."

Credit: 
University of Pittsburgh

Opioid prescriptions affected by computer settings

Simple, no-cost computer changes can affect the number of opioid pills prescribed to patients, according to a new UC San Francisco study.

Researchers found that when default settings, which show a preset number of opioid pills, were modified downward, physicians prescribed fewer pills. Prescribing fewer pills could improve prescription practices and protect patients from developing opioid addictions.

The study was published Tuesday, January 21, 2020, in JAMA Internal Medicine.

"It's striking that even in the current environment, where doctors know about the risks from opioids and are generally thoughtful about prescribing them, this intervention affected prescribing behavior," said senior author Maria C. Raven, MD, MPH, chief of emergency medicine at UCSF and vice chair of the UCSF Department of Emergency Medicine. "The findings are really exciting because of their potential to impact patient care at a large level. Reducing the quantities of opioids prescribed may help protect patients from developing opioid use disorder."

Prescription opioids play a significant role in the ongoing national public health crisis that has taken a massive toll on many communities.

Some addictions stem from an initial opioid prescription for acute pain in individuals who never previously took the pain medications, adding to the overall tragedy. As a result, emergency departments, hospitals and government policymakers have worked to decrease opioid prescribing through provider education and published guidelines, with mixed success.

In an effort to determine whether default settings could influence quantities prescribed, investigators in the new study examined opioid prescribing at two emergency departments, UCSF Medical Center and Highland Hospital, a trauma center and safety-net teaching hospital in Oakland, between November 2016 and July 2017.

Over the course of 20 weeks, the researchers randomly changed the default settings on electronic medical records for commonly prescribed opioids such as oxycodone, Percocet, and Norco, for four weeks at a time. Before the study, the electronic medical records had defaults for pain medications of 12 pills at Highland and 20 pills at UCSF. The researchers used preset quantities of 5, 10 and 15 pills, and also tested a blank setting that forced physicians to enter a number. Physicians could increase or decrease the number to whichever they felt was most appropriate for each patient. Altogether, 4,320 opioid prescriptions were analyzed.

The researchers found that changing default quantities affected the number of pills prescribed. Lower defaults were associated with lower quantities of opioids prescribed and a lower proportion of prescriptions that exceeded prescribing recommendations from the federal Centers for Disease Control and Prevention.

The authors noted that they considered the risk of patient harm to be "very low," and that the risk of overprescribing was far greater than the risk of underprescribing.

"Every electronic health record throughout the country already has default settings for opioids," said lead author Juan Carlos Montoy, MD, PhD, assistant professor of emergency medicine at UCSF. "What we've shown is that default settings matter, and can be changed to improve opioid prescribing. Importantly, this is cost free and preserves physician autonomy to do what they think is best for each patient."

"Our findings add to a large body of research from behavioral economics that has shown that defaults can be used to change behavior," Montoy said. "The opioid epidemic is complex and this certainly won't fix it, but it is one more tool we can use to address it."

Credit: 
University of California - San Francisco

Montana State astrophysicist finds massive black holes wandering around dwarf galaxies

image: A new search led by Montana State University astrophysicist Amy Reines has revealed more than a dozen massive black holes in dwarf galaxies that were previously considered too small to host them, surprising scientists with their location within the galaxies.

Image: 
MSU Photo by Adrian Sanchez-Gonzalez

BOZEMAN -- A new search led by Montana State University has revealed more than a dozen massive black holes in dwarf galaxies that were previously considered too small to host them, and surprised scientists with their location within the galaxies.

The study, headed by MSU astrophysicist Amy Reines, searched 111 dwarf galaxies within a billion light years of Earth using the National Science Foundation's Karl G. Jansky Very Large Array at the National Radio Astronomy Observatory, two hours outside Albuquerque in the plains of New Mexico. Reines identified 13 galaxies that "almost certainly" host massive black holes and found something unexpected: The majority of the black holes were not in the location she anticipated.

"All of the black holes I had found before were in the centers of galaxies," said Reines, an assistant professor in the Department of Physics in the College of Letters and Science and a researcher in MSU's eXtreme Gravity Institute. "These were roaming around the outskirts. I was blown away when I saw this."

The eXtreme Gravity Institute brings together physicists and astronomers to study phenomena where the forces of gravity are so strong they blur the separation between space and time, such as the big bang, neutron stars and black holes.

There are two main types of black holes, incredibly dense regions of space with gravitational pulls strong enough to capture light. Smaller, stellar black holes form as large stars die and are roughly 10 times the mass of our sun, according to Reines. The other type, known as supermassive or massive black holes, tend to be found at the centers of galaxies and can have masses millions or even billions of times that of our sun. Scientists don't know how they are created.

The Milky Way, a spiral galaxy consisting of somewhere between 100 and 400 billion stars, has a massive black hole at its center, Sagittarius A*. Dwarf galaxies can be of any shape, but are much smaller than the Milky Way, with up to a few billion stars.

Reines' results confirm predictions from recent computer simulations by Jillian Bellovary, an assistant professor at Queensborough Community College in New York and a research associate at the American Museum of Natural History, which postulated that black holes may often be off-center in dwarf galaxies because of the way galaxies interact as they move through space. The findings may change how scientists look for black holes in dwarf galaxies in the future.

"We need to expand searches to target the whole galaxy, not just the nuclei where we previously expected black holes to be," Reines said.

Reines' paper, "A New Sample of (Wandering) Massive Black Holes in Dwarf Galaxies from High Resolution Radio Observations," was published Jan. 3 in The Astrophysical Journal, and Reines reported the findings at the American Astronomical Society meeting in Honolulu, Hawaii, on Jan. 5.

Reines has been searching the skies for black holes for a decade. As a graduate student at the University of Virginia, she focused on star formation in dwarf galaxies, but in her research she found something else that captured her interest: a massive black hole "in a little dwarf galaxy where it wasn't supposed to be."

Thirty million light years from Earth, the dwarf galaxy Henize 2-10 was previously believed to be too small to host a massive black hole. Conventional wisdom held that all massive galaxies with a spheroidal component have a massive black hole, Reines explained, while little dwarf galaxies do not. Yet Reines found one in the center of the dwarf galaxy. It was a "eureka" moment, she said. Her findings were published in the journal Nature in 2011, and Reines turned her research to searching for other black holes in dwarf galaxies.

"Once I started looking for these things on purpose, I started finding a whole bunch," Reines said.

Her next search of the universe shifted to optical data rather than radio signals. It uncovered over 100 possible black holes in the first systematic search of a parent sample of more than 40,000 dwarf galaxies. For her latest search, described in the paper released this month, Reines wanted to go back and look for radio signatures in that sample, which she said would allow her to find massive black holes in star-forming dwarf galaxies. Only one galaxy was identified using both methods.

"There are lots of opportunities to make new discoveries because studying black holes in dwarf galaxies is a new field," she said. "People are definitely captivated by black holes. They're mysterious and fascinating objects."

Reines' discoveries have poured new energy into the search for black holes in dwarf galaxies, opening up new areas of astrophysics as she and other scientists attempt to discover how these massive black holes form.

"When new discoveries break our current understanding of the way things work, we find even more questions than we had before," said Yves Idzerda, head of the Department of Physics at MSU.

Credit: 
Montana State University

Research supports new approach to mine reclamation

image: Land reclaimed using geomorphic techniques blends in with undisturbed terrain in the Gas Hills of Fremont County in central Wyoming.

Image: 
Wyoming DEQ

A new approach to reclaiming lands disturbed by surface mining is having the desired result of improving ecosystem diversity, including restoration of foundation species such as sagebrush, according to a study by University of Wyoming researchers.

The study by Associate Professor Kristina Hufford and graduate student Kurt Fleisher, in the Department of Ecosystem Science and Management, looked at former uranium and coal mine sites in central and southwest Wyoming reclaimed under the Wyoming Department of Environmental Quality's (DEQ) Abandoned Mine Land (AML) program. The research was published recently in the Journal of Environmental Management.

"We found that the areas reclaimed using the new techniques, called geomorphic reclamation, had greater species diversity and improved plant community structure when compared with areas reclaimed using traditional practices," Hufford says. "There is strong evidence that geomorphic reclamation may be a better candidate than traditional reclamation to restore foundation species such as sagebrush in Wyoming."

Traditional reclamation techniques generally have created landscapes with uniform topography and linear slopes, sometimes resulting in problems with erosion, as well as less-than-desired revegetation. Geomorphic reclamation is a relatively novel approach intended to mimic the topography of nearby undisturbed lands, with a wide variety of terrain that is stable and less susceptible to erosion.

DEQ's AML Division used both traditional and geomorphic techniques in reclaiming a former uranium mine in the Gas Hills of Fremont County and a former coal mine north of Rock Springs in Sweetwater County. The seeding of those sites was completed in 2007 and 2009, respectively. With funding from DEQ, the UW scientists examined the sites in the summers of 2017 and 2018 to compare plant growth.

While geomorphic techniques didn't result in landscapes exactly matching undisturbed rangeland at either site, the researchers found that geomorphic reclamation was more successful than traditional reclamation from several perspectives.

Most significantly, there was more plant diversity and species richness, including larger numbers of shrubs such as sagebrush and rabbitbrush. These native species are of particular importance to sage grouse, pronghorn and other wildlife species.

"The results of geomorphic reclamation for shrub recovery may have benefits for species that depend upon sagebrush," Hufford says.

The researchers did find that geomorphic reclamation was more successful at the Gas Hills site than the site north of Rock Springs. They say that could be a result of climate differences between the two locations; the fact that the Gas Hills seeding took place two years earlier; and the fact that more native topsoil was used at the Gas Hills site.

They also suggest that seed mixtures could be adjusted to include more native plant species and come closer to matching vegetation on surrounding undisturbed rangelands.

Still, the researchers wrote, "Our results suggest geomorphic reclamation may improve plant community diversity and wildlife habitat as a practical method for landscape-level restoration in post-mining sites."

The issue has particular relevance for Wyoming, where nearly 90,000 acres have been disturbed by surface mining and many more have been permitted for future mining.

Credit: 
University of Wyoming

Scanning Raman picoscopy: A new methodology for determining molecular chemical structure

image: (a) Schematic of scanning Raman picoscopy (SRP). When a laser beam is focused into the nanocavity between the atomistically sharp tip and substrate, a very strong and highly localized plasmonic field will be generated, dramatically enhancing the Raman scattering signals from the local chemical groups in a single molecule right underneath the tip. (b) Merged SRP image by overlaying four typical Raman imaging patterns shown on the right insets for four different vibrational modes. (c) Artistic view of the Mg-porphine molecule showing how four kinds of chemical groups (colored "Legos") are assembled into a complete molecular structure.

Image: 
©Science China Press

Precise determination of the chemical structure of a molecule is of vital importance to any molecule-related field and is the key to a deep understanding of its chemical, physical, and biological functions. Scanning tunneling microscopy and atomic force microscopy have outstanding abilities to image molecular skeletons in real space, but these techniques usually lack the chemical information necessary to accurately determine molecular structures. Raman scattering spectra contain abundant structural information on molecular vibrations. Different molecules and chemical groups exhibit distinct spectral features in Raman spectra, which can be used as the "fingerprints" of molecules and chemical groups. Therefore, the above-mentioned deficiency can in principle be overcome by combining scanning probe microscopy with Raman spectroscopy, as demonstrated by tip-enhanced Raman spectroscopy (TERS), which opens up opportunities to determine the chemical structure of a single molecule.

In 2013, a research group led by Zhenchao Dong and Jianguo Hou at the University of Science and Technology of China (USTC) demonstrated sub-nanometer-resolved single-molecule Raman mapping for the first time [Nature 498, 82 (2013)], driving the spatial resolution with chemical identification capability down to ~5 Å. Since then, researchers around the world have continued to develop this single-molecule Raman imaging technique to explore the ultimate limit of its spatial resolution and how it can best be utilized.

Recently, the USTC group published a research paper in National Science Review (NSR) entitled "Visually Constructing the Chemical Structure of a Single Molecule by Scanning Raman Picoscopy", pushing the spatial resolution to a new limit and proposing an important new application for the state-of-the-art technique. In this work, by developing a cryogenic ultrahigh-vacuum TERS system operating at liquid-helium temperatures and fine-tuning the highly localized plasmonic field at the sharp tip apex, they drove the spatial resolution down to 1.5 Å, the single-chemical-bond level, which enabled them to achieve full spatial mapping of various intrinsic vibrational modes of a molecule and to discover distinctive interference effects in symmetric and antisymmetric vibrational modes. More importantly, building on the Ångström-level resolution achieved and the new physical effect discovered, and combining these with a Raman fingerprint database of chemical groups, they propose a new methodology, coined Scanning Raman Picoscopy (SRP), to visually construct the chemical structure of a single molecule. This methodology highlights the remarkable ability of Raman-based scanning technology with an atomistically sharp tip to reveal molecular chemical structure in real space, just by "looking" at a single molecule optically, as schematically shown in Figure (a).

By applying the SRP methodology to a single magnesium porphyrin model molecule, the researchers at USTC obtained a set of real-space imaging patterns for different Raman peaks and found that these patterns show different spatial distributions for different vibrational modes. Take the typical C-H stretching vibration on the pyrrole ring as an example: in the antisymmetric vibration (3072 cm⁻¹) of two C-H bonds, their local polarization responses are opposite in phase. When the tip is located directly above the center point between the two bonds, the contributions from both bonds to the Raman signal cancel out, giving rise to the "eight-spot" feature in the Raman map of the whole molecule, with the best spatial resolution down to 1.5 Å. These "eight spots" correspond spatially to the eight C-H bonds on the four pyrrole rings of a magnesium porphyrin molecule, indicating that the detection sensitivity and spatial resolution have reached the single-chemical-bond level. Raman imaging patterns of other vibrational peaks also correspond well to the relevant chemical groups in terms of characteristic peak positions and spatial distributions [as shown in Figures (b) and (c)]. The correspondence provided by this simultaneously spatially and energy-resolved Raman imaging allows the researchers to correlate local vibrations with constituent chemical groups and to visually assemble the various chemical groups, in a "Lego-like" manner, into a whole molecule, thus realizing the construction of the chemical structure of a molecule.
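
The interference effect behind the "eight-spot" pattern can be captured in a minimal two-oscillator picture. The following is our own sketch of the cancellation the authors describe, not their full theory:

    % Two C-H oscillators at r_1 and r_2 respond with opposite phase in the
    % antisymmetric mode; E is the highly localized plasmonic field at the tip.
    \[
      I(\mathbf{r}_\mathrm{tip}) \;\propto\;
      \bigl|\,\alpha E(\mathbf{r}_\mathrm{tip}-\mathbf{r}_1)
            - \alpha E(\mathbf{r}_\mathrm{tip}-\mathbf{r}_2)\,\bigr|^2
    \]
    % At the midpoint between the bonds the two field factors are equal,
    % so the contributions cancel and the Raman signal vanishes there.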

Scanning Raman picoscopy is the first optical microscopy technique with the ability to visualize the vibrational modes of a molecule and to directly construct the structure of a molecule in real space. The protocol established in this proof-of-principle demonstration can be generalized to identify other molecular systems, and it can become an even more powerful tool with the aid of image recognition and machine learning techniques. The ability of Ångström-resolved scanning Raman picoscopy to determine the chemical structure of unknown molecules will undoubtedly arouse extensive interest among researchers in chemistry, physics, materials science, biology and beyond, and is expected to stimulate active research in these fields as SRP is developed into a mature and universal technology.

Credit: 
Science China Press

Mars' water was mineral-rich and salty

image: NASA's Curiosity rover has obtained mineralogical and chemical data on ancient lake deposits at Gale Crater, Mars. The present study reconstructs the water chemistry of the Gale paleolake based on Curiosity's data.

Image: 
NASA

Presently, Earth is the only known location where life exists in the Universe. This year the Nobel Prize in Physics was awarded to three astronomers who proved, almost 20 years ago, that planets are common around stars beyond the solar system. Life comes in various forms, from cell-phone-toting organisms like humans to the ubiquitous micro-organisms that inhabit almost every square inch of the planet Earth, affecting almost everything that happens on it. It will likely be some time before it is possible to measure or detect life beyond the solar system, but the solar system offers a host of sites where we might get a handle on how hard it is for life to start.

Mars is at the top of this list for two reasons. First, it is relatively close to Earth compared to the moons of Saturn and Jupiter (which are also considered good candidates for discovering life beyond Earth in the solar system, and are targeted for exploration in the coming decade). Second, Mars is exceptionally observable because it lacks a thick atmosphere like that of Venus, and so far there is pretty good evidence that Mars' surface temperature and pressure hover around the point at which liquid water--considered essential for life--can exist. Further, there is good evidence, in the form of observable river deltas and more recent measurements made on Mars' surface, that liquid water did in fact flow on Mars billions of years ago.

Scientists are becoming increasingly convinced that billions of years ago Mars was habitable. Whether it was in fact inhabited, or is still inhabited, remains hotly debated. To better constrain these questions, scientists are trying to understand the kinds of water chemistry that could have generated the minerals observed on Mars today, which were produced billions of years ago.

Salinity (how much salt was present), pH (a measure of how acidic the water was), and redox state (roughly a measure of the abundance of gases such as hydrogen [H2, which are termed reducing environments] or oxygen [O2, which are termed oxidising environments; the two types are generally mutually incompatible]) are fundamental properties of natural waters. As an example, Earth's modern atmosphere is highly oxygenated (containing large amounts of O2), but one need only dig a few inches into the bottom of a beach or lake today on Earth to find environments which are highly reduced.

Recent remote measurements on Mars suggest its ancient environments may provide clues about Mars' early habitability. Specifically, the properties of pore water within sediments apparently deposited in lakes in Gale Crater on Mars suggest these sediments formed in the presence of liquid water with a pH close to that of Earth's modern oceans. Earth's oceans are of course host to myriad forms of life, so it seems plausible that Mars' early surface environment was a place where contemporary Earth life could have lived; but it remains a mystery why evidence of life on Mars is so hard to find.

Credit: 
Tokyo Institute of Technology

'Ancient' cellular discovery key to new cancer therapies

image: The findings provide a new opportunity for cancer treatment strategies aimed at suppressing cell proliferation in the nutrient-poor tumour microenvironment, says Flinders Professor Janni Petersen.

Image: 
Photo: Flinders University

Australian researchers have uncovered a metabolic system which could lead to new strategies for therapeutic cancer treatment.

A team at Flinders University led by Professor Janni Petersen, together with the St Vincent's Institute of Medical Research, has identified a metabolic system, first in a yeast and now in mammals, that is critical for the regulation of cell growth and proliferation.

"What is fascinating about this yeast is that it became evolutionarily distinct about 350 million years ago, so you could argue the discovery, that we subsequently confirmed occurs in mammals, is at least as ancient as that," said Associate Professor Jonathon Oakhill, Head, Metabolic Signalling Laboratory at SVI in Melbourne.

This project, outlined in a new paper in Nature Metabolism, looked at two major signalling networks.

Often referred to as the body's fuel gauge, a protein called AMP-kinase, or AMPK, regulates cellular energy, slowing cell growth when cells don't have enough nutrients or energy to divide.

The other involves a protein complex called mTORC1 (TORC1 in yeast), which also regulates cell growth but increases cell proliferation when it senses high levels of nutrients such as amino acids, or signals such as insulin or growth factors.

A hallmark of cancer cells is their ability to over-ride these sensing systems and maintain uncontrolled proliferation.

"We have known for about 15 years that AMPK can 'put the brakes on' mTORC1, preventing cell proliferation" says Associate Professor Oakhill. "However, it was at this point that we discovered a mechanism whereby mTORC1 can reciprocally also inhibit AMPK and keep it in a suppressed state.

Professor Petersen, from the Flinders Centre for Innovation in Cancer in Adelaide, South Australia says the experiments showed the yeast cells "became highly sensitive to nutrient shortages when we disrupted the ability of mTORC1 to inhibit AMPK".

"The cells also divided at a smaller size, indicating disruption of normal cell growth regulation," she says.

"We measured the growth rates of cancerous mammalian cells by starving them of amino acids and energy (by depriving them of glucose) to mimic conditions found in a tumour.

"Surprisingly, we found that these combined stresses actually increased growth rates, which we determined was due to the cells entering a rogue 'survival' mode.

"When in this mode, they feed upon themselves so that even in the absence of appropriate nutrients the cells continue to grow.

"Importantly, this transition to survival mode was lost when we again removed the ability of mTORC1 to inhibit AMPK."

These findings provide a new opportunity for cancer treatment strategies aimed at suppressing cell proliferation in the nutrient-poor tumour microenvironment, the research concludes.

Credit: 
Flinders University

Advancing the application of genomic sequences through 'Kmasker plants'

image: Applications and methods of the bioinformatics tool "Kmasker plants" for the analysis of sequence data.

Image: 
Chris Ulpinnis / IPB Halle & Pixabay

Gatersleben, Jan. 20, 2020 - The development of next-generation sequencing (NGS) has enabled researchers to investigate genomes that might previously have been considered too complex or too expensive to study. Nevertheless, the analysis of complex plant genomes, which often contain an enormous amount of repetitive sequence, is still a challenge. Therefore, bioinformatics researchers from the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), Martin Luther University Halle-Wittenberg (MLU) and the Leibniz Institute of Plant Biochemistry (IPB) have now published "Kmasker plants", a program that allows the identification of repetitive sequences and thus facilitates the analysis of plant genomes.

In bioinformatics, the term k-mer is used to describe a nucleotide sequence of a certain length "k". By defining and counting such sequences, researchers can quantify repetitive sequences in the genome they are studying and assign them to corresponding positions. As early as 2014, researchers at IPK in Gatersleben used this approach to develop the in-silico (computer-based) tool "Kmasker". It was used to detect repetitions in the characterisation of the barley genome (Schmutzer et al., 2014).
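
As a minimal illustration of the k-mer idea (our own sketch, not the Kmasker implementation), counting fixed-length substrings is enough to flag repetitive stretches of a sequence:

    from collections import Counter

    def count_kmers(sequence: str, k: int) -> Counter:
        """Count every k-mer (substring of length k) in a nucleotide sequence."""
        return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

    # A k-mer that occurs many times marks a repetitive region.
    print(count_kmers("ATATATATGC", k=4).most_common(2))
    # -> [('ATAT', 3), ('TATA', 2)]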

The use of NGS is becoming more and more important, but the error-free assembly of complex genomes from NGS data is still a challenge. For this reason, the researchers recently decided to revive and expand the initial proof-of-concept project. Under the leadership of Dr. Thomas Schmutzer, formerly of the research group "Bioinformatics and Information Technology" at IPK and now affiliated with the MLU, scientists from the MLU, the IPK, Wageningen University & Research and the IPB Halle worked in close cooperation on the redesign and development of "Kmasker plants". This collaboration was largely supported by the two service centres "GCBN" and "CiBi" of the German Network for Bioinformatics Infrastructure "de.NBI".

"Kmasker plants" allows for the rapid and reference-free screening of nucleotide sequences using genome-wide derived k-mers. In extension to the previous version, the bioinformatics tool now also enables comparative studies between different cultivars or closely related species, and supports the identification of sequences suitable as fluorescence in situ hybridisation (FISH) probes or CRISPR/Cas9-specific guide RNAs. Furthermore, "Kmasker plants" has been published with a web service that contains the pre-computed indices for selected economically important crop plants, such as barley or wheat. Dr. Schmutzer emphasises that "this tool will enable plant researchers all over the world to test plant genomes and thus, for example, identify repeat free parts of their sequence of interest." Rather, he believes that the enhanced features will make it possible to detect sequence candidate regions that have multiplied in the genome of one species but are missing in other species or occur in smaller copy numbers. This is a common effect that contributes to phenotypic variation of agronomic importance in various crops. A significant example is the Vrn-H2 gene, which is present in a single copy in winter barley, while it is missing in barley spring lines.

The "Kmasker plants" web-service is now available as part of the IPK Crop Analysis Tool Suite (CATS) and therefore as a service of the de.NBI Service Platform. Alternatively, the "Kmasker plants" source code can directly be accessed and installed via GitHub.

Credit: 
Leibniz Institute of Plant Genetics and Crop Plant Research

Cybercrime: Internet erodes teenage impulse controls

Many teenagers are struggling to control their impulses on the internet, in a scramble for quick thrills and a sense of power online, potentially increasing their risks of becoming cyber criminals.

A new study by Flinders Criminology analysed existing links between legal online activities and cybercrime - for example, how viewing online pornography progresses to opening illegal content, and how motivations evolve from online gaming to hacking.

Newly published in the European Journal of Criminology, the study outlines why illegal online activity involving adolescents from 12 to 19 years of age is encouraged by the way the internet blurs normal social boundaries, tempting young users into wrongdoing they wouldn't contemplate in the outside world.

Flinders criminologist Professor Andrew Goldsmith says illegal online activity is especially attractive to adolescents already prone to curiosity and sneaky thrill-seeking, and that the internet encourages new, easily accessible levels of experimentation.

"The internet allows young people to limit their social involvement exclusively to particular associations or networks, as part of a trend we've termed 'digital drift'. From a regulatory perspective, we're finding this poses significant challenges as it degrades young people's impulse controls."

"It's becoming increasingly important to understand the connection between young people's emotional drivers and committing crimes, as well as human-computer interactions to establish why the internet easily tempts young users into digital piracy, pornography and hacking."

"We're using the word seduction to describe the processes and features intrinsic to the online environment that make online activity both attractive and compelling." "For some young people, the Internet is like a seductive swamp, very appealing to enter, but very sticky and difficult to get out of."

Professor Goldsmith says there needs to be a deeper understanding of the influential technologies regularly used by young people, recognizing that not all motivations for transgression indicate a deep criminal pathology or criminal commitment.

"Policy should consist of interventions that take into account the lack of worldly experience amongst many young offenders. Online technologies render the challenge of weighing up potential risks and harms from actions even harder. A propensity for thrill-seeking common especially among young males encouraged by the Internet can create a form of short-sightedness towards consequences."

"Effective government responses must reflect on the range of motivations young people bring to, and find in, their online behaviours, not least of all in order to garner support amongst young people when it comes to effective regulatory changes."

Credit: 
Flinders University

TB bacteria survive in amoebae found in soil

Scientists from the University of Surrey and University of Geneva have discovered that the bacterium which causes bovine TB can survive and grow in small, single-celled organisms found in soil and dung. It is believed that originally the bacterium evolved to survive in these single-celled organisms known as amoebae and in time progressed to infect and cause TB in larger animals such as cattle.

During the study, published in The ISME Journal, scientists sought to understand more about the bacterium Mycobacterium bovis (M. bovis), which causes bovine TB, and how it survives in different environments. To do this, scientists infected a type of amoeba known as Dictyostelium discoideum with M. bovis. Unlike other bacteria, which were digested and used as a food source by the amoebae, M. bovis was unharmed and continued to survive for two days. In-depth analysis showed that the bacterium uses the same genes to escape from amoebae that it uses to avoid being killed by immune cells in larger animals such as cattle and humans.

Scientists also discovered that M. bovis remained metabolically active and continued to grow, although at a slower pace, at lower temperatures than expected.

Previously it was thought the bacterium could only replicate at 37°C, the body temperature of cattle and humans; however, replication of the bacterium was identified at 25°C. Researchers believe that the bacterium's ability to adapt to ambient temperatures and survive in amoebae may partially explain high transmission rates of the bacterium between animals.

Bovine TB is a hugely underestimated problem worldwide and England has the highest incidence of infection in Europe. Cattle found to have bovine TB are legally required to be slaughtered due to the high risk of the disease entering the food chain and spreading to humans. 32,793 cattle were slaughtered in England in 2018 in a bid to curtail the spread of the disease.

Lead author Professor Graham Stewart, Head of the Department of Microbial Sciences at the University of Surrey, said: "Despite implementation of control measures, bovine TB continues to be a major threat to cattle and has an enormous impact on the rural economy. Understanding the biology behind the TB disease and how it spreads is crucial for a balanced discussion on this devastating problem and to developing preventative measures to stop its spread.

"An important additional benefit is that our research shows the potential for carrying out at least some future TB research in amoebae rather than in large animals."

Credit: 
University of Surrey

Kazan University chemists teach neural networks to predict properties of compounds

The international team works on a computational model able to predict the properties of new molecules based on the analysis of fundamental chemical laws. The project was supported by the Russian Science Foundation (project title: "Using AI methods for the planning of chemical synthesis").

Co-author and Associate Professor Timur Madzhidov explains, "We offered a way to incorporate preexisting chemical equations into machine learning frameworks. It was tested on the prediction of tautomeric constants and acidity, which are linked by the Kabachnik equation. Using the functional interdependency between them, the neural network learns to predict both properties."
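
For orientation, the kind of functional interdependency described in the quote can be illustrated by the standard thermodynamic-cycle relation between a tautomeric constant and the acidities of the two tautomers (a textbook relation; the paper's exact formulation may differ):

    % Tautomers T_1 and T_2 share a common deprotonated form A^-:
    %   T_1 <=> A^- + H^+  (K_a^{(1)}),   T_2 <=> A^- + H^+  (K_a^{(2)})
    \[
      K_T \;=\; \frac{[T_2]}{[T_1]} \;=\; \frac{K_a^{(1)}}{K_a^{(2)}}
      \qquad\Longrightarrow\qquad
      \log_{10} K_T \;=\; \mathrm{p}K_a^{(2)} - \mathrm{p}K_a^{(1)}
    \]
    % A network that predicts both pKa values therefore constrains K_T,
    % which is how the two prediction tasks can reinforce each other.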

Prototropic tautomerism is the phenomenon of reversible isomerism, in which isomers (substances having the same qualitative and quantitative composition, but differing in structure and properties) easily transition into each other due to the transfer of a hydrogen atom.

"Tautomeric transformations are very common for organic compounds, being known for about half of all discovered compounds. For example, one of the mechanisms of spontaneous mutations is tied to the tautomeric transformations of DNA nucleic base. That why tautomerism must be taken into account when registering new compounds, during the computer design of new medications and the search for molecules with preconditioned properties," adds Madzhidov.

The results of this research can help increase the precision with which the physicochemical properties of designed medications and materials are predicted, as well as correctly forecast the parameters of chemical reactions.

Credit: 
Kazan Federal University

University of Barcelona study links weekend eating jet lag to obesity

A new study by the University of Barcelona (UB) concluded that irregularity in eating schedules during the weekend, which the authors call eating jet lag, could be related to an increase in body mass index (BMI), the measure of weight relative to the square of height used to determine whether someone's weight is healthy.

These results, published in the science journal Nutrients, were independent of factors such as diet quality, level of physical activity, social jet lag (the difference in sleep schedules between weekdays and weekends) and chronotype (the natural predisposition to a certain sleep schedule).

According to the researchers, this is the first study that shows the importance of regularity in eating schedules -including weekends- to control weight, and could be an element to consider as part of nutrition guidelines to prevent obesity.

The study, jointly led by Maria Izquierdo Pulido, from the Department of Nutrition, Food Sciences and Gastronomy of the UB and INSA-UB, and Trinitat Cambras, from the Department of Biochemistry and Physiology of the UB, is part of the doctoral thesis of the researcher María Fernanda Zerón Rugerio, first author of the article. Other participants in the article are Álvaro Hernáez, from the August Pi i Sunyer Biomedical Research Institute (IDIBAPS) and the Physiopathology of Obesity and Nutrition Networking Biomedical Research Centre (CIBERobn), and Armida Patricia Porras Loaiza, from Universidad de las Américas Puebla (Mexico).

The importance of the biological clock in nutrition

In recent years, research has shown that the body handles calories differently depending on the time of day. Eating late can be related to a higher risk of obesity. According to Maria Izquierdo Pulido, "this difference is related to our biological clock, which organizes our body to understand and metabolize the calories consumed during the day". At night, however, "it gets the body ready for fasting while we sleep".

"As a result -the researcher continues-, when intake takes place regularly, the circadian clock ensures that the body's metabolic pathways act to assimilate nutrients. However, when food is taken at an unusual hour, nutrients can act on the molecular machinery of peripheral clocks (outside the brain), altering the schedule and thus, modifying the body's metabolic functions".

The new study was carried out on a population of 1,106 young people (aged between eighteen and twenty-two) in Spain and Mexico. Researchers analyzed the relation between body mass index and the variability of meal timing on weekends compared with the rest of the week. To do so, the authors used a new marker that captures changes in the times of breakfast, lunch and dinner at weekends: the eating jet lag, presented for the first time in this study.

"Our results show changing the timing of the three meals during the weekend is linked to obesity. The highest impact on the BDI could occur when there is a 3.5-hour difference in eating schedules. After this, the risk of obesity could increase, since we saw individuals who showed a 3.5-hour eating jet lag increased their BDI in 1.3. kg/m2", says María Fernanda Zerón Rugerio.

Lack of synchrony between the social and body time

To explain the link between eating jet lag and obesity, the authors suggest these individuals undergo chronodisruption, that is, a lack of synchrony between the body's internal time and social time. "Our biological clock is like a machine, ready to trigger the same physiological and metabolic responses at the same time of day, every day of the week. Fixed eating and sleep schedules help the body stay organized and promote energy homeostasis. Therefore, people with greater alteration of their schedules have a higher risk of obesity", notes Cambras.

More research is needed to reveal the physiological mechanisms and metabolic alterations behind eating jet lag and its link to obesity. However, the authors highlight the importance of keeping regular eating and sleeping schedules to preserve health and wellbeing. "Apart from diet and physical exercise, which are two pillars when it comes to obesity, another factor to consider is regular eating schedules, since we have shown that they have an impact on our body weight", notes Izquierdo Pulido.

Studying the long term effects of eating jet lag

The study notes the importance of researching the relation between timing irregularity and the evolution of weight over time, as well as repeating the study in populations with different social and economic characteristics, metabolic features and ages. "Variability in eating schedules on weekends compared to weekdays can persist chronically throughout someone's life. Future studies should evaluate the effect of this chronic variability, through the eating jet lag, on the evolution of weight", the researchers conclude.

Credit: 
University of Barcelona

Insecticides are becoming more toxic to honey bees

image: Researchers found that insecticide toxicity has increased in the last 20 years.

Image: 
Nick Sloff, Penn State

UNIVERSITY PARK, Pa. -- During the past 20 years, insecticides applied to U.S. agricultural landscapes have become significantly more toxic -- over 120-fold in some midwestern states -- to honey bees when ingested, according to a team of researchers, who identified rising neonicotinoid seed treatments in corn and soy as the primary driver of this change. The study is the first to characterize the geographic patterns of insecticide toxicity to bees and reveal specific areas of the country where mitigation and conservation efforts could be focused.

According to Christina Grozinger, Distinguished Professor of Entomology and director of the Center for Pollinator Research, Penn State, this toxicity has increased during the same period in which widespread declines in populations of pollinators and other insects have been documented.

"Insecticides are important for managing insects that damage crops, but they can also affect other insect species, such as bees and other pollinators, in the surrounding landscape," she said. "It is problematic that there is such a dramatic increase in the total insecticide toxicity at a time when there is also so much concern about declines in populations of pollinating insects, which also play a very critical role in agricultural production."

The researchers, led by Maggie Douglas, assistant professor of environmental studies, Dickinson College, and former postdoctoral fellow, Penn State, integrated several public databases -- including insecticide use data from the U.S. Geological Survey, toxicity data from the Environmental Protection Agency, and crop acreage data from the U.S. Department of Agriculture -- to generate county-level annual estimates of honey bee "toxic load" for insecticides applied between 1997 and 2012. The team defined toxic load as the number of lethal doses to bees from all insecticides applied to cropland in each county.
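
Conceptually, the toxic load is a unit conversion plus a sum: kilograms of each insecticide applied, divided by the per-bee lethal dose, summed over insecticides. The sketch below is our own illustration of that arithmetic, with placeholder names and LD50 values rather than the study's actual data:

    # Bee toxic load for one county: total lethal doses from all insecticides.
    # LD50 = dose (micrograms per bee) lethal to half of tested bees.
    # Values below are placeholders, not EPA figures.

    LD50_ORAL_UG_PER_BEE = {"insecticide_a": 0.004, "insecticide_b": 14.0}

    def toxic_load(kg_applied: dict) -> float:
        """Sum lethal doses across all insecticides applied in a county."""
        total = 0.0
        for name, kg in kg_applied.items():
            total += (kg * 1e9) / LD50_ORAL_UG_PER_BEE[name]  # 1 kg = 1e9 ug
        return total

    print(f"{toxic_load({'insecticide_a': 2.0, 'insecticide_b': 500.0}):.2e}")
    # -> 5.36e+11 lethal doses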

The researchers generated separate estimates for contact-based toxic loads, such as when a bee is sprayed directly, and oral-based toxic loads, such as when a bee ingests the pollen or nectar of a plant that has recently been treated. They generated a map of predicted insecticide toxic load at the county level. Their results appear today (Jan. 21) in Scientific Reports.

The team found that the pounds of insecticides applied decreased in most counties from 1997 to 2012, while contact-based bee toxic load remained relatively steady. In contrast, oral-based bee toxic load increased by 9-fold, on average, across the U.S. This pattern varied by region, with the greatest increase -- 121-fold -- seen in the Heartland, which the U.S. Department of Agriculture defines as all of Iowa, Illinois and Indiana; most of Missouri; and part of Minnesota, Ohio, Kentucky, Nebraska and South Dakota. The Northern Great Plains had the second highest increase at 53-fold. This region includes all of North Dakota and part of South Dakota, Nebraska, Colorado, Wyoming, Montana and Minnesota.

"This dramatic increase in oral-based toxic load is connected to a shift toward widespread use of neonicotinoid insecticides, which are unusually toxic to bees when they are ingested," said Douglas.

The most widely used family of insecticides in the world, neonicotinoids are commonly used as seed coatings in crops, such as corn and soybean. Some of the insecticide is taken up by the growing plants and distributed throughout their tissues, while the rest is lost to the environment.

"Several studies have shown that these seed treatments have negligible benefits for most crops in most regions," said Grozinger. "Unfortunately, growers often don't have the option to purchase seeds without these treatments; they don't have choices in how to manage their crops."

The researchers suggest that the common method of evaluating insecticide use trends in terms of pounds of insecticides applied does not give an accurate picture of environmental impact.

"The indicator we use -- bee toxic load -- can be considered as an alternative indicator in cases where impacts to bees and other non-target insects is a concern," said Douglas. "This is particularly relevant given that many states have recently developed 'Pollinator Protection Plans' to monitor and address pollinator declines. Ultimately, our work helps to identify geographic areas where in-depth risk assessment and insecticide mitigation and conservation efforts could be focused."

"It is important to note that the calculation of bee toxic load provides information about the total toxicity of insecticides applied to a landscape," said Grozinger. "It does not calculate how much of that insecticide actually comes in contact with bees, or how long the insecticide lasts before it is broken down. Future studies are needed to determine how toxic load associates with changes in populations of bees and other insects."

This research is part of a larger project to investigate the various stressors impacting pollinator populations across the United States. One tool created within this research project is Beescape, which allows users to explore the stressors affecting bees in their own communities.

Credit: 
Penn State