
Exotic properties of helium-methane compounds inside giant planets

image: Dynamical behaviors of He3CH4 at high pressure from quantum mechanical molecular dynamics simulations at (a, d) 1000 K, (b, e) 1900 K and (c, f) 2350 K. Top panel: the averaged mean squared displacement (MSD) of H, He and C atoms at different temperatures. Bottom panel: representations of trajectories at different temperatures in the last 10 ps. Blue, cyan and red dots represent H, He and C atoms, respectively. At 2350 K, the trajectories of H and He atoms overlap with one another, and therefore we only show He trajectories here.

Image: 
©Science China Press

The inner mantles of icy giant planets such as Uranus and Neptune are mainly composed of water, ammonia and methane, while their atmospheres are made of hydrogen and helium. Under the high pressures inside giant planets, it is unclear whether helium can diffuse into the depths and react with the mantle materials. Moreover, exotic phenomena such as superionicity (a partially molten phase) and plasticity (spinning molecules in a regular crystal) might occur in these compounds under the conditions found in giant planets. It is therefore interesting to explore the formation of helium-methane compounds and their dynamical properties under planetary conditions.

Recently, the group of Prof. Jian Sun at Nanjing University, collaborating with researchers from the University of Cambridge and the University of Edinburgh, predicted a novel helium-methane compound, He3CH4, using first-principles calculations and crystal structure prediction methods. He3CH4 is a typical molecular crystal composed of helium atoms and methane molecules. It is predicted to be stable from 55 to 155 GPa, a much wider pressure range than that of pure methane. The insertion of helium atoms not only changes the original packing of pure methane molecules but also suppresses the polymerization of methane into longer hydrocarbons at higher pressures.

The authors further investigated the dynamical behavior of the helium-methane compound at different temperatures using ab initio molecular dynamics and found a series of phase transitions upon heating. First, the compound transforms from a solid state (Fig. 1(a)(d)) to a plastic phase (Fig. 1(b)(e)), in which the methane molecules rotate freely. At higher temperatures, helium atoms start to diffuse while the methane molecules keep rotating (Fig. 1(c)(f)). Such an exotic phase has not been reported before and is expected to have intriguing properties, such as efficient heat transport, that will affect a planet's interior and surface temperature. Several analysis methods based on the charge density were used to determine the nature of the interactions in He3CH4. The results indicate that van der Waals interactions exist between methane molecules, which are weaker than the hydrogen bonds in the helium-water and helium-ammonia compounds. This results in a relatively fragile framework in the He3CH4 compound and an easier transition to the fluid state.
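The onset of diffusion described above is read off the mean squared displacement (MSD) curves shown in the top panels of Fig. 1: a flat MSD indicates atoms vibrating around fixed sites, while an MSD that grows linearly with time signals diffusive, fluid-like motion. As a rough illustration of how such a curve is computed from a molecular dynamics trajectory, here is a minimal Python sketch; the trajectory array below is a hypothetical toy example, not the authors' simulation data.

```python
import numpy as np

def mean_squared_displacement(positions):
    """Average MSD over atoms for each lag time.

    positions: array of shape (n_frames, n_atoms, 3) holding unwrapped
    coordinates from a molecular dynamics trajectory (toy data here).
    """
    n_frames = positions.shape[0]
    msd = np.zeros(n_frames)
    for lag in range(1, n_frames):
        # displacement of every atom over this lag, averaged over all time origins
        disp = positions[lag:] - positions[:-lag]
        msd[lag] = np.mean(np.sum(disp**2, axis=-1))
    return msd

# Toy diffusive trajectory (random walk): the MSD grows roughly linearly with lag,
# as it does for He (and H) in the high-temperature phase described above.
rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=(200, 32, 3)), axis=0)
print(mean_squared_displacement(random_walk)[:5])
```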

Compared with the helium-water and helium-ammonia compounds predicted previously by the same groups, the helium-methane compound shows diffusive behavior over a narrower range of the P-T phase diagram. These recent works suggest that, inert as helium is, it can still react with methane, water and ammonia inside planets, that is, with almost every main component of icy planet mantles. These theoretical predictions should stimulate further experimental investigations and contribute to understanding and building new models of icy giant planets, as well as to the physics and chemistry of helium.

Credit: 
Science China Press

Sleep-wake disturbances can predict recurrent events in stroke survivors

image: The study, conducted in Switzerland, found that having multiple sleep-wake disturbances such as sleep-disordered breathing, extremely long or short sleep duration, insomnia and restless leg syndrome independently and significantly increased the risk of a new cardio-cerebrovascular event in the 2 years following a stroke.

Image: 
EAN

(Vienna, Sunday, 24 May, 2020) Stroke survivors suffering from the burden of combined sleep-wake disturbances are more likely to have another stroke or serious cardio- or cerebrovascular event compared to those without sleep-wake disturbances, according to the results of a scientific study presented today at the European Academy of Neurology (EAN) Virtual Congress.

The study, conducted by Professor Claudio Bassetti and his research team in Switzerland, found that having multiple sleep-wake disturbances such as sleep-disordered breathing, extremely long or short sleep duration, insomnia and restless leg syndrome independently and significantly increased the risk of a new cardio-cerebrovascular event (e.g. stroke, transient ischaemic attack, myocardial infarction) in the 2 years following a stroke. This, say the researchers, suggests that assessing and improving sleep patterns in stroke survivors could improve their long-term outcomes.

"We know that people who have had a stroke often experience sleep disorders, and that these are associated with worse stroke recovery outcomes," said Dr Martijn Dekkers and Dr Simone Duss from the University of Bern in Switzerland, who presented the study today. "What we wanted to learn from this study was whether sleep-wake disturbances in particular are associated with worse outcomes after stroke."

The study included 438 individuals aged 21 to 86 years (average age 65 years) who had been hospitalized after an acute ischaemic stroke (a type of stroke caused by a blocked blood vessel to the brain) or a transient ischaemic attack (a 'mini-stroke' caused by a brief blockage of the blood supply to the brain, with transient clinical symptoms lasting up to 24 hours). The presence and severity of sleep-wake disturbances, such as insomnia, restless leg syndrome and abnormal sleep duration, as well as daytime symptoms such as sleepiness, were recorded for each individual at 1, 3, 12 and 24 months after their stroke. Sleep-disordered breathing was assessed within the first days after the ischaemic stroke or transient ischaemic attack using respirography. The occurrence of new cardio-cerebrovascular events was also recorded during the 2 years of follow-up.

The research team reports that slightly more than one third of the patients reported insomnia symptoms (i.e. ≥ 8 points on the Insomnia Severity Index questionnaire), about 8% fulfilled the clinical diagnostic criteria for restless legs syndrome, 26% suffered from severe sleep-disordered breathing (Apnoea-Hypopnoea Index > 20 events per hour) and about 15% reported extreme sleep durations, with a tendency toward longer sleep following the stroke.

"Using the sleep-related information we collected during the first 3 months after the stroke, we calculated a 'sleep burden index' for each individual, which reflected the presence and severity of sleep-wake disturbances," explained Dr Dekkers. "We then assessed whether the sleep burden index could be used to predict who would go on to have another cardio-cerebrovascular event during the 2 years we followed them after their stroke."

The results suggest that stroke survivors with at least one subsequent cardio-cerebrovascular event had a higher sleep burden index score than patients without a subsequent event 3 months to 2 years post-stroke (Wilcoxon rank-sum test).

Although interventional trials investigating the benefit of treating sleep-wake disturbances after stroke are needed, Dr Duss said that sleep-wake disorders should be more systematically assessed and considered in comprehensive treatment approaches for stroke patients (as also recommended in a recent guideline produced by the EAN in collaboration with three other European societies).

Credit: 
Spink Health

Australian researchers record world's fastest internet speed from a single optical chip

Researchers from Monash, Swinburne and RMIT universities have successfully tested and recorded the world's fastest internet data speed from a single optical chip - capable of downloading 1000 high-definition movies in a split second.

Published in the prestigious journal Nature Communications, these findings have the potential not only to fast-track the next 25 years of Australia's telecommunications capacity, but also to see this home-grown technology rolled out across the world.

In light of the pressures being placed on the world's internet infrastructure, recently highlighted by isolation policies as a result of COVID-19, the research team led by Dr Bill Corcoran (Monash), Distinguished Professor Arnan Mitchell (RMIT) and Professor David Moss (Swinburne) was able to achieve a data speed of 44.2 Terabits per second (Tbps) from a single light source.

This technology has the capacity to support the high-speed internet connections of 1.8 million households in Melbourne, Australia, at the same time, and billions across the world during peak periods.
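The headline figures can be sanity-checked with simple arithmetic. Assuming, purely for illustration, an HD movie of roughly 5 GB and a household connection of about 25 Mbps (both assumptions, not figures from the study), the quoted link rate reproduces the claims above:

```python
# Back-of-the-envelope check of the headline figures (assumptions labelled).
link_rate_bps = 44.2e12            # 44.2 Tbps, from the study

movie_bits = 5 * 8e9               # assume ~5 GB per HD movie (hypothetical)
movies_per_second = link_rate_bps / movie_bits
print(f"~{movies_per_second:.0f} HD movies per second")            # ~1105

household_bps = 25e6               # assume ~25 Mbps per household (hypothetical)
households = link_rate_bps / household_bps
print(f"~{households / 1e6:.1f} million simultaneous households")  # ~1.8 million
```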

Demonstrations of this magnitude are usually confined to a laboratory. But, for this study, researchers achieved these quick speeds using existing communications infrastructure where they were able to efficiently load-test the network.

They used a new device known as a micro-comb, which replaces 80 lasers with a single piece of equipment that is smaller and lighter than existing telecommunications hardware. It was installed in and load-tested using existing infrastructure, which mirrors that used by the NBN.

It is the first time any micro-comb has been used in a field trial, and it marks the highest data rate ever produced from a single optical chip.

"We're currently getting a sneak-peak of how the infrastructure for the internet will hold up in two to three years' time, due to the unprecedented number of people using the internet for remote work, socialising and streaming. It's really showing us that we need to be able to scale the capacity of our internet connections," said Dr Bill Corcoran, co-lead author of the study and Lecturer in Electrical and Computer Systems Engineering at Monash University.

"What our research demonstrates is the ability for fibres that we already have in the ground, thanks to the NBN project, to be the backbone of communications networks now and in the future. We've developed something that is scalable to meet future needs.

"And it's not just Netflix we're talking about here - it's the broader scale of what we use our communication networks for. This data can be used for self-driving cars and future transportation and it can help the medicine, education, finance and e-commerce industries, as well as enable us to read with our grandchildren from kilometres away."

To illustrate the impact optical micro-combs have on optimising communication systems, researchers installed 76.6 km of 'dark' optical fibres between RMIT's Melbourne City Campus and Monash University's Clayton Campus. The optical fibres were provided by Australia's Academic Research Network.

Within these fibres, researchers placed the micro-comb - contributed by Swinburne University, as part of a broad international collaboration - which acts like a rainbow made up of hundreds of high quality infrared lasers from a single chip. Each 'laser' has the capacity to be used as a separate communications channel.

Researchers were able to send maximum data down each channel, simulating peak internet usage, across 4 THz of bandwidth.
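Dividing the aggregate rate by the occupied optical bandwidth gives the spectral efficiency achieved in the trial, a quick check using only the two figures quoted above:

```python
# Spectral efficiency implied by the quoted figures.
total_rate_bps = 44.2e12   # 44.2 Tbps aggregate rate
bandwidth_hz = 4e12        # 4 THz of optical bandwidth

print(total_rate_bps / bandwidth_hz, "bits/s per Hz")  # ~11.05 b/s/Hz
```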

Distinguished Professor Mitchell said reaching the optimum data speed of 44.2 Tbps showed the potential of existing Australian infrastructure. The future ambition of the project is to scale up the current transmitters from hundreds of gigabits per second towards tens of terabits per second without increasing size, weight or cost.

"Long-term, we hope to create integrated photonic chips that could enable this sort of data rate to be achieved across existing optical fibre links with minimal cost," Distinguished Professor Mitchell said.

"Initially, these would be attractive for ultra-high speed communications between data centres. However, we could imagine this technology becoming sufficiently low cost and compact that it could be deployed for commercial use by the general public in cities across the world."

Professor Moss, Director of the Optical Sciences Centre at Swinburne University, said: "In the 10 years since I co-invented micro-comb chips, they have become an enormously important field of research.

"It is truly exciting to see their capability in ultra-high bandwidth fibre optic telecommunications coming to fruition. This work represents a world-record for bandwidth down a single optical fibre from a single chip source, and represents an enormous breakthrough for part of the network which does the heaviest lifting. Micro-combs offer enormous promise for us to meet the world's insatiable demand for bandwidth."

Credit: 
Monash University

New to science newts from Vietnam with an important message for Biodiversity Day 2020

image: One of the newly discovered crocodile newt species, Tylototriton pasmansi.

Image: 
Cuong The Pham

In time for the International Day for Biological Diversity 2020, the date (22 May) set by the United Nations to recognise biodiversity as "the pillars upon which we build civilizations", a new study, published in the peer-reviewed open-access journal ZooKeys, describes two species and one subspecies of crocodile newt from northern Vietnam, all new to science. However, this manifestation of the incredible diversity of life hosted on our planet comes as an essential reminder of how fragile Earth's biodiversity really is.

Until recently, the Black knobby newt (Tylototriton asperrimus) was thought to be a common species inhabiting a large area stretching all the way from central and southern China to Vietnam. Much like most other members of the genus Tylototriton, colloquially referred to as crocodile newts or knobby newts, it has become increasingly popular amongst exotic pet owners and traditional Chinese medicine practitioners. Meanwhile, authorities showed little concern about the long-term survival of the Black knobby newt, precisely because it was found at so many diverse localities. In fact, it is still regarded as Near Threatened, according to the International Union for Conservation of Nature's Red List.

However, over the past decade, the increasing amount of research conducted in the region has revealed that there are, in fact, many species previously unknown to science, most of which would earlier have been assumed to be yet more populations of the Black knobby newt. As a result, today the crocodile newts represent the most species-rich genus within the whole family of salamanders and newts (Salamandridae).

Even though this might sound like great news for Earth's biodiversity, unfortunately it also means that each of those newly discovered species has a much narrower distributional range, making them particularly vulnerable to habitat loss and overcollection. In fact, the actual Black knobby newt turns out to exist only within a small area in China. Coupled with the high demand for crocodile newts in the traditional Chinese medicine markets and the exotic pet trade, this knowledge spells a worrying threat of extinction for the charming 12- to 15-centimetre amphibians.

To help answer the question of exactly how many Vietnamese species are still being mistakenly called the Black knobby newt, the German-Vietnamese research team of the Cologne Zoo (Germany), the universities of Hanoi (Vietnam), Cologne and Bonn (Germany), and the Vietnam Academy of Science and Technology analysed a combination of molecular and detailed morphological characters from specimens collected in northern Vietnam. They then compared them with the Black knobby newt specimen from China used to originally describe the species back in 1930.

Thus, the scientists identified two species (Tylototriton pasmansi and Tylototriton sparreboomi) and one subspecies (Tylototriton pasmansi obsti) previously unknown to science, bringing the total of crocodile newt taxa known from Vietnam to seven. According to the team, their discovery also confirms northern Vietnam to be one of the regions with the highest diversity of crocodile newts.

"The taxonomic separation of a single widespread species into multiple small-ranged taxa (...) has important implications for the conservation status of the original species," comment the researchers.

The newly discovered crocodile newts were named in honour of the specialist on salamander chytrid fungi and co-discoverer Prof. Dr. Frank Pasmans and, sadly, the recently deceased salamander enthusiasts and experts Prof. Fritz-Jürgen Obst and Prof. Dr. Max Sparreboom.

In light of their findings, the authors conclude that the current and "outdated" Near Threatened status of the Black knobby newt needs to be reassessed to reflect the continuous emergence of new species in recent years, as well as the "severe threats from international trade and habitat loss, which have taken place over the last decade."

Meanwhile, thanks to the commitment to biodiversity conservation of Marta Bernardes, lead author of the study and a PhD Candidate at the University of Cologne under the supervision of senior author Prof Dr Thomas Ziegler, all crocodile newts were included in the list of internationally protected species by the Convention on International Trade in Endangered Species (CITES) last year.

Today, some of the threatened crocodile newt species from Vietnam are already kept at the Cologne Zoo as part of conservation breeding projects. Such is the case for the Ziegler's crocodile newt (Tylototriton ziegleri), currently listed as Vulnerable on the IUCN Red List and the Vietnamese crocodile newt (Tylototriton vietnamensis), currently considered as Endangered. Fortunately, the latter has been successfully bred at Cologne Zoo and an offspring from Cologne was recently repatriated.

Credit: 
Pensoft Publishers

First fossil nursery of the great white shark discovered

image: Set of teeth of today's white shark and a reconstructed set of teeth of a fossil great white shark.

Image: 
©Jaime Villafaña/Juergen Kriwet

The great white shark is one of the most charismatic, but also one of the most infamous, sharks. Despite its importance as a top predator in marine ecosystems, it is considered threatened with extinction; its very slow growth and late reproduction with only a few offspring are - in addition to anthropogenic pressures - responsible for this.

Young white sharks are born in designated breeding areas, where they are protected from other predators until they are large enough not to fear competitors any more. Such nurseries are essential for maintaining stable and sustainable breeding population sizes, have a direct influence on the spatial distribution of populations and ensure the survival and evolutionary success of species. Researchers have therefore intensified the search for such nurseries in recent years in order to mitigate current population declines of sharks with suitable protection measures. "Our knowledge about current breeding grounds of the great white shark is still very limited, however, and palaeo-nurseries are completely unknown", explains Jaime Villafaña from the University of Vienna.

He and his colleagues statistically analysed 5- to 2-million-year-old fossil teeth of this fascinating shark, which were found at several sites along the Pacific coast of Chile and Peru, to reconstruct body-size distribution patterns of the great white shark in the past. The results show that body sizes varied considerably along the South American paleo-Pacific coast. One of these localities in northern Chile, Coquimbo, revealed the highest percentage of young sharks and the lowest percentage of "teenagers", while sexually mature animals were completely absent.
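Reconstructions of this kind typically estimate each shark's total length from the height of the fossil tooth crown and then bin individuals into age classes. The short sketch below only illustrates that workflow; the regression coefficients and size cut-offs are hypothetical placeholders, not the values used in the study.

```python
# Illustrative workflow: tooth crown height -> estimated total length -> age class.
# Coefficients and cut-offs are hypothetical placeholders, not the study's values.

def estimated_total_length_cm(crown_height_mm, slope=9.0, intercept=10.0):
    """Linear regression of total length on tooth crown height (hypothetical fit)."""
    return slope * crown_height_mm + intercept

def age_class(total_length_cm):
    if total_length_cm < 175:    # young-of-the-year (hypothetical bound)
        return "young"
    if total_length_cm < 300:    # juveniles, the "teenagers" (hypothetical bound)
        return "juvenile"
    return "adult"

teeth_mm = [15, 18, 22, 30, 45]  # hypothetical fossil tooth crown heights
for height in teeth_mm:
    length = estimated_total_length_cm(height)
    print(f"{height} mm -> {length:.0f} cm, {age_class(length)}")
```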

This first undoubted paleo-nursery of the great white shark is of enormous importance. It dates from a time when the climate was much warmer than today, which can be considered analogous to the global warming expected in the future. "If we understand the past, it will enable us to take appropriate protective measures today to ensure the survival of this top predator, which is of utmost importance for ecosystems," explains palaeobiologist Jürgen Kriwet: "Our results indicate that rising sea surface temperatures will change the distribution of fish in temperate zones and shift these important breeding grounds in the future".

This would have a direct impact on population dynamics of the great white shark and would also affect its evolutionary success in the future. "Studies of past and present nursery grounds and their response to temperature and paleo-oceanographic changes are essential to protect such ecological key species," concluded Jürgen Kriwet.

Credit: 
University of Vienna

Oriented hexagonal boron nitride fosters new type of information carrier

image: The surface represents the low energy bands of the bilayer graphene around the K valley and the colour of the surface indicates the magnitude of Berry curvature, which acts as a new information carrier. When the top and bottom hBN are out-of-phase with each other (a) the Berry curvature magnitude is very small and is confined to the K-valley. However, when the top and bottom hBN are in phase with each other (b) the asymmetry induced between the layers of bilayer graphene results in large Berry curvature which is widely spread around the K-valley of the reciprocal space.

Image: 
JAIST

Valleytronics gives rise to valley current, a stable, dissipationless current driven by a pseudo-magnetic field known as the Berry curvature. This enables valleytronics-based information processing and storage technology. A prerequisite for the emergence of Berry curvature is either broken inversion symmetry or broken time-reversal symmetry. Thus, two-dimensional materials such as transition metal dichalcogenides and gated bilayer graphene are widely studied for valleytronics, as they exhibit broken inversion symmetry.

In most studies of graphene and other two-dimensional materials, these materials are encapsulated with hexagonal boron nitride (hBN), a wide-band-gap material with a lattice parameter comparable to that of graphene. Encapsulation with an hBN layer protects graphene and other two-dimensional materials from unwanted adsorption of stray molecules while keeping their properties intact. hBN also acts as a smooth two-dimensional substrate, unlike SiO2, which is highly non-uniform, thereby increasing the mobility of carriers in graphene. However, most valleytronics studies on hBN-encapsulated bilayer graphene have not taken into account the effect of the hBN layer in breaking the layer symmetry of bilayer graphene and inducing Berry curvature.

This is why Japan Advanced Institute of Science and Technology (JAIST) postdoc Afsal Kareekunnan, senior lecturer Manoharan Muruganathan and Professor Hiroshi Mizuta decided it was vital to take into account the effect of hBN as a substrate and as an encapsulation layer on the valleytronics properties of bilayer graphene. By using first-principles calculations, they have found that for hBN/bilayer graphene commensurate heterostructures, the configuration, as well as the orientation of the hBN layer, has an immense effect on the polarity as well as the magnitude of the Berry curvature.

For a non-encapsulated hBN/bilayer graphene heterostructure, where hBN is present only at the bottom, the layer symmetry is broken due to the difference in the potential experienced by the two layers of the bilayer graphene. This layer asymmetry induces a non-zero Berry curvature. However, encapsulating the bilayer graphene with hBN such that the top and bottom hBN layers are out of phase with each other nullifies the effect of hBN and drives the system towards symmetry, reducing the magnitude of the Berry curvature. The small Berry curvature that remains is a feature of pristine bilayer graphene, where spontaneous charge transfer from the valleys to one of the layers results in a slight asymmetry between the layers, as reported by the group earlier. Nonetheless, encapsulating bilayer graphene with the top and bottom hBN in phase with each other enhances the effect of hBN, leading to an increase in the asymmetry between the layers and a large Berry curvature. This is due to the asymmetric potential experienced by the two layers of bilayer graphene from the top and bottom hBN. The group also found that the magnitude and the polarity of the Berry curvature can be tuned in all the above-mentioned cases by applying an out-of-plane electric field.

"We believe that, from both theoretical and experimental perspective, such precise analysis of the effect of the use of hBN both as a substrate and as an encapsulation layer for graphene-based devices gives deep insight into the system which has great potential to be an ideal valleytronic material," Professor Mizuta said.

Credit: 
Japan Advanced Institute of Science and Technology

BCN MedTech presents an automatic method to detect and segment the intrauterine cavity

image: Anatomical representation and virtual reality system of the laser photocoagulation procedure in TTTS fetal surgery. The virtual representation includes the fetoscope (dark blue), placenta (beige), vessels (red), intrauterine cavity (grey) and maternal soft tissue (blue).

Image: 
UPF

Twin-to-twin transfusion syndrome (TTTS) occurs in around 10-15% of pregnancies with twins that share the same placenta. Typically, this syndrome appears before 24 weeks' gestation due to abnormal vascular communications located on the surface of the placenta. As a result, blood circulation is not balanced between the two twins, dramatically decreasing their chances of survival.

Fetoscopic laser photocoagulation is the most effective treatment for this syndrome and it consists of closing abnormal vascular connections located on the surface of the placenta to completely separate the circulation of blood to the two twins, thus preventing complications related to blood flow imbalance, such as death by cardiac overload, premature delivery and miscarriage.

The manoeuvrability of the fetoscope inserted through the uterine wall of the mother and the ability to burn all vessels that require sealing depends on the proper selection of the fetoscope entry point on the surface of the intrauterine cavity. Planning the best insertion point before the operation requires a good understanding of the patient's anatomy, which can be achieved using a virtual representation of the mother's uterus, via magnetic resonance imaging.

A study recently published in the advanced online edition of the journal IEEE Transactions on Medical Imaging presents the first automatic method to detect and segment the intrauterine cavity via three views (axial, sagittal and coronal) of the MRI by means of artificial intelligence and deep learning techniques.

The study was conducted by Miguel Ángel González Ballester, ICREA research professor with the Department of Information and Communication Technologies (DTIC) at UPF, together with Jordina Torrents-Barrena, first author of the study, and Gemma Piella and Mario Ceresa, members of the UPF BCN MedTech Unit. Eduard Gratacós and Elisenda Eixarch, members of the Fetal i+D Fetal Medicine Research Center, BCNatal-Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, are co-authors of the study and were responsible for its clinical side.

"The methodology presented uses neural networks based on the new paradigm of capsules to successfully capture the interdependency of the anatomy present in the MRI, particularly for unique class instances (anatomies), such as the intrauterine cavity and/or placenta", explains Jordina Torrents-Barrena, first author of the paper.

"The method designed is based on a reinforcement learning framework that uses capsules to delimit the location of the uterus. A capsule architecture is subsequently designed to segment (or refine) the whole intrauterine cavity", Torrents-Barrena adds. The latter network encodes the most discriminatory and robust features in the image.

The proposed method is evaluated by 13 performance measures and is also compared to 15 neural networks that have been previously published in the literature. "Our artificial intelligence method has been trained using magnetic resonance imaging from 71 pregnancies", Torrents-Barrena affirms.

"Having a three-dimensional representation allows us to evaluate different entry points and choose the one that offers the best visibility of all placental vessels with the slightest movement", comments Elisenda Eixarch, co-author of the study. "Undoubtedly, the application of this technology will allow us to move towards safer, more precise surgery", she adds.

On average, the methodology presented achieves a segmentation performance of over 91% across all tests and comparisons, highlighting the potential of this approach for use in daily clinical practice as a surgical planning method.
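The "segmentation performance" quoted here refers to agreement between the automatic segmentation and expert annotations; the paper evaluates 13 such measures. The Dice coefficient shown below is one commonly used overlap metric of this kind, given purely as an illustration with toy masks, not the study's data or its exact set of measures.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between a predicted and a reference binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Toy 2D masks standing in for an intrauterine-cavity segmentation and its reference.
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool);  pred[2:6, 3:7] = True
print(f"Dice = {dice_coefficient(pred, truth):.2f}")   # 0.75
```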

Credit: 
Universitat Pompeu Fabra - Barcelona

Insight into mechanism of treatment-resistant gonorrhea sets stage for new antibiotics

image: Dr. Christopher Davies (left) and Dr. Avinash Singh (right) of the Medical University of South Carolina are co-authors of the May 22, 2020 article in the Journal of Biological Chemistry.

Image: 
Medical University of South Carolina

Due to the spread of antibiotic-resistant strains of Neisseria gonorrhoeae, existing treatments for gonorrhea, the sexually transmitted infection caused by the bacterium, are no longer effective. In the absence of a vaccine, there is an urgent need to develop novel treatment options.

"It's becoming much more difficult to treat gonorrhea infections as a result of antibiotic resistance," said Christopher Davies, Ph.D., a professor in the Department of Biochemistry and Molecular Biology at the Medical University of South Carolina. "Antibiotics that used to work against the bug are no longer effective."

Davies and his team report surprising findings about antibiotic resistance in the May 22, 2020 issue of the Journal of Biological Chemistry, showing that mutations in an essential protein responsible for resistance affect the binding of the antibiotic to the microbe. Rather than directly blocking binding of the antibiotic, the mutations prevent movements in the protein that help form the binding site for the antibiotic. The findings could offer a strategy for developing new treatments that overpower antibiotic resistance. Avinash Singh, Ph.D., a postdoctoral fellow in the Davies laboratory, is lead author of the article.

N. gonorrhoeae acquires resistance to antibiotics via interactions with so-called commensal species of Neisseria that colonize mucosal surfaces, such as those in the throat and genital tract, but do not cause disease. These species develop resistance following exposure to antibiotics that someone has taken for an infection. The commensal bacteria then transfer sections of genes responsible for antibiotic resistance not only among themselves, but also to disease-causing N. gonorrhoeae during gonorrhea infections. Once N. gonorrhoeae have incorporated these genes, they develop resistance and are no longer treatable with current antibiotics.

Overpowering that resistance will require more than a genetic understanding of how resistance arises.

"We need to understand what that resistance means at the molecular level," said Davies. "Only then can we address antimicrobial resistance by designing new antimicrobials to replace those that are no longer effective."

In recent years, cephalosporins have been the main drugs used to treat gonorrhea. Like penicillin, they target essential bacterial proteins, called penicillin-binding proteins (PBPs), that are responsible for the construction of cell walls. Bacteria need their cell walls to maintain cell shape and integrity. When a PBP is inhibited by a cephalosporin, the bacterial wall develops holes, resulting in the death of the microbe.

Gonorrhea can become resistant to cephalosporins when the PBP drug target mutates. Davies' group looked at the effect of those mutations on the structure of a PBP called PBP2 from a cephalosporin-resistant strain of gonorrhea.

The researchers compared the molecular structure of PBP2 in the antibiotic-resistant strain to that of an antibiotic-susceptible strain.

To their surprise, they found that the mutations prevented changes in the shape of PBP2 that are necessary for the antibiotic to bind to the protein.

Typically, mutations that confer antibiotic resistance occur in the so-called active site of proteins and block binding. But in PBP2, several of the mutations are quite a distance away. These distant mutations seem to be restricting shape changes in PBP2 that normally allow the antibiotic to interact with the protein and kill the microbe.

Once scientists understand the molecular mechanisms behind antibiotic resistance, they will be able to create new generations of antibiotics designed to avoid or overpower these mechanisms.

Knowing the important mutations that cause resistance will also allow treatments to be tailored for specific strains of N. gonorrhoeae. Patterns of resistance mutations could then be used to develop diagnostic kits to identify the strain with which a patient is infected, enabling doctors to prescribe the most appropriate antibiotics.

Credit: 
Medical University of South Carolina

MSU scientists solve half-century-old magnesium dimer mystery

Magnesium dimer (Mg2) is a fragile molecule consisting of two weakly interacting atoms held together by the laws of quantum mechanics. It has recently emerged as a potential probe for understanding fundamental phenomena at the intersection of chemistry and ultracold physics, but its use has been thwarted by a half-century-old enigma--five high-lying vibrational states that hold the key to understanding how the magnesium atoms interact but have eluded detection for 50 years.

The lowest fourteen Mg2 vibrational states were discovered in the 1970s, but both early and recent experiments should have observed a total of nineteen states. Like a quantum cold case, experimental efforts to find the last five failed, and Mg2 was almost forgotten. Until now.

Piotr Piecuch, Michigan State University Distinguished Professor and MSU Foundation Professor of chemistry, along with College of Natural Science Department of Chemistry graduate students Stephen H. Yuwono and Ilias Magoulas, developed new, computationally derived evidence that not only made a quantum leap in first-principles quantum chemistry, but finally solved the 50-year-old Mg2 mystery.

Their findings were recently published in the journal Science Advances.

"Our thorough investigation of the magnesium dimer unambiguously confirms the existence of 19 vibrational levels," said Piecuch, whose research group has been active in quantum chemistry and physics for more than 20 years. "By accurately computing the ground- and excited-state potential energy curves, the transition dipole moment function between them and the rovibrational states, we not only reproduced the latest laser-induced fluorescence (LIF) spectra, but we also provided guidance for the future experimental detection of the previously unresolved levels."

So why were Piecuch and his team able to succeed where others had failed for so many years?

The persistence of Yuwono and Magoulas certainly revived interest in the Mg2 case, but the answer lies in the team's brilliant demonstration of the predictive power of modern electronic structure methodologies, which came to the rescue when experiments encountered insurmountable difficulties.

"The presence of collisional lines originating from one molecule hitting another and the background noise muddied the experimentally observed LIF spectra," Piecuch explained. "To make matters worse, the elusive high-lying vibrational states of Mg2 that baffled scientists for decades dissipate into thin air when the molecule starts rotating."

Instead of running costly experiments, Piecuch and his team developed efficient computational strategies that simulated those experiments, and they did it better than anyone had before.

Much as the vibrational states of Mg2 are quantized, with no in-between values allowed, in-between approximations were not acceptable either. The team solved the electronic and nuclear Schrödinger equations, tenets of quantum physics that describe molecular motions, with almost complete accuracy.

"The majority of calculations in our field do not require the high accuracy levels we had to reach in our study and often resort to less expensive computational models, but we provided compelling evidence that this would not work here," Piecuch said. "We had to consider every conceivable physical effect and understand the consequences of neglecting even the tiniest details when solving the quantum mechanical equations."

Their calculations reproduced the experimentally derived vibrational and rotational motions of Mg2 and the observed LIF spectra with remarkable precision--on the order of 1 cm-1, to be exact. This provided the researchers with confidence that their predictions regarding the magnesium dimer, including the existence of the elusive high-lying vibrational states, were firm.
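To put the quoted 1 cm-1 agreement in perspective, the wavenumber unit used in spectroscopy can be converted into frequency and energy using standard physical constants only; a quick sketch:

```python
# Convert the quoted ~1 cm^-1 accuracy into frequency and energy units.
from scipy.constants import c, h, e   # speed of light, Planck constant, elementary charge

wavenumber_per_m = 1e2                # 1 cm^-1 expressed in m^-1
freq_hz = c * wavenumber_per_m        # photon frequency corresponding to 1 cm^-1
energy_mev = h * freq_hz / e * 1e3    # same quantity as an energy in meV

print(f"1 cm^-1 ~ {freq_hz / 1e9:.1f} GHz ~ {energy_mev:.3f} meV")
# -> 1 cm^-1 ~ 30.0 GHz ~ 0.124 meV
```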

Yuwono and Magoulas were clearly excited about the groundbreaking project, but emphasized they had initial doubts whether the team would be successful.

"In the beginning, we were not even sure if we could pull this investigation off, especially considering the number of electrons in the magnesium dimer and the extreme accuracies required by our state-of-the-art computations," said Magoulas, who has worked in Piecuch's research group for more than four years and teaches senior level quantum chemistry courses at MSU.

"The computational resources we had to throw at the project and the amount of data we had to process were immense--much larger than all of my previous computations combined," added Yuwono, who also teaches physical chemistry courses at MSU and has worked in Piecuch's research group since 2017.

The case of the high-lying vibrational states of Mg2 that evaded scientists for half a century is finally closed, but the details of the computations that cracked it are completely open and accessible on the Science Advances website. Yuwono, Magoulas, and Piecuch hope that their computations will inspire new experimental studies.

"Quantum mechanics is a beautiful mathematical theory with a potential of explaining the intimate details of molecular and other microscopic phenomena," Piecuch said. "We used the Mg2 mystery as an opportunity to demonstrate that the predictive power of modern computational methodologies based on first-principles quantum mechanics is no longer limited to small, few-electron species."

Credit: 
Michigan State University

Large-scale analysis of protein arginine methylation by mass spectrometry

Methylation (namely addition of a methyl group) of arginine amino acid residues of proteins is a post-translational modification (PTM) catalyzed by a family of nine enzymes called Protein Arginine Methyl-Transferases (PRMTs).

PRMTs have been gaining increasing attention in the scientific landscape due to their role in several essential physiological processes and their implication in various diseases, including cancer and neurological disorders; loss of PRMT1 is lethal at the early developmental stages, while its overexpression is frequently observed in different tumor types and correlates with poor patient prognosis, suggesting that inhibition of PRMTs may represent an effective therapeutic approach in oncology. Moreover, levels of this PTM are deregulated in different tumor types, and both local and global changes in the PTM of the DNA-associated proteins - the histones - are linked to Amyotrophic Lateral Sclerosis. Owing to the large body of scientific evidence indicating their role in cell pathophysiology, several PRMT inhibitors have been designed and developed as a potential new class of drugs. For instance, a potent, reversible type I PRMT inhibitor has been shown to have anti-tumor effects in human cancer models, and some of the candidate molecules have entered clinical trials for solid tumors and lymphomas.

"Although it was first described in 1968," says Dr Tiziana Bonaldi, Group Leader at the European Institute of Oncology, "for almost 30 years, very little was known about the extent of protein arginine methylation, its effect on protein activity, and its biological role. The first PRMT, capable of catalyzing arginine methylation, was discovered in 1996 and, since then, nine proteins exerting the same functions have been discovered. However, inefficient analytical approaches have largely limited a comprehensive understanding of their biological function. Quantitative Mass spectrometry has emerged as the ideal analytical strategy to study the extent of protein arginine-methylation in model systems and identify PRMT targets, thus contributing substantial knowledge in this field."

In their article published in Current Protein & Peptide Science, Dr. Bonaldi and co-workers offer an overview of state-of-the-art arginine methyl-proteomics, describing the innovations that led - from the description by Matthias Mann's group of the first high-quality methyl-proteome in 2004 - to the latest studies that profile protein-methylation events occurring on hundreds of cellular proteins. Throughout this review, the authors describe the advances made both in the biochemical methods and in the computational methods for Mass spectrometry data analysis and for the identification of sites of arginine methylation, discussing the pros and cons of the most common strategies employed.

Furthermore, relevant issues related to protein-arginine methylation analysis that are still under development are also discussed, such as the discrimination of symmetric and asymmetric arginine-di-methylation from Mass spectrometry fragmentation spectra. "These two modifications have identical mass, yet they are catalyzed by different PRMTs and have substantially different biological outcomes," explains Bonaldi; "Indeed, even though, for instance, both have a role in regulation of transcription, while asymmetric di-methylation is activating, symmetric di-methylation is repressive. Therefore, being able to distinguish between the two processes is crucial."

Finally, major emphasis is devoted to the heavy methyl SILAC strategy, a variation of the more conventional SILAC (Stable Isotope Labelling with Amino acids in Cell culture). "The heavy methyl SILAC strategy was designed to increase the confidence of in vivo arginine methyl-peptide identification by Mass spectrometry," continues Bonaldi, "and its use will be facilitated by the recent development of ad hoc algorithms tailored to the processing of heavy methyl SILAC datasets, such as MethylQuant and hmSEEKER."

Importantly, the authors conclude that optimization of the currently available analytical approaches and their systematic application will play a key role in the future research on the involvement of protein methylation in biological processes, providing critical insights in the related cellular biology processes and likely offering potential novel targets to be exploited in a clinical context.

Credit: 
Bentham Science Publishers

Long-term resilience of Earth's tropical forests in warmer world

A long-term assessment of the sensitivity of hundreds of tropical forest plots to increasing temperatures brings encouraging news: in the long run, Earth's tropical forests may be more resilient to a moderately warming world than short-term predictions have suggested. According to the new biome-wide study, tropical forests worldwide and their carbon storage capacities are likely to remain intact in moderate climate warming scenarios - so long as they're not further impacted by other human disturbances such as clearance, logging or fires. As plants and trees grow, they convert inorganic carbon into biomass, effectively storing vast amounts of atmospheric carbon dioxide in terrestrial flora. Understanding the land-atmosphere carbon flux of tropical forests - where nearly 40% of the world's carbon-hoarding vegetation resides - is particularly important to understanding potential climate change scenarios. However, the long-term sensitivity of tropical forests to climate warming and the effects of increased temperatures on carbon fluxes are poorly constrained, representing some of the greatest sources of uncertainty in global climate change predictions. The long-term thermal sensitivity of tropical forests is often derived from short-term and inter-annual observations, but sensitivity at these scales may lead to overestimates of longer-term responses to climate change. To assess long-term climate controls on tropical forests directly, Martin Sullivan and colleagues measured biomass carbon and carbon flux in 590 globally distributed, permanent tropical forest plots. The results identify maximum temperature as the most important predictor of overall biomass; it depresses growth rates and reduces carbon storage by killing trees under hot, dry conditions. These adverse effects were most prominent when daytime high temperatures exceeded 32.2 degrees Celsius (°C). Stabilizing global temperatures at 2 °C above pre-industrial levels would push 71% of tropical forests beyond this threshold. Nevertheless, Sullivan et al. reveal greater long-term thermal resilience under moderate warming conditions than previous studies have implied, though, they say, this thermal adaptation potential may not be fully realized in all forests' future responses because of factors such as the speed of temperature rise exceeding species' adaptive capabilities. The authors also emphasize that achieving the biome-wide climate resilience potential they document depends both on limiting heating and on large-scale conservation and restoration of forests.

Credit: 
American Association for the Advancement of Science (AAAS)

Mechanism behind upper motor degeneration revealed

CHICAGO --- Scientists from Northwestern Medicine and the University of Belgrade have pinpointed the electrophysiological mechanism behind upper motor neuron (UMN) disease, unlocking the door to potential treatments for amyotrophic lateral sclerosis (ALS) and other neurodegenerative diseases, such as Hereditary Spastic Paraplegia and Primary Lateral Sclerosis.

The study, published in Frontiers in Molecular Neuroscience on May 19, 2020, reveals the molecular underpinnings of electrical signals from potassium and sodium ion channels within the neuron's cell membrane.

Maintaining stability is the primary goal of healthy UMNs. Without it, cells begin to degenerate. Like the game of telephone, when UMNs process signals from neighboring neurons incorrectly, the message fails to reach the motor neurons in the spine, which instruct muscles to move.

"Voltage-gated ion channels, as a family, are involved in many neurodegenerative diseases, but their function, modulation, and expression profile are very complicated," said senior study author Hande Ozdinler, associate professor of neurology at Northwestern University Feinberg School of Medicine.

To identify the precise areas within the ion channels where the dysfunction began, the investigators, led by Dr. Marco Martina, associate professor of physiology and of psychiatry and behavioral sciences at Feinberg, recorded the electrical signals of cells in vivo at the earliest stage of ALS to measure the neurons' reaction to external stimuli. The team also examined the genes encoding the diseased UMNs' ion channels to measure changes in channel composition and subunits and to determine whether the cause of degeneration was intrinsic.

The data revealed that, early in the disease, UMNs were unable to maintain the balance of excitation and inhibition within the cortical circuitry, and that this behavior was due to dynamic changes in key ion channels and their subunits.

"We all knew that ion channels were important, but we did not know which subunit or ion channels were important or involved in the shifting balance from health to disease in UMNs," said Ozdinler, a member of the Chemistry of Life Processes Institute. "When we received the exon microarray results, it was obvious that the ion channels were perturbed very early in the disease, potentially initiating the first wave of vulnerability."

By identifying the molecular underpinnings of the early stages of neurodegeneration, the study also identified potential targets for future treatment strategies.

"There are already drugs out there for some of those ion channels and subunits, but we never thought that we could use them for ALS because we did not know the mechanism," said Ozdinler. "This is the information we needed to move forward. "Now, we may begin to investigate whether we can utilize some of these drugs, already approved by the FDA, for motor neuron diseases."

Credit: 
Northwestern University

Scientists finally crack nature's most common chemical bond

image: A catalyst (center) based on iridium (blue ball) can snip a hydrogen atom (white balls) off a terminal methyl group (upper and lower left) to add a boron-oxygen compound (pink and red) that is easily swapped out for more complicated chemical groups. The reaction works on simple hydrocarbon chains (top reaction) or more complicated carbon compounds (bottom reaction). The exquisite selectivity of this catalytic reaction is due to the methyl group (yellow) that has been added to the iridium catalyst. The black balls are carbon atoms; red is oxygen; pink is boron. (UC Berkeley image by John Hartwig)

Image: 
John Hartwig, UC Berkeley

The most common chemical bond in the living world -- that between carbon and hydrogen -- has long resisted attempts by chemists to crack it open, thwarting efforts to add new bells and whistles to old carbon-based molecules.

Now, after nearly 25 years of work by chemists at the University of California, Berkeley, those hydrocarbon bonds -- two-thirds of all the chemical bonds in petroleum and plastics -- have fully yielded, opening the door to the synthesis of a large range of novel organic molecules, including drugs based on natural compounds.

"Carbon-hydrogen bonds are usually part of the framework, the inert part of a molecule," said John Hartwig, the Henry Rapoport Chair in Organic Chemistry at UC Berkeley. "It has been a challenge and a holy grail of synthesis to be able to do reactions at these positions because, until now, there has been no reagent or catalyst that will allow you to add anything at the strongest of these bonds."

Hartwig and other researchers had previously shown how to add new chemical groups at C-H bonds that are easier to break, but they could only add them to the strongest positions of simple hydrocarbon chains.

In the May 15 issue of the journal Science, Hartwig and his UC Berkeley colleagues described how to use a newly designed catalyst to add functional chemical groups to the hardest of the carbon-hydrogen bonds to crack: the bonds, typically at the head or tail of a molecule, where a carbon has three attached hydrogen atoms, what's called a methyl group (CH3).

"The primary C-H bonds, the ones on a methyl group at the end of a chain, are the least electron-rich and the strongest," he said. "They tend to be the least reactive of the C-H bonds."

UC Berkeley postdoctoral fellow Raphael Oeschger discovered a new version of a catalyst based on the metal iridium that opens up one of the three C-H bonds at a terminal methyl group and inserts a boron compound, which can be easily replaced with more complex chemical groups. The new catalyst was more than 50 times more efficient than previous catalysts and just as easy to work with.

"We now have the ability to do these types of reactions, which should enable people to rapidly make molecules that they would not have made before," Hartwig said. "I wouldn't say these are molecules that could not have been made before, but people wouldn't make them because it would take too long, too much time and research effort, to make them."

The payoff could be huge. Each year, nearly a billion pounds of hydrocarbons are used by industry to make solvents, refrigerants, fire retardants and other chemicals and are the typical starting point for synthesizing drugs.

'Expert surgery' on hydrocarbons

To prove the utility of the catalytic reaction, UC Berkeley postdoctoral fellow Bo Su and his coworkers in the lab used it to add a boron compound, or borane, to a terminal, or primary, carbon atom in 63 different molecular structures. The borane can then be swapped out for any number of chemical groups. The reaction specifically targets terminal C-H bonds, but works at other C-H bonds when a molecule doesn't have a terminal C-H.

"We make a boron-carbon bond using boranes as reagents -- they're just a couple steps away from ant poison, boric acid -- and that carbon-boron bond can be converted into many different things," Hartwig said. "Classically, you can make a carbon-oxygen bond from that, but you can also make a carbon-nitrogen bond, a carbon-carbon bond, a carbon-fluorine bond or other carbon-halogen bonds. So, once you make that carbon-boron bond, there are many different compounds that can be made."

Organic chemist Varinder Aggarwal from the University of Bristol referred to the catalytic reaction as "expert surgery" and characterized UC Berkeley's new technique as "sophisticated and clever," according to the magazine Chemical and Engineering News.

One potential application, Hartwig said, is altering natural compounds -- chemicals from plants or animals that have useful properties, such as antibiotic activity -- to make them better. Many pharmaceutical companies today are focused on biologics -- organic molecules, such as proteins, used as drugs -- that could also be altered with this reaction to improve their effectiveness.

"In the normal course, you would have to go back and remake all those molecules from the start, but this reaction could allow you to just make them directly," Hartwig said. "This is one type of chemistry that would allow you to take those complex structures that nature makes that have an inherent biological activity and enhance or alter that biological activity by making small changes to the structure."

He said that chemists could also add new chemical groups to the ends of organic molecules to prep them for polymerization into long chains never before synthesized.

"It could enable you to take molecules that would be naturally abundant, biosourced molecules like fatty acids, and be able to derivatize them at the other end for polymer purposes," he said.

UC Berkeley's long history with C-H bonds

Chemists have long tried to make targeted additions to carbon-hydrogen bonds, a reaction referred to as C-H activation. One still unachieved dream is to convert methane -- an abundant, but often wasted, byproduct of oil extraction and a potent greenhouse gas -- into an alcohol called methanol that can be used as a starting point in many chemical syntheses in industry.

In 1982, Robert Bergman, now a UC Berkeley professor emeritus of chemistry, first showed that an iridium atom could break a C-H bond in an organic molecule and insert itself and an attached ligand between the carbon and hydrogen. While a major advance in organic and inorganic chemistry, the technique was impractical -- it required one iridium atom per C-H bond. Ten years later, other researchers found a way to use iridium and other so-called transition metals, like tungsten, as a catalyst, where a single atom could break and functionalize millions of C-H bonds.

Hartwig, who was a graduate student with Bergman in the late 1980s, continued to bang on unreactive C-H bonds and in 2000 published a paper in Science describing how to use a rhodium-based catalyst to insert boron at terminal C-H bonds. Once the boron was inserted, chemists could easily swap it out for other compounds. With subsequent improvements to the reaction and changing the metal from rhodium to iridium, some manufacturers have used this catalytic reaction to synthesize drugs by modifying different types of C-H bonds. But the efficiency for reactions at methyl C-H bonds at the ends of carbon chains remained low, because the technique required that the reactive chemicals also be the solvent.

With the addition of the new catalytic reaction, chemists can now stick chemicals in nearly any type of carbon-hydrogen bond. In the reaction, iridium snips off a terminal hydrogen atom, and the boron replaces it; another boron compound floats away with the released hydrogen atom. The team attached a new ligand to iridium -- a methyl-substituted phenanthroline called 2-methylphenanthroline -- that accelerated the reaction by 50 to 80 times over previous results.

Hartwig acknowledges that these experiments are a first step. The reactions vary from 29% to 85% in their yield of the final product. But he is working on improvements.

"For us, it shows, yeah, you can do this, but we will need to make even better catalysts. We know that the ultimate goal is attainable if we can further increase our rates by a factor of 10, let's say. Then, we should be able to increase the complexity of molecules for this reaction and achieve higher yields," Hartwig said. "It is a little bit like a four-minute mile. Once you know that something can be accomplished, many people are able to do it, and the next thing you know, we're running a three-and-three-quarter-minute mile."

Credit: 
University of California - Berkeley

Tracking the tinderbox: Stanford scientists map wildfire fuel moisture across western US

image: Maps display the amount of water in plants relative to dry biomass across the American West.

Image: 
Krishna Rao

As California and the American West head into fire season amid the coronavirus pandemic, scientists are harnessing artificial intelligence and new satellite data to help predict blazes across the region.

Anticipating where a fire is likely to ignite and how it might spread requires information about how much burnable plant material exists on the landscape and its dryness. Yet this information is surprisingly difficult to gather at the scale and speed necessary to aid wildfire management.

Now, a team of experts in hydrology, remote sensing and environmental engineering has developed a deep-learning model that maps fuel moisture levels in fine detail across 12 western states, from Colorado, Montana, Texas and Wyoming to the Pacific Coast.

The researchers describe their technique in the August 2020 issue of Remote Sensing of Environment. According to the senior author of the paper, Stanford University ecohydrologist Alexandra Konings, the new dataset produced by the model could "massively improve fire studies."

According to the paper's lead author, Krishna Rao, a PhD student in Earth system science at Stanford, the model needs more testing to figure into fire management decisions that put lives and homes on the line. But it's already illuminating previously invisible patterns. Just being able to see forest dryness unfold pixel by pixel over time, he said, can help reveal areas at greatest risk and "chart out candidate locations for prescribed burns."

The work comes at a time of growing urgency for this kind of insight, as climate change extends and intensifies the wildfire season - and as the ongoing COVID-19 pandemic complicates efforts to prevent large fires through controlled burns, prepare for mass evacuations and mobilize first responders.

Getting a read on parched landscapes

Fire agencies today typically gauge the amount of dried-out, flammable vegetation in an area based on samples from a small number of trees. Researchers chop and weigh tree branches, dry them out in an oven and then weigh them again. "You look at how much mass was lost in the oven, and that's all the water that was in there," said Konings, an assistant professor of Earth system science in Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "That's obviously really laborious, and you can only do that in a couple of different places, for only some of the species in a landscape."
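
That weighing procedure amounts to a simple calculation: the water driven off in the oven, expressed as a percentage of the sample's dry mass. A minimal sketch of that arithmetic follows; the function name and the sample numbers are invented for illustration, not taken from the study.

```python
# Minimal sketch: live fuel moisture content (LFMC) is the mass of water in a
# sample expressed as a percentage of its oven-dried mass.
def live_fuel_moisture_content(fresh_mass_g: float, dry_mass_g: float) -> float:
    """Return LFMC in percent, given sample mass before and after oven drying."""
    water_mass_g = fresh_mass_g - dry_mass_g
    return 100.0 * water_mass_g / dry_mass_g

# Example: a 120 g branch sample that weighs 75 g after drying held 45 g of water,
# i.e. an LFMC of 60%.
print(live_fuel_moisture_content(120.0, 75.0))  # 60.0
```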

The U.S. Forest Service painstakingly collects this plant water content data at hundreds of sites nationwide and adds the measurements to the National Fuel Moisture Database, which has amassed some 200,000 such measurements since the 1970s. Known as live fuel moisture content, the metric is well established as a factor that influences wildfire risk. Yet little is known about how it varies over time from one plant to another - or from one ecosystem to another.

For decades, scientists have estimated fuel moisture content indirectly, from informed but unproven guesses about relationships between temperature, precipitation, water in dead plants and the dryness of living ones. According to Rao, "Now, we are in a position where we can go back and test what we've been assuming for so long - the link between weather and live fuel moisture - in different ecosystems of the western United States."

AI with a human assist

The new model uses what's called a recurrent neural network, an artificial intelligence system that can learn to recognize patterns in vast mountains of data. The scientists trained their model using field data from the National Fuel Moisture Database, then put it to work estimating fuel moisture from two types of measurements collected by spaceborne sensors. One involves measurements of visible light bouncing off Earth. The other, known as synthetic aperture radar (SAR), measures the return of microwave radar signals, which can penetrate through leafy branches all the way to the ground surface.
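
As a rough illustration of that setup, the sketch below shows a generic recurrent regression model that reads a per-pixel time series of satellite features -- optical reflectance alongside SAR backscatter -- and outputs a single fuel moisture estimate. The framework (PyTorch), layer sizes, feature count and names are assumptions made for illustration; the study's actual architecture is not reproduced here.

```python
# Illustrative sketch only: a recurrent network that reads a time series of
# satellite-derived features for one pixel and regresses its fuel moisture.
import torch
import torch.nn as nn

class FuelMoistureRNN(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # single regression target: LFMC (%)

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)  # predict from last time step

# Toy usage: 32 pixels, 12 time steps, 8 features (e.g. optical bands + SAR channels)
model = FuelMoistureRNN()
x = torch.randn(32, 12, 8)
y = torch.rand(32) * 200                  # placeholder LFMC labels in percent
loss = nn.MSELoss()(model(x), y)
loss.backward()
```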

"One of our big breakthroughs was to look at a newer set of satellites that are using much longer wavelengths, which allows the observations to be sensitive to water much deeper into the forest canopy and be directly representative of the fuel moisture content," said Konings, who is also a center fellow, by courtesy, at Stanford Woods Institute for the Environment.

To train and validate the model, the researchers fed it three years of data for 239 sites across the American West, starting in 2015, when SAR data from the European Space Agency's Sentinel-1 satellites became available. They checked its fuel moisture predictions in six common types of land cover, including broadleaf deciduous forests, needleleaf evergreen forests, shrublands, grasslands and sparse vegetation, and found they were most accurate - meaning the AI predictions most closely matched field measurements in the National Fuel Moisture Database - in shrublands.
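
A simplified sketch of that kind of per-land-cover comparison might look like the following; the column names, sample values and the choice of root-mean-square error as the accuracy metric are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: group paired model estimates and field measurements by
# land-cover class and compare them with an error metric.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "land_cover": ["shrubland", "shrubland", "grassland", "grassland"],
    "lfmc_field": [78.0, 102.0, 64.0, 55.0],   # field-measured LFMC (%)
    "lfmc_model": [81.0, 97.0, 72.0, 61.0],    # model estimate (%)
})

df["sq_err"] = (df["lfmc_model"] - df["lfmc_field"]) ** 2
rmse_by_cover = np.sqrt(df.groupby("land_cover")["sq_err"].mean())
print(rmse_by_cover)  # lower RMSE = estimates closer to the field data
```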

Rich with aromatic herbs like rosemary and oregano, and often marked by short trees and steep, rocky slopes, shrublands occupy as much as 45 percent of the American West. They're not only the region's biggest ecosystem, Rao said, "they are also extremely susceptible to frequent fires since they grow back rapidly." In California, fires whipped to enormous size by Santa Ana winds burn in a type of shrubland known as chaparral. "This has led fire agencies to monitor them intensively," he said.

The model's estimates feed into an interactive map that fire agencies may eventually be able to use to identify patterns and prioritize control measures. For now, the map offers a dive through history, showing fuel moisture content from 2016 to 2019, but the same method could be used to display current estimates. "Creating these maps was the first step in understanding how this new fuel moisture data might affect fire risk and predictions," Konings said. "Now we're trying to really pin down the best ways to use it for improved fire prediction."

Credit: 
Stanford's School of Earth, Energy & Environmental Sciences

Scientists identify gene linked to thinness that may help resist weight gain

While others may be dieting and hitting the gym hard to stay in shape, some people stay slim effortlessly no matter what they eat. In a study publishing May 21 in the journal Cell, researchers use a genetic database of more than 47,000 people in Estonia to identify a gene linked to thinness that may play a role in resisting weight gain in these metabolically healthy thin people. They show that deleting this gene results in thinner flies and mice and find that expression of it in the brain may be involved in regulating energy expenditure.

"We all know these people: it's around one percent of the population," says senior author Josef Penninger, the director of the Life Sciences Institute and professor of the department of medical genetics at the University of British Columbia. "They can eat whatever they want and be metabolically healthy. They eat a lot, they don't do squats all the time, but they just don't gain weight.

"Everybody studies obesity and the genetics of obesity," he says. "We thought, 'Let's just turn it around and start a new research field.' Let's study thinness."

Penninger's team looked at data from the Estonian Biobank, which includes 47,102 people aged 20 to 44. The team compared the DNA samples and clinical data of healthy thin individuals with those of normal-weight individuals and discovered genetic variants unique to thin individuals in the ALK gene.
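
In outline, a comparison like this asks whether particular variants are more common among thin individuals than among normal-weight controls. A hypothetical, much-simplified sketch of such a case-control allele test follows; the counts and the specific statistic are invented for illustration and do not describe the study's actual analysis pipeline.

```python
# Illustrative sketch: a generic case-control association test of the kind used to
# flag variants enriched in one group. All numbers below are invented.
from scipy.stats import chi2_contingency

# Allele counts at one hypothetical variant:
#                 minor allele   major allele
# thin group           180            820
# normal-weight        120            880
table = [[180, 820],
         [120, 880]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3g}")  # small p -> allele frequencies differ
```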

Scientists have known that the ALK gene frequently mutates in various types of cancer, and it gained a reputation as an oncogene, a gene that drives the development of tumors. The role of ALK outside of cancer has remained unclear. But this new finding suggested that the gene may play a role as a novel thinness gene involved in weight-gain resistance.

The researchers also found that flies and mice without ALK remained thin and were resistant to diet-induced obesity. Furthermore, despite having the same diet and activity levels as normal mice, mice with ALK deleted had lower body weight and less body fat. The team's mouse studies also suggested that ALK, which is highly expressed in the brain, plays a part there by instructing the fat tissues to burn more fat from food.

The researchers say that therapeutics targeting the gene might help scientists fight obesity in the future. "If you think about it, it's realistic that we could shut down ALK and reduce ALK function to see if we did stay skinny," says Penninger. "ALK inhibitors are used in cancer treatments already. It's targetable. We could possibly inhibit ALK, and we actually will try to do this in the future." Further research will be required to see if these inhibitors are effective for this purpose. The team also plans to further study how neurons that express ALK regulate the brain at a molecular level to balance metabolism and promote thinness.

The Estonian Biobank that the team studied was ideal because of its wide age range and its strong phenotype data. But one limitation for replicating these findings is that biobanks that collect biological or medical data and tissue samples don't have a universal standard in data collection, which makes comparability a challenge. The researchers say they will need to confirm their findings with other data banks through meta-analysis. "You learn a lot from biobanks," says Penninger. "But, like everything, it's not the ultimate answer to life, but they're the starting points and very good points for confirmation, very important links and associations to human health."

The team says that its work is unique because of how it combines exploration of the genetic basis of thinness on a population- and genome-wide scale with in vivo analyses in mice and flies of the gene's function. "It's great to bring together different groups, from nutrition to biobanking, to hardcore mouse and fly genetics," says Penninger. "Together, this is one story including evolutionary trees in metabolism, the evolutionary role of ALK, human evidence, and hardcore biochemistry and genetics to provide causal evidence."

Credit: 
Cell Press