
Dietary and physical activity intervention reduces LDL cholesterol level in children

An individualised and family-based physical activity and dietary intervention reduced the plasma LDL cholesterol concentration of primary school children, a new study from the University of Eastern Finland shows. The findings of the Physical Activity and Nutrition in Children (PANIC) Study ongoing at the University of Eastern Finland were published in the European Journal of Nutrition.

The two-year follow-up study explored the effects of an individualised and family-based physical activity and dietary intervention on the plasma lipids of more than 500 Finnish children aged between 6 and 8 years at baseline. The researchers were also interested in which components of the lifestyle intervention had the greatest impact on plasma lipids.

"The LDL cholesterol concentration of children from families who participated in the lifestyle intervention was slightly reduced during the two-year follow-up, whereas no similar change was observed in children in the control group. The lifestyle intervention did not have an impact on other plasma lipids," Adjunct Professor Aino-Maija Eloranta from the University of Eastern Finland says.

The study showed that increasing the consumption of high-fat vegetable oil-based spreads and decreasing the consumption of butter-based spreads played the most important role in decreasing the LDL cholesterol concentration. Replacing high-fat milk with low-fat milk, and doing more physical activity, also explained some of the decrease in the LDL cholesterol concentration.

An elevated LDL cholesterol concentration in childhood may predict artery wall thickening in adulthood. The results of this newly published study thus suggest that a family-based dietary and physical activity intervention may prevent the development of atherosclerosis in adulthood.

"Individualised and family-based dietary and physical activity counselling could be integrated into the services provided by maternity clinics and school health care. This would prevent the development lifestyle diseases in the long run and, consequently, mitigate health care costs," Professor Timo Lakka from the University of Eastern Finland, the Principal Investigator of the study, says.

During the two-year follow-up, families participated in six individualised dietary and physical activity counselling sessions. The sessions were individually tailored to each family and they focused on improving the quality of the family's diet, increasing physical activity and reducing screen time. In addition, children were encouraged to participate in weekly after-school exercise clubs. Children's plasma lipids were analysed at the beginning and at the end of the study.

Credit: 
University of Eastern Finland

The most common organism in the oceans harbors a virus in its DNA

image: The viruses, colored orange, attached to a membrane vesicle from the SAR11 marine bacteria, colored gray, that was the subject of this study.

Image: 
Morris et al./Nature Microbiology

The most common organism in the oceans, and possibly on the entire planet, is a family of single-celled marine bacteria called SAR11. These drifting organisms look like tiny jelly beans and have evolved to outcompete other bacteria for scarce resources in the oceans.

We now know that this group of organisms thrives despite -- or perhaps because of -- the ability to host viruses in their DNA. A study published in May in Nature Microbiology could lead to new understanding of viral survival strategies.

University of Washington oceanographers discovered that the bacteria that dominate seawater, known as Pelagibacter or SAR11, host a unique virus. The virus is of a type that spends most of its time dormant in the host's DNA but occasionally erupts to infect other cells, potentially carrying some of its host's genetic material along with it.

"Many bacteria have viruses that exist in their genomes. But people had not found them in the ocean's most abundant organisms," said co-lead author Robert Morris, a UW associate professor of oceanography. "We suspect it's probably common, or more common than we thought -- we just had never seen it."

This virus' two-pronged survival strategy differs from similar ones found in other organisms. The virus lurks in the host's DNA and gets copied as cells divide, but for reasons still poorly understood, it also replicates in, and is released from, some cells.

The new study shows that as many as 3% of the SAR11 cells can have the virus multiply and split, or lyse, the cell -- a much higher percentage than for most viruses that inhabit a host's genome. This produces a large number of free viruses and could be key to its survival.

"There are 10 times more viruses in the ocean than there are bacteria," Morris said. "Understanding how those large numbers are maintained is important. How does a virus survive? If you kill your host, how do you find another host before you degrade?"

The study could prompt basic research that could help clarify host-virus interactions in other settings.

"If you study a system in bacteria, that is easier to manipulate, then you can sort out the basic mechanisms," Morris said. "It's not too much of a stretch to say it could eventually help in biomedical applications."

The UW oceanography group had published a previous paper in 2019 looking at how marine plankton, including SAR11, use sulfur. That allowed the researchers to cultivate two new strains of the ocean-dwelling organism and analyze one strain, NP1, with the latest genetic techniques.

Co-lead author Kelsy Cain collected samples off the coast of Oregon during a July 2017 research cruise. She diluted the seawater several times and then used a sulfur-containing substance to grow the samples in the lab -- a difficult process, for organisms that prefer to exist in seawater.

The team then sequenced this strain's DNA at the UW PacBio sequencing center in Seattle.

"In the past we got a full genome, first try," Morris said. "This one didn't do that, and it was confusing because it's a very small genome."

The researchers found that a virus was complicating the task of sequencing the genome. Then they discovered a virus wasn't just in that single strain.

"When we went to grow the NP2 control culture, lo and behold, there was another virus. It was surprising how you couldn't get away from a virus," said Cain, who graduated in 2019 with a UW bachelor's in oceanography and now works in a UW research lab.

Cain's experiments showed that the virus' switch to replicating and bursting cells is more active when the cells are deprived of nutrients, lysing up to 30% of the host cells. The authors believe that bacterial genes that hitch a ride with the viruses could help other SAR11 maintain their competitive advantage in nutrient-poor conditions.

"We want to understand how that has contributed to the evolution and ecology of life in the oceans," Morris said.

Credit: 
University of Washington

Study charts developmental map of inner ear sound sensor in mice

image: Single-cell RNA sequencing helped scientists map how sensory hair cells (pink) develop in a newborn mouse cochlea.

Image: 
Source: Helen Maunsell, NIDCD/NIH

A team of researchers has generated a developmental map of a key sound-sensing structure in the mouse inner ear. Scientists at the National Institute on Deafness and Other Communication Disorders (NIDCD), part of the National Institutes of Health, and their collaborators analyzed data from 30,000 cells from mouse cochlea, the snail-shaped structure of the inner ear. The results provide insights into the genetic programs that drive the formation of cells important for detecting sounds. The study also sheds light specifically on the underlying cause of hearing loss linked to Ehlers-Danlos syndrome and Loeys-Dietz syndrome.

The study data is shared on a unique platform open to any researcher, creating an unprecedented resource that could catalyze future research on hearing loss. Led by Matthew W. Kelley, Ph.D., chief of the Section on Developmental Neuroscience at the NIDCD, the study appeared online in Nature Communications. The research team includes investigators at the University of Maryland School of Medicine, Baltimore; Decibel Therapeutics, Boston; and King's College London.

"Unlike many other types of cells in the body, the sensory cells that enable us to hear do not have the capacity to regenerate when they become damaged or diseased," said NIDCD Director Debara L. Tucci, M.D., who is also an otolaryngology-head and neck surgeon. "By clarifying our understanding of how these cells are formed in the developing inner ear, this work is an important asset for scientists working on stem cell-based therapeutics that may treat or reverse some forms of inner ear hearing loss."

In mammals, the primary transducers of sound are hair cells, which are spread across a thin ribbon of tissue (the organ of Corti) that runs the length of the coiled cochlea. There are two kinds of hair cells, inner hair cells and outer hair cells, and they are structurally and functionally sustained by several types of supporting cells. During development, a pool of nearly identical progenitor cells gives rise to these different cell types, but the factors that guide the transformation of progenitors into hair cells are not fully understood.

To learn more about how the cochlea forms, Kelley's team took advantage of a method called single-cell RNA sequencing. This powerful technique enables researchers to analyze the gene activity patterns of single cells. Scientists can learn a lot about a cell from its pattern of active genes because genes encode proteins, which define a cell's function. Cells' gene activity patterns change during development or in response to the environment.
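To illustrate the general idea behind this kind of analysis, the sketch below clusters synthetic cells by their gene activity patterns using dimensionality reduction and k-means. It is a minimal, hypothetical Python example using numpy and scikit-learn, not the study's own pipeline or data; the expression matrix, cell types and cluster count are invented for illustration.

# Illustrative only: cluster synthetic "cells" by their gene activity patterns.
# The expression matrix, cell types and cluster count are made up.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic expression matrix: 300 cells x 50 genes, drawn from three
# hypothetical cell types with different mean activity levels per gene.
type_means = rng.uniform(0.5, 5.0, size=(3, 50))
cells = np.vstack([rng.poisson(m, size=(100, 50)) for m in type_means])

# Reduce the dimensionality of the log-transformed counts, then group cells
# with similar activity patterns, the same general idea used to distinguish
# cell types in real single-cell datasets.
embedding = PCA(n_components=2).fit_transform(np.log1p(cells))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)

print("cells per inferred cluster:", np.bincount(labels))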

"There are only a few thousand hair cells in the cochlea, and they are arrayed close together in a complex mosaic, an arrangement that makes the cells hard to isolate and characterize," said Kelley. "Single-cell RNA sequencing has provided us with a valuable tool to track individual cells' behaviors as they take their places in the intricate structure of the developing cochlea."

Building on their earlier work on 301 cells, Kelley's team set out to examine the gene activity profiles of 30,000 cells from mouse cochleae collected at four time points, beginning with the 14th day of embryonic development and ending with the seventh postnatal day. Collectively, the data represents a vast catalog of information that researchers can use to explore cochlear development and to study the genes that underlie inherited forms of hearing impairment.

Kelley's team focused on one such gene, Tgfβr1, which has been linked to two conditions associated with hearing loss, Ehlers-Danlos syndrome and Loeys-Dietz syndrome. The data showed that Tgfβr1 is active in outer hair cell precursors as early as the 14th day of embryonic development, suggesting that the gene is important for initiating the formation of these cells.

To explore Tgfβr1's role, the researchers blocked the Tgfβr1 protein's activity in cochleae from 14.5-day-old mouse embryos. When they examined the cochleae five days later, they saw fewer outer hair cells compared to the embryonic mouse cochleae that had not been treated with the Tgfβr1 blocker. This finding suggests that hearing loss in people with Tgfβr1 mutations could stem from impaired outer hair cell formation during development.

The study revealed additional insights into the early stages of cochlear development. The developmental pathways of inner and outer hair cells diverge early on; researchers observed distinct gene activity patterns at the earliest time point in the study, the 14th day of embryonic development. This suggests that the precursors from which these cells derive are not as uniform as previously believed. Additional research on cells collected at earlier stages is needed to characterize the initial steps in the formation of hair cells.

In the future, scientists may be able to use the data to steer stem cells toward the hair cell lineage, helping to produce the specialized cells they need to test cell replacement approaches for reversing some forms of hearing loss. The study's results also represent a valuable resource for research on the hearing mechanism and how it goes awry in congenital forms of hearing loss.

The authors have made their data available through the gEAR portal (gene Expression Analysis Resource), a web-based platform for sharing, visualizing, and analyzing large multiomic datasets. The portal is maintained by Ronna Hertzano, M.D., Ph.D., and her team in the Department of Otorhinolaryngology and the Institute for Genome Sciences (IGS) at the University of Maryland School of Medicine.

"Single-cell RNA sequencing data are highly complex and typically require significant skill to access," said Hertzano. "By disseminating this study data via the gEAR, we are creating an 'encyclopedia' of the genes expressed in the developing inner ear, transforming the knowledge base of our field and making this robust information open and understandable to biologists and other researchers."

This press release describes a basic research finding. Basic research increases our understanding of human behavior and biology, which is foundational to advancing new and better ways to prevent, diagnose, and treat disease. Science is an unpredictable and incremental process; each research advance builds on past discoveries, often in unexpected ways. Most clinical advances would not be possible without the knowledge gained through basic research.

Credit: 
NIH/National Institute on Deafness and Other Communication Disorders

Next frontier in bacterial engineering

From bacteria-made insulin that obviates the use of animal pancreases to a better understanding of infectious diseases and improved treatments, genetic engineering of bacteria has redefined modern medicine. Yet, serious limitations remain that hamper progress in numerous other areas.

A decades-old bacterial engineering technique called recombineering (recombination-mediated genetic engineering) allows scientists to scarlessly swap pieces of DNA of their choosing for regions of the bacterial genome. But this valuable and versatile approach has remained woefully underused because it has been limited mainly to Escherichia coli--the lab rat of the bacterial world--and to a handful of other bacterial species.

Now a new genetic engineering method developed by investigators in the Blavatnik Institute at Harvard Medical School and the Biological Research Center in Szeged, Hungary, promises to super-charge recombineering and open the bacterial world at large to this underutilized approach.

A report detailing the team's technique is published May 28 in PNAS.

The investigators have developed a high-throughput screening method to look for the most efficient proteins that serve as the engines of recombineering. Such proteins, known as SSAPs, reside within phages--viruses that infect bacteria.

Applying the new method, which enables the screening of more than two hundred SSAPs, the researchers identified two proteins that appear to be particularly promising.

One of them doubled the efficiency of single-spot edits of the bacterial genome. It also improved tenfold the ability to perform multiplex editing--making multiple edits genome-wide at the same time. The other one enabled efficient recombineering in the human pathogen Pseudomonas aeruginosa, a frequent cause of life-threatening, hospital-acquired infections, for which there has long been a dearth of good genetic tools.

"Recombineering will be a very critical tool that will augment our DNA writing and editing capabilities in the future, and this is an important step in improving the efficiency and reach of the technology," said study first author Timothy Wannier, research associate in genetics in lab of George Church, the Robert Winthrop Professor of Genetics at HMS.

Previous genetic engineering methods, including CRISPR Cas9-based gene-editing, have been ill-suited to bacteria because these methods involve "cutting and pasting" DNA, the researchers said. This is because, unlike multicellular organisms, bacteria lack the machinery to repair double-stranded DNA breaks efficiently and precisely, thus DNA cutting can profoundly interfere with the stability of the bacterial genome, Wannier said. The advantage of recombineering is that it works without cutting DNA.

Instead, recombineering involves sneaking edits into the genome during bacterial reproduction. Bacteria reproduce by splitting in two. During that process, one strand of their double-stranded, circular DNA chromosomes goes to each daughter cell, along with a new second strand that grows during the early stages of fission. The raw materials for recombineering are short strands of DNA, approximately 90 bases long, that are made to order. Each strand is identical to a sequence in the genome, except for edits in the strand's center. These short strands slip into place as the second strands of the daughter cells grow, efficiently incorporating the edits into their genomes.
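As a rough illustration of the "made to order" edit strands described above, the sketch below builds a 90-base single-stranded oligo that matches a genome sequence except for one substituted base at its center. It is a hypothetical Python example, not the authors' protocol or software; the genome fragment and the function name are invented.

# Illustrative only: design a ~90-base edit strand of the kind described above.
# The genome fragment and function name are hypothetical.
import random

def design_edit_oligo(genome: str, position: int, new_base: str, length: int = 90) -> str:
    """Return a single-stranded oligo identical to the genome around
    `position`, except for the substituted base at the oligo's center."""
    half = length // 2
    start, end = position - half, position - half + length
    if start < 0 or end > len(genome):
        raise ValueError("edit site too close to the ends of the sequence")
    window = list(genome[start:end])
    window[position - start] = new_base  # the edit sits at the oligo's center
    return "".join(window)

# Toy example on a made-up 200-base genome fragment.
random.seed(1)
genome = "".join(random.choice("ACGT") for _ in range(200))
original = genome[100]
new_base = "A" if original != "A" else "G"   # guarantee a real substitution
oligo = design_edit_oligo(genome, 100, new_base)
print(len(oligo), original, "->", oligo[45])  # 90-base oligo; center base is edited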

Among many possible uses, edits might be designed to interfere with a gene in order to pinpoint its function or, alternatively, to improve production of a valuable bacterial product. SSAPs mediate attachment and proper placement of the short strand within the growing new half of the daughter chromosome.

Recombineering might enable the substitution of a naturally occurring bacterial amino acid--the building blocks of proteins--with an artificial one. Among other things, doing so could create bacteria that depend on these artificial amino acids to survive and could be used for environmental cleanup of oil spills or other contaminants; the modified bacteria could then be easily annihilated once the work is done, avoiding the risks of releasing engineered microbes into the environment, Wannier said.

"The bacteria would require artificial amino acid supplements to survive, meaning that they are preprogrammed to perish without the artificial feed stock," Wannier added.

A version of recombineering, called multiplex automated genome engineering (MAGE), could greatly boost the benefits of the technique. The particular advantage of MAGE is its ability to make multiple edits throughout the genome in one fell swoop.

MAGE could lead to progress in projects requiring reengineering of entire metabolic pathways, said John Aach, lecturer in genetics at HMS. Case in point, Aach added, are large-scale attempts to engineer microbes to turn wood waste into liquid fuels.

"Many investigator-years' effort in that quest have made great progress, even if they have not yet produced market-competitive products," he said.

Such endeavors require testing many combinations of edits, Aach said.

"We have found that using MAGE with a library of DNA sequences is a very good way of finding the combinations that optimize pathways."

A more recent descendant of recombineering, named directed evolution with random genomic mutations (DIvERGE), promises benefits in the fight against infectious diseases and could open new avenues for tackling antibiotic resistance.

By introducing random mutations into the genome, DIvERGE can speed up natural bacterial evolution. This helps researchers quickly uncover changes that could arise naturally in harmful bacteria that would make them resistant to antibiotic treatment, explained Akos Nyerges, research fellow in genetics in Church's lab at HMS, previously at the Biological Research Center of the Hungarian Academy of Sciences.

"Improvements in recombineering will allow researchers to more quickly test how bacterial populations can gain resistance to new antibacterial drugs, helping researchers to identify less resistance-prone antibiotics," Nyerges said.

Recombineering will likely usher in a whole new world of applications that would be hard to foresee at this juncture, the researchers said.

"The new method greatly improves our ability to modify bacteria," Wannier said. "If we could modify a letter here and there in the past, the new approach is akin to editing words all over a book and doing so opens up the scientific imagination in a way that was not previously possible."

Credit: 
Harvard Medical School

First cases of COVID-19 in New York City primarily from European and US sources

In New York City, the first confirmed COVID-19 cases arose mostly through untracked transmission of the virus from Europe and other parts of the United States, a new molecular epidemiology study of 84 patients reports. The results provide limited evidence to support any direct introductions of the virus from China, where SARS-CoV-2 originated. The first SARS-CoV-2 case in New York State was identified in New York City by 29 February. Knowing the route it took to arrive is essential for evaluating and designing effective containment strategies. Ana S. Gonzalez-Reiche and colleagues took advantage of SARS-CoV-2 sequences collected at the Mount Sinai Health System through March 18, from patients representing 21 New York City neighborhoods and two towns in neighboring Westchester County. The authors sequenced 90 SARS-CoV-2 genomes from 84 of the over 800 confirmed COVID-19 positive cases and analyzed these sequences together with all publicly available SARS-CoV-2 genomes from around the world (more than 2,000). The results indicate SARS-CoV-2 was introduced to New York City through multiple independent but isolated introductions mainly from Europe and other parts of the United States. Most of these cases appear associated with untracked transmission and potential travel-related exposures, the authors say. Very few of the cases were infected with a virus that looked to be introduced from Asia, and in those, the virus was most closely related to viral isolates from Seattle, Washington. The authors also found evidence that early spread of the virus in New York City was sustained by community transmission. Their data also point to the limited efficacy of travel restrictions in a place once multiple introductions of the virus and community-driven transmission have already occurred. The results also underscore the need for early and continued broad testing to identify untracked transmission clusters in communities.

Credit: 
American Association for the Advancement of Science (AAAS)

Government's stimulus program to boost consumer spending

The world has been experiencing an unprecedented economic downturn due to the COVID-19 pandemic. A significant number of economic activities have shut down, causing contractions in global output, as well as the loss of businesses and family income. Recent evidence shows that millions of people globally lost their jobs, and projecting the extent of the impending global economic loss remains a difficult endeavor. In response, almost all countries have been declaring various economic stimulus packages to overcome this situation. With increasing unemployment, economists are devising and proposing economic measures that could help ensure a sustainable increase in consumer spending and circumvent a long-term economic recession. However, whether the proposed economic measures are going to provide a long-term solution to these problems remains a concern.

In response to a stagnant economy, the Japanese government in 2015 implemented a discount shopping coupon scheme through local governments to boost consumer spending. People who purchased these coupons were eligible for a 20% discount. For example, a coupon could be used to purchase products priced at JPY 1250 for JPY 1000. Earlier, the Japanese government introduced schemes to distribute free shopping coupons to the elderly, people of specific regions, and families with children. However, these schemes did not bring about long-term effects on consumer spending partly because the government did not target the right consumer groups. Government initiatives to boost production by stimulating consumer spending depend on the successful implementation of the proposed programs among the right consumer groups. As a result, understanding consumers' responses to stimulus programs is important.

A group of researchers from Hiroshima University led by Professor Yoshihiko Kadoya conducted a study, with support from the Hiroshima Prefectural Government and Hiroshima Bank, to identify the groups of consumers who responded most to the discount shopping coupon scheme. He argues that it is important to know which consumer groups need such stimulus and to design the stimulus program accordingly in order to have a long-term effect on consumer spending. He further explains that people's socio-economic conditions determine whether they will respond to government stimulus programs such as the discount shopping coupon scheme.

The study results show that the purchasers of discount shopping coupons were mainly middle-aged people, homemakers, people with a greater household balance of financial assets, and people who emphasize current consumption over saving for the future. The results further show that greater financial literacy reduced the purchase of discount shopping coupons among people over 40 years of age, while higher household income increased the purchase of discount shopping coupons among middle-aged respondents. Overall, consumers who have families to support, can afford to spend, and currently prioritize consumption responded positively to the discount shopping coupon scheme.

Professor Kadoya explained that the study results have implications for future government stimulus programs to boost an economy hit by the pandemic. He added that, for socio-economic reasons, some consumer groups respond to a particular type of stimulus program more enthusiastically, making the program more effective.

Credit: 
Hiroshima University

Towards a climate neutral Europe: The land sector is key

In 2014, EU leaders agreed that all sectors should contribute to the European 2030 emission reduction target, including the land use sector, which did not count towards the achievement of the previous climate change mitigation goals. In 2018, this agreement was implemented by the Regulation on the inclusion of greenhouse gas emissions and removals from land use, land use change and forestry (LULUCF) in the 2030 EU climate and energy framework. The Regulation lays down new rules for the accounting of the sector's emissions and removals, and for assessing EU Member States' compliance with these. For the first time, this allows the land sector to contribute, at least in part, to the achievement of the EU's climate change mitigation targets.

The paper "Making sense of the LULUCF Regulation: Much ado about nothing?", realized with the collaboration of the CMCC Foundation, assesses the importance and highlights the weaknesses and strengths of the LULUCF Regulation in the context of current EU climate and sustainability policies.

The three authors - among them Maria Vincenza Chiriacò and Lucia Perugini, researchers at the CMCC within the division dedicated to the study of agriculture, forests and ecosystem services - explain that the land sector plays a crucial role in climate change mitigation due to a peculiarity: the sector can either release greenhouse gases into the atmosphere, acting as a source of emissions, or, conversely, store carbon, acting as a sink. Whereas some sectors can reduce or even eliminate their emissions by foregoing the use of fossil fuels (which can be achieved via a transition to renewable energy sources and increased energy efficiency interventions), other sectors - such as food production and waste - cannot. With its capacity to absorb CO2, the land sector can therefore compensate for part of these unavoidable emissions, thus becoming an important player in the EU's mitigation target of reducing emissions by 40% by 2030.

"Given the potential for climate change mitigation embedded in the good management of the LULUCF sector, and underlined in the latest IPCC Special Report on "Climate change and land", it is extremely important that emissions and removals of the land sector are accounted for, to incentivize virtuous forest and agricultural management in the EU. Thanks to this Regulation, the sector can finally contribute to the EU's mitigation targets. This was also necessary to align the EU with the Paris Agreement requirement for economy-wide mitigation targets. Although the new Regulation has much improved the accounting rules for the LULUCF, it is still constrained within certain limits. We can consider the LULUCF regulation as a first step towards its full recognition", affirms Perugini, who is currently involved in the negotiating process under the UNFCCC (United Nations Framework Convention on Climate Change) as part of the Italian delegation dedicated to defining the role of the land sector.

Indeed, the Regulation demands that EU Member States ensure, between 2021 and 2030, that the LULUCF sector remain emission "neutral", and therefore generate neither credits nor debits. As of today, only a small part of the credits generated by the LULUCF sector can be used to compensate emissions generated in other sectors towards the EU climate goals. Furthermore, the Regulation allows possible debits arising from the land sector, under given conditions, to go unaccounted for by individual Member States.

The authors look forward to a further review of the 2030 EU climate framework, as envisioned by the EU Green Deal, as an opportunity to better tap into the sector's sizeable mitigation potential. "With the increased ambitions foreseen by the 'European Green Deal', which includes the specific objective to make of the EU the first climate neutral continent, including the contributions of every economic sector into the EU targets is even more important, as it incentivizes all sectors to do their best in the fight against climate change", continues Chiriacò.

The roadmap designed by the EU Commission - with the final objective of having zero net emissions of greenhouse gases by 2050 - includes the target of reducing GHG emissions by at least 50%, and possibly towards 55%, by 2030 compared with 1990 levels, and therefore increasing current ambitions.

Achieving these climate goals will require a deep cut in emissions in all sectors.

"The subject matter of the LULUCF Regulation closely intersects with that of other EU law and policy instruments dealing with agriculture and forestry, most saliently the Common Agricultural Policy (CAP) and the Renewable Energy Directive (RED). The EU's ambitious targets ask for a strong coordination and integration among the various sustainability and climate policies linked to the land sector, where all debits and credits generated are accounted for, with no limitations. Only in this way will we have full accountability of emissions and removals from the agriculture and forestry sectors, which will be crucial to monitor progress and reward those that engage in virtuous behaviour, and penalize those who do not", concludes Perugini.

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

Limits on evolution revealed by statistical physics

What is and is not possible for natural evolution may be explained using models and calculations from theoretical physics, say researchers in Japan.

Theoretically, every component of every chemical in every cell of all living organisms could vary independently of all the others, a situation researchers refer to as high dimensionality. In reality, evolution does not produce every possible outcome.

Experts have consistently noticed that organisms seem to be restricted to a low level of dimensionality, meaning that their essential building blocks appear to be linked to each other. For example, if A increases, then B always decreases.

"Bacteria have thousands of types of proteins, so in theory those could be thousands of dimensional points in different environments. However, we see the variation fits a one-dimensional curve or low-dimensional surface regardless of the environment," said Professor Kunihiko Kaneko, a theoretical biology expert from the University of Tokyo Research Center for Complex Systems Biology and an author of the recent research publication.

To explain this low dimensionality, researchers simplified the natural world to fit idealized physics models and searched for any mathematical structure within biological complexity.

Researchers have long used statistical physics models to characterize certain materials' transitions from nonmagnetic to magnetic states. Those models use simplified representations of the spinning electrons in magnets. If the spins are aligned, the ensemble of spins shows an ordered, magnetic arrangement. When the spins lose alignment, there is a transition to a disordered, nonmagnetic state. In the researchers' model of biology, instead of a spin being up or down, a gene could be active or inactive.
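To make the analogy concrete, here is a minimal, hypothetical sketch of the kind of spin model the researchers are referring to, not their actual model: a small grid of up/down spins (standing in for active/inactive genes) is updated with standard Metropolis dynamics, and the noise level determines whether an initially ordered, aligned state persists or dissolves into disorder. The grid size, coupling and noise values are arbitrary assumptions.

# Illustrative only: an Ising-like spin model with Metropolis updates, where
# "noise" plays the role of the environmental noise discussed in the article.
# Grid size, coupling strength and noise values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def alignment_after_noise(noise: float, n: int = 20, sweeps: int = 200) -> float:
    spins = np.ones((n, n), dtype=int)          # start fully aligned (ordered)
    for _ in range(sweeps * n * n):
        i, j = rng.integers(n), rng.integers(n)
        neighbours = (spins[(i - 1) % n, j] + spins[(i + 1) % n, j]
                      + spins[i, (j - 1) % n] + spins[i, (j + 1) % n])
        delta_e = 2 * spins[i, j] * neighbours  # energy cost of flipping spin (i, j)
        if delta_e <= 0 or rng.random() < np.exp(-delta_e / noise):
            spins[i, j] *= -1
    return abs(spins.mean())                    # near 1: ordered, near 0: disordered

# Low noise preserves the ordered state; high noise destroys it.
for noise in (1.0, 2.3, 4.0):
    print(f"noise={noise}: alignment {alignment_after_noise(noise):.2f}")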

"We applied the same method to this experiment, to observe what conditions were necessary to go from a disordered, high-dimensionality state to an ordered, low-dimensionality state," said Associate Professor Ayaka Sakata from the Institute of Statistical Mathematics in Tokyo, first author of the research publication.

An essential component of those statistical physics models is background noise, the level of inherent unpredictability that can be quiet and nearly nonexistent or loud and totally overpowering. For living organisms, noise represents tiny environmental variations that can change how genes are expressed, causing different gene expression patterns even between organisms with identical genes, like twins or plants that reproduce by cloning.

In the researchers' mathematical models, changing the volume of environmental noise changed the number of dimensions in evolutionary complexity.

Computer-simulated evolution of hundreds of genes under low levels of environmental noise led to high dimensionality, with gene expression varying in so many ways that no organized changes emerged. Simulated evolution under high levels of environmental noise also led to high variability, with gene expression changing randomly and producing neither organization nor functional states.

"We can imagine that organisms at either of those extreme noise conditions would not be evolutionarily fit - they would go extinct because they could not respond to changes in their environment," said Kaneko.

When the noise levels were moderate, computer-simulated evolution of hundreds of genes led to a model where the change in gene expression followed a one-dimensional curve, as seen in real life.

"With the appropriate environmental noise level, an organism that is both robust and sensitive to its environment can evolve," said Kaneko.

Credit: 
University of Tokyo

Astronomers predict bombardment from asteroids and comets in another planetary system

image: This is a cartoon to accompany the article 'Astronomers predict bombardment from asteroids and comets in another planetary system'

Image: 
Anastasia Kruchevska

The planetary system around star HR8799 is remarkably similar to our Solar System. A research team led by astronomers from the University of Groningen and SRON Netherlands Institute for Space Research has used this similarity to model the delivery of materials by asteroids, comets and other minor bodies within the system. Their simulation shows that the four gas planets receive material delivered by minor bodies, just like in our Solar System. The results were published by the journal Astronomy & Astrophysics on 29 May.

Counting outwards from the Sun, our Solar System consists of four rocky planets, an asteroid belt, four gas giants and another asteroid belt. The inner planets are rich in refractory materials such as metals and silicates, the outer planets are rich in volatiles such as water and methane. While forming, the inner planets had a hard time collecting a volatile atmosphere because the strong solar wind kept blowing the gas away. At the same time, the heat from the Sun evaporated any ice clumps, so it was harder to retain water. In the outer regions, there was less solar heat and wind, so the eventual gas giants could collect water ice and also gather large atmospheres filled with volatiles.

Simulation

Minor bodies, including asteroids, comets and dust, fine-tuned this outcome later on by delivering refractories from the inner belt and both volatiles and refractories from the outer belt. A research team led by astronomers from the University of Groningen and SRON Netherlands Institute for Space Research wondered if the same delivery system applies to planetary systems around other stars. They created a simulation for the system around HR8799, which is similar to our Solar System with four gas giants plus an inner and outer belt, and possibly rocky planets inside the inner belt. Therefore the team could take some unknowns about HR8799 from our own Solar System.

Terrestrial planets

The simulation shows that just like in our Solar System, the four gas planets receive material delivered by minor bodies. The team predicts a total delivery of both material types of around half a millionth of the planets' masses. Future observations, for example by NASA's James Webb Space Telescope, will be able to measure the amount of refractories in the volatile-rich gas giants. 'If telescopes detect the predicted amount of refractories, it means that these can be explained by delivery from the belts as shown in the model', explains Kateryna Frantseva, first author of the paper. 'However, if they detect more refractories than predicted, the delivery process is more active than was assumed in the model, for example, because HR8799 is much younger than the Solar System. The HR8799 system may contain terrestrial planets, for which volatile delivery from the asteroid belts may be of astrobiological relevance.'

Credit: 
University of Groningen

Fearful Great Danes provide new insights into genetic causes of fear

image: Professor Lohi and Great Dane Reno

Image: 
University of Helsinki

In a study of fearful Great Danes, researchers identified a genomic region that includes several candidate genes associated with brain development and function as well as anxiety; further analysis of these genes may reveal new neural mechanisms related to fear.

For the purposes of the study, carried out by Professor Hannes Lohi's research group and published in the Translational Psychiatry journal, data from a total of 120 Great Danes was collected. The Great Dane breed is among the largest dog breeds in the world.

The project was launched after a number of Great Dane owners approached the research group to tell them about their dogs' disturbing fearfulness towards unfamiliar human beings in particular.

"Fear in itself produces a natural and vital reaction, but excessive fear can be disturbing and results in behavioural disorders. Especially in the case of large dogs, strongly expressed fearfulness is often problematic, as it makes it more difficult to handle and control the dog," says Riika Sarviaho, PhD from the University of Helsinki.

In dogs, behavioural disorders associated with anxiety and fearfulness include generalised anxiety disorder and a range of phobias. Fear can be evidenced, for example, as the dog's attempt to flee from situations they experience as frightening. At its worst, fear can manifest as aggression, which may result in attacks against other dogs or humans.

"Previous studies have suggested that canine anxiety and fearfulness could correspond with anxiety disorder in humans. In fact, investigating fearfulness in dogs may also shed more light on human anxiety disorders and help understand their genetic background," Professor Lohi explains the broader goal of the study.

A new genomic region underlying fearfulness

The study utilised a citizen science approach as the dog owners contributed by completing a behavioural survey concerning their dogs, in which the dogs received scores according to the intensity of fear. Through genetic research, a genomic region associated with fearfulness was identified in chromosome 11. The analysis was repeated by taking into consideration the socialisation carried out in puppyhood, or the familiarisation of the dogs with new people, dogs and situations. The re-analysis reinforced the original finding.

"In the case of behavioural studies, it's important to keep in mind that, in addition to genes, the environment has a significant impact on the occurrence of specific traits. For dogs, the socialisation of puppies has been found to be an important environmental factor that strongly impacts fearfulness. In this study, the aim was to exclude the effect of puppyhood socialisation and, thus, observe solely the genetic predisposition to fearfulness," says Sarviaho.

The genomic region was also studied in more detail with the help of whole genome sequencing, but, so far, the researchers have not succeeded in identifying within it a specific gene variant that predisposes to fearfulness.

"Although no actual risk variant was identified, the genomic region itself is interesting, as it contains a number of genes previously associated in various study models with neural development and function, as well as anxiety. For example, the MAPK9 gene has been linked with brain development and synaptic plasticity as well as anxiety, while RACK1 has been associated with neural development and N4BP3 with neurological diseases," says Professor Lohi.

Link between accelerated puppyhood growth and timidity?

A genomic region in humans corresponding with the one now associated with canine fearfulness is linked to a rare syndrome, which causes both neurological symptoms and, among other things, accelerated growth in childhood.

"Research on the topic is only at the early stages and findings have to be carefully interpreted, but it's interesting to note, when focusing on a particularly large dog breed, that the genomic region associated with fearfulness appears to have a neurological role as well as one related to growth," Sarviaho adds.

So far, gene discoveries in canine behavioural research have remained fairly rare, and the genomic region now identified has not previously been linked with fearfulness. Lohi's research group has previously described two genomic regions associated with canine generalised fear and sensitivity to sound. The genetic research findings support the hypothesis that fearfulness and anxiety are inherited traits. To be able to identify more detailed risk factors and confirm the relevance of the findings, the study should be repeated with a more extensive dataset.

Credit: 
University of Helsinki

Better prepared for future crises

The article gives an overview of the spread of Covid-19 and outlines six causes of the crisis: the exponential infection rate, international integration, the insufficient capacity of health care systems in many countries, conflicts of competence and a lack of foresight on the part of many government agencies, the need to grapple with the economic impacts of the shutdown parallel to the health crisis, as well as weaknesses in capital markets resulting from the financial crisis of 2008. The solutions proposed by the team of authors were developed using a framework developed by the International Risk Governance Council to which Ortwin Renn contributed.

According to the study, five of the aspects of risk governance described in the framework are particularly relevant for efforts to overcome the Corona Crisis. Accordingly, the authors highlight the importance of increasing global capacities for the scientific and technical appraisal of risks in order to provide reliable early warning systems. This research must be supplemented by an analysis of the perceived risk - i.e. individual and public opinion, concerns, and wishes. The awareness of and acknowledgement of these perceptions facilitates effective crisis communication and enables authorities to issue effective public health guidelines. This leads to a key task for decision-makers - risk evaluation: Whether and to what extent are risk reduction measures necessary? What trade-offs are identified during the development of measures and restrictions and how can they be resolved on the basis of recognized ethical criteria in light of the considerable degree of uncertainty? This characterization and evaluation of the risk provides qualified options for risk management. The focus here is on the development of collectively binding decisions on measures to minimize the suffering of affected populations as a whole as well as strategies to minimize undesirable side effects. Coordinated crisis and risk communication underpinned by robust scientific and professional communications expertise is crucial to the success of efforts to tackle the crisis.

The team of authors has distilled ten recommendations from its findings:

Address risks at source: in the case of pandemics this means reducing the possibility of viruses being transmitted from animals to humans.

Respond to warnings: This includes the review of national and international risk assessments, and the development of better safeguards for risks with particularly serious impacts.

Acknowledge trade-offs: Measures to reduce a particular risk will impact other risks. Undesirable side effects must be identified in risk assessments.

Consider the role of technology: How can machine learning and other technologies be applied to support pandemic assessment, preparedness, and responses?

Invest in resilience: Gains in organizational efficiency have made critical systems such as health care more vulnerable. Their resilience must be strengthened, for example by reducing dependencies on important products and services.

Concentrate on the most important nodes in the system: The early imposition of restrictions on air travel has proved effective in combating a pandemic. A global emergency fund could be established to address the cost of such measures.

Strengthen links between science and policymaking: Those countries in which scientific information and science-based policy advice are readily available to policymakers have had greater success in combating the coronavirus.

Build state capacities: Tackling systemic risks should be viewed as an integral aspect of good governance that is performed on a continuing basis rather than as an emergency response.

Improve communication: Communication around Covid-19 was slow or deficient in a number of countries. One solution would be the establishment of national and international risk information and communication units.

Reflect on social disruption: The Corona Crisis is forcing people and organizations to experiment with new work and life patterns. Now is the time to consider which of these changes should be maintained over the longer term.

Credit: 
Research Institute for Sustainability (RIFS) – Helmholtz Centre Potsdam

Solution to century-old math problem could predict transmission of infectious diseases

A Bristol academic has achieved a milestone in statistical/mathematical physics by solving a 100-year-old physics problem - the discrete diffusion equation in finite space.

The long-sought-after solution could be used to accurately predict encounter and transmission probability between individuals in a closed environment, without the need for time-consuming computer simulations.

In his paper, published in Physical Review X, Dr Luca Giuggioli from the Department of Engineering Mathematics at the University of Bristol describes how to analytically calculate the probability of occupation (in discrete time and discrete space) of a diffusing particle or entity in a confined space - something that until now was only possible computationally.

Dr Giuggioli said: "The diffusion equation models random movement and is one of the fundamental equations of physics. The analytic solution of the diffusion equation in finite domains, when time and space are continuous, has been known for a long time.

"However, to compare model predictions with empirical observations, one needs to study the diffusion equation in finite space. Despite the work of illustrious scientists such as Smoluchowski, Pólya, and other investigators of yore, this has remained an outstanding problem for over a century--until now.

"Excitingly, the discovery of this exact analytic solution allows us to tackle problems that were almost impossible in the past because of the prohibitive computational costs."

The finding has far-reaching implications across a range of disciplines and possible applications include predicting molecules diffusing inside cells, bacteria roaming in a petri dish, animals foraging within their home ranges, or robots searching in a disaster area.

It could even be used to predict how a pathogen is transmitted in a crowd between individuals.

Solving the conundrum involved the joint use of two techniques: special mathematical functions known as Chebyshev polynomials, and a technique invented to tackle electrostatic problems, the so-called method of images.

This approach allowed Dr Giuggioli to construct the solution to the discrete diffusion equation hierarchically, building the solution in higher dimensions from the one in lower dimensions.
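For context, the occupation probability in question can also be computed numerically by brute force, which is precisely the computationally costly route the analytic solution makes unnecessary. The sketch below is a hypothetical Python illustration, not Dr Giuggioli's method: it propagates the occupation probabilities of a discrete-time random walker on a small finite one-dimensional lattice with reflecting walls; the lattice size, start site and number of steps are arbitrary assumptions.

# Illustrative only: occupation probabilities of a discrete-time random walk
# confined to a finite 1D lattice, obtained by brute-force propagation of a
# transition matrix rather than by the analytic solution described above.
import numpy as np

n_sites, start, steps = 21, 10, 50

# Unbiased nearest-neighbour walk; at a wall the step toward the wall is
# reflected, so the walker stays put with probability 1/2.
P = np.zeros((n_sites, n_sites))
for i in range(n_sites):
    P[max(i - 1, 0), i] += 0.5
    P[min(i + 1, n_sites - 1), i] += 0.5

p = np.zeros(n_sites)
p[start] = 1.0                     # walker starts at the central site
for _ in range(steps):
    p = P @ p                      # one time step of the discrete diffusion

print("total probability:", round(p.sum(), 6))            # conserved, equals 1
print("probability of being back at the start:", round(p[start], 4))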

Credit: 
University of Bristol

Anesthesia's effect on consciousness solved, settling century-old scientific debate

image: An ordered cholesterol cluster in a cell membrane briefly becomes disordered on exposure to chloroform.

Image: 
Hansen lab, Scripps Research

LA JOLLA, Calif. and JUPITER, Fla. - MAY 29, 2020 - Surgery would be inconceivable without general anesthesia, so it may come as a surprise that despite its 175-year history of medical use, doctors and scientists have been unable to explain how anesthetics temporarily render patients unconscious.

A new study from Scripps Research published Thursday evening in the Proceedings of the National Academy of Sciences (PNAS) solves this longstanding medical mystery. Using modern nanoscale microscopic techniques, plus clever experiments in living cells and fruit flies, the scientists show how clusters of lipids in the cell membrane serve as a missing go-between in a two-part mechanism. Temporary exposure to anesthesia causes the lipid clusters to move from an ordered state to a disordered one, and then back again, leading to a multitude of subsequent effects that ultimately cause changes in consciousness.

The discovery by chemist Richard Lerner, MD, and molecular biologist Scott Hansen, PhD, settles a century-old scientific debate, one that still simmers today: Do anesthetics act directly on cell-membrane gates called ion channels, or do they somehow act on the membrane to signal cell changes in a new and unexpected way? It has taken nearly five years of experiments, calls, debates and challenges to arrive at the conclusion that it's a two-step process that begins in the membrane, the duo say. The anesthetics perturb ordered lipid clusters within the cell membrane known as "lipid rafts" to initiate the signal.

"We think there is little doubt that this novel pathway is being used for other brain functions beyond consciousness, enabling us to now chip away at additional mysteries of the brain," Lerner says.

Lerner, a member of the National Academy of Sciences, is a former president of Scripps Research, and the founder of Scripps Research's Jupiter, Florida campus. Hansen is an associate professor, in his first posting, at that same campus.

The Ether Dome

Ether's ability to induce loss of consciousness was first demonstrated on a tumor patient at Massachusetts General Hospital in Boston in 1846, within a surgical theater that later became known as "the Ether Dome." So consequential was the procedure that it was captured in a famous painting, "First Operation Under Ether," by Robert C. Hinckley. By 1899, German pharmacologist Hans Horst Meyer, and then in 1901 British biologist Charles Ernest Overton, sagely concluded that lipid solubility dictated the potency of such anesthetics.

Hansen recalls turning to a Google search while drafting a grant submission to investigate further that historic question, thinking he couldn't be the only one convinced of membrane lipid rafts' role. To Hansen's delight, he found a figure from Lerner's 1997 PNAS paper, "A hypothesis about the endogenous analogue of general anesthesia," that proposed just such a mechanism. Hansen had long looked up to Lerner--literally. As a predoctoral student in San Diego, Hansen says he worked in a basement lab with a window that looked directly out at Lerner's parking space at Scripps Research.

"I contacted him, and I said, 'You are never going to believe this. Your 1997 figure was intuitively describing what I am seeing in our data right now,'" Hansen recalls. "It was brilliant."

For Lerner, it was an exciting moment as well.

"This is the granddaddy of medical mysteries," Lerner says. "When I was in medical school at Stanford, this was the one problem I wanted to solve. Anesthesia was of such practical importance I couldn't believe we didn't know how all of these anesthetics could cause people to lose consciousness."

Many other scientists, through a century of experimentation, had sought the same answers, but they lacked several key elements, Hansen says: First, microscopes able to visualize biological complexes smaller than the diffraction limits of light, and second, recent insights about the nature of cell membranes, and the complex organization and function of the rich variety of lipid complexes that comprise them.

"They had been looking in a whole sea of lipids, and the signal got washed out, they just didn't see it, in large part for a lack of technology," Hansen says.

From order to disorder

Using Nobel Prize-winning microscopic technology, specifically a microscope called dSTORM, short for "direct stochastic optical reconstruction microscopy," a post-doctoral researcher in the Hansen lab bathed cells in chloroform and watched something like the opening break shot of a game of billiards. Exposing the cells to chloroform strongly increased the diameter and area of cell membrane lipid clusters called GM1, Hansen explains.

What he was looking at was a shift in the GM1 cluster's organization, a shift from a tightly packed ball to a disrupted mess, Hansen says. As it grew disordered, GM1 spilled its contents, among them, an enzyme called phospholipase D2 (PLD2).

Tagging PLD2 with a fluorescent chemical, Hansen was able to watch via the dSTORM microscope as PLD2 moved like a billiard ball away from its GM1 home and over to a different, less-preferred lipid cluster called PIP2. This activated key molecules within PIP2 clusters, among them, TREK1 potassium ion channels and their lipid activator, phosphatidic acid (PA). The activation of TREK1 basically freezes neurons' ability to fire, and thus leads to loss of consciousness, Hansen says.

"The TREK1 potassium channels release potassium, and that hyper-polarizes the nerve--it makes it more difficult to fire--and just shuts it down," Hansen says.

Lerner insisted they validate the findings in a living animal model. The common fruit fly, Drosophila melanogaster, provided that data. Deleting PLD expression in the flies rendered them resistant to the effects of sedation. In fact, they required double the exposure to the anesthetic to demonstrate the same response.

"All flies eventually lost consciousness, suggesting PLD helps set a threshold, but is not the only pathway controlling anesthetic sensitivity," they write.

Hansen and Lerner say the discoveries raise a host of tantalizing new possibilities that may explain other mysteries of the brain, including the molecular events that lead us to fall asleep.

Lerner's original 1997 hypothesis of the role of "lipid matrices" in signaling arose from his inquiries into the biochemistry of sleep, and his discovery of a soporific lipid he called oleamide. Hansen and Lerner's collaboration in this arena continues.

"We think this is fundamental and foundational, but there is a lot more work that needs to be done, and it needs to be done by a lot of people," Hansen says.
Lerner agrees.

"People will begin to study this for everything you can imagine: Sleep, consciousness, all those related disorders," he says. "Ether was a gift that helps us understand the problem of consciousness. It has shined a light on a heretofore unrecognized pathway that the brain has clearly evolved to control higher-order functions."

Credit: 
Scripps Research Institute

Targeted therapy tepotinib for non-small cell lung cancer with MET exon 14 skipping mutation shows durable response

image: This is Xiuning Le, M.D., Ph.D.

Image: 
The University of Texas MD Anderson Cancer Center

HOUSTON -- Patients with advanced non-small cell lung cancer (NSCLC) and the MET exon 14 (METex14) skipping mutation had a 46.5% objective response rate to the targeted therapy drug tepotinib, as shown in a study published today in the New England Journal of Medicine and presented at the 2020 American Society of Clinical Oncology (ASCO) Annual Meeting, the ASCO20 Virtual Meeting (Abstract 9556 - Poster 322), by researchers from The University of Texas MD Anderson Cancer Center.

"The success of this trial, alongside other studies on the same class of drugs, establishes MET exon 14 as an actionable target for non-small cell lung cancer," said senior author Xiuning Le, M.D., Ph.D., assistant professor of Thoracic/Head & Neck Medical Oncology. "We're pleased to show that another group of lung cancer patients may benefit from precision medicine."

METex14 skipping is a mutation that drives cancer growth and occurs in 3-4% of all NSCLC patients. Patients with METex14 skipping tend to be older, with a median age of 74, and typically don't have other actionable mutations with existing targeted therapy options.

The study results represent cohort A of the single-arm, international Phase II VISION trial, which is ongoing with additional cohorts. More than 6,700 NSCLC patients were prescreened for MET alterations through liquid and/or tissue biopsy. A total of 152 patients with advanced NSCLC and METex14 skipping were treated with tepotinib. Patients with prior treatment and/or stable brain metastasis were allowed to participate in the trial. Participants were treated with 500mg daily oral tepotinib.

Meaningful benefit for an elderly population

The primary endpoint was objective response rate, defined as complete or partial response according to the RECIST v1.1 criteria and confirmed by independent review. After nine months of follow-up, the primary efficacy population of 99 patients had a 46.5% objective response rate and a median duration of response of 11.1 months.
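For readers who want to see the arithmetic behind that endpoint, the sketch below tallies an objective response rate from a list of best responses; the patient labels are hypothetical placeholders, not data from the VISION trial.

```python
from collections import Counter

# Minimal sketch of how an objective response rate (ORR) is tallied under
# RECIST v1.1: ORR = (complete + partial responses) / evaluable patients.
# The best-response labels below are hypothetical, not VISION trial data.
best_responses = ["CR", "PR", "SD", "PD", "PR", "SD", "PR", "PD"]

counts = Counter(best_responses)
responders = counts["CR"] + counts["PR"]  # complete or partial response
orr = responders / len(best_responses)

print(f"Objective response rate: {orr:.1%}")  # 50.0% for this toy list
```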

"The median duration of response of almost one year is very meaningful for this patient population," Le said. "It's important for these elderly patients to have another treatment option, other than traditional chemotherapy, in oral form that can improve their quality of life for a long duration."

Toxicities were manageable, with grade ≥3 treatment-related adverse events reported in 27.6% of patients. The most common side effect was peripheral edema. Eleven percent of patients discontinued treatment due to adverse events.

The study also collected patient-reported outcomes, which indicated an improvement in coughing and overall maintenance of quality of life.

Liquid biopsy for biomarker detection

The VISION study represents the largest METex14 skipping cohort to be identified prospectively through liquid biopsy, verifying that liquid biopsy is a reliable method to detect the mutation. The study also showed that liquid biopsy was a useful tool to identify response to the drug.

Matched liquid biopsy samples at baseline and on treatment were available for 51 patients. Next-generation sequencing found that 34 of those patients had a molecular response, with a complete or deep reduction of the mutation, and radiographic response was confirmed in 68% of patients who had a molecular response.
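As a rough check on those proportions, the short calculation below reproduces them from the counts quoted in the text; the derived patient count is an estimate for illustration only, not a figure reported by the trial.

```python
# Back-of-the-envelope arithmetic behind the liquid biopsy figures above.
# The counts reported in the text are used directly; the derived number of
# radiographically confirmed responders is an estimate, not a trial figure.
matched_pairs = 51          # patients with baseline and on-treatment samples
molecular_responders = 34   # complete or deep reduction of the METex14 signal

molecular_response_rate = molecular_responders / matched_pairs
print(f"Molecular response rate: {molecular_response_rate:.0%}")  # ~67%

# The text reports radiographic response confirmed in 68% of molecular responders
confirmed_radiographic = round(0.68 * molecular_responders)
print(f"Estimated molecular responders with radiographic response: {confirmed_radiographic}")  # ~23
```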

"This study marked a major advance in that we now have a highly effective, oral therapy for a group of non-small lung cancer patients that previously did not have any targeted therapy options," said co-author John Heymach, M.D., Ph.D., chair of Thoracic-Head & Neck Medical Oncology. "We are proud to lead the field forward as we work to provide novel treatments to patients."

Tepotinib was granted breakthrough therapy designation by the U.S. Food and Drug Administration (FDA) in September 2019, based on early data from the VISION study. It was approved for use as the first oral targeted therapy for MET-positive NSCLC in Japan in March 2020.

A full list of co-authors and their disclosures is included in the paper. The research was supported by Merck KGaA, Darmstadt, Germany.

Credit: 
University of Texas M. D. Anderson Cancer Center

Researchers conduct metabolite analysis of ALS patient blood plasma

High-throughput analysis of blood plasma could aid in the identification of diagnostic and prognostic biomarkers for amyotrophic lateral sclerosis (ALS), according to research from North Carolina State University. The work sheds further light on a pathway involved in disease progression and appears to rule out a suspected environmental neurotoxin as a contributor to ALS.

ALS is a progressive neurodegenerative disease that causes deterioration of nerve cells in the brain and spinal cord. Currently, treatments are hampered by lack of definitive targets, a diagnostic process that often takes over a year to complete, and insufficient and subjective methods for monitoring progression.

"Early diagnosis is important, but we are in dire need of quantitative markers for monitoring progression and the efficacy of therapeutic intervention," says Michael Bereman, associate professor of biological sciences at NC State and corresponding author of a paper describing the work. "Since disruptions in metabolism are hallmark features of ALS, we wanted to investigate metabolite markers as an avenue for biomarker discovery."

Bereman, with colleagues from NC State and Australia's Macquarie University, took blood plasma samples from 134 ALS patients and 118 healthy individuals in the Macquarie University MND Biobank. They used chip-based capillary zone electrophoresis coupled to high-resolution mass spectrometry to identify and analyze blood plasma metabolites in the samples. This method quickly separates the plasma into its molecular components, which are then identified by their mass. The researchers developed two computer algorithms: one to separate healthy and ALS samples and the other to predict disease progression.

The most significant metabolite markers were associated with muscle activity: elevated levels of creatine, which aids muscle movement, and decreased levels of creatinine and methylhistidine, which are byproducts of muscle activity and breakdown. Creatine was elevated by 49% in ALS patients, while creatinine and methylhistidine decreased by 20% and 24%, respectively. Additionally, the ratio of creatine to creatinine increased by 370% in male ALS patients and by 200% in female ALS patients.
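To make the percent-change arithmetic concrete, the sketch below combines the reported group-level shifts into a creatine-to-creatinine ratio. The baseline concentrations are hypothetical, and the result illustrates only how the ratio amplifies opposite-direction changes, not the study's patient-level calculation.

```python
# Sketch of the percent-change arithmetic behind the metabolite comparisons.
# Baseline concentrations are hypothetical; only the reported group-level
# changes (creatine +49%, creatinine -20%) come from the text.
def percent_change(case: float, control: float) -> float:
    return (case - control) / control * 100

control_creatine, control_creatinine = 30.0, 70.0  # hypothetical values
als_creatine = control_creatine * 1.49      # +49% as reported
als_creatinine = control_creatinine * 0.80  # -20% as reported

ratio_control = control_creatine / control_creatinine
ratio_als = als_creatine / als_creatinine

print(f"Creatine/creatinine ratio change: {percent_change(ratio_als, ratio_control):.0f}%")
# Prints ~86% for this toy example; the much larger sex-specific increases
# reported in the study (370% in men, 200% in women) come from patient-level
# data, which simple group averages like these cannot reproduce.
```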

Through machine learning, the algorithms were then able both to separate healthy participants from ALS patients and to predict the progression of the disease. The models were evaluated for sensitivity (the ability to detect disease) and specificity (the ability to correctly identify individuals without disease). The disease detection model performed at 80% sensitivity and 78% specificity, and the progression model performed at 74% sensitivity and 87% specificity.
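For readers unfamiliar with those metrics, the sketch below shows how sensitivity and specificity fall out of a confusion matrix; the individual counts are hypothetical, chosen only so that they match the reported cohort sizes and percentages.

```python
# Minimal sketch of how sensitivity and specificity are computed from a
# classifier's predictions. The confusion-matrix counts are hypothetical,
# picked only to reproduce the reported 80% / 78% with the cohort sizes above.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)  # fraction of ALS patients correctly flagged
    specificity = tn / (tn + fp)  # fraction of healthy controls correctly cleared
    return sensitivity, specificity

# Hypothetical split of the 134 ALS patients and 118 healthy controls
sens, spec = sensitivity_specificity(tp=107, fn=27, tn=92, fp=26)
print(f"Sensitivity: {sens:.0%}, Specificity: {spec:.0%}")  # 80%, 78%
```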

"Creatine deficiency alone does not seem to be a problem - our results confirm that the creatine kinase pathway of cellular energy production, known to be altered in ALS, is not working as well as it should," Bereman says.

"These results are strong evidence that a panel of plasma metabolites could be used both for diagnosis and as a way to monitor disease progression," says Gilles Guillemin, professor of neurosciences at Macquarie University and co-author of the paper. "Our next steps will be to examine these markers over time within the same patient."

Another goal of the work was to look for evidence of exposure to an environmental neurotoxin, beta-methylamino-L-alanine (BMAA), which is found in blue-green algae blooms. BMAA has been associated with ALS since the 1950s, but few studies have attempted to detect it in human ALS patients. The researchers did not detect BMAA in the blood of either healthy participants or ALS patients.

Credit: 
North Carolina State University