Fly model offers new approach to unraveling 'difficult' pathogen

image: A normal fruit fly gut revealing the regular structure of actin-stained microvilli (green) and cell nuclei (blue). This organization in the fruit fly is very similar to that of the human intestine. A study published in iScience makes use of these close parallels in structure and function to identify new activities of a toxin produced by the problematic hospital pathogen C. difficile.

Image: 
Bier Lab, UC San Diego

The Clostridium difficile pathogen takes its name from the French word for "difficult." A bacterium that is known to cause symptoms ranging from diarrhea to life-threatening colon damage, C. difficile is part of a growing epidemic of concern for the elderly and patients on antibiotics.

Cases of C. difficile infection have progressively increased in Western countries, with 29,000 reported deaths per year in the United States alone.

Now, biologists at the University of California San Diego are drawing parallels from newly developed models of the common fruit fly to help lay the foundation for novel therapies to fight the pathogen's spread. Their report is published in the journal iScience.

"C. difficile infections pose a serious risk to hospitalized patients," said Ethan Bier, a distinguished professor in the Division of Biological Sciences and science director of the UC San Diego unit of the Tata Institute for Genetics and Society (TIGS). "This research opens a new avenue for understanding how this pathogen gains an advantage over other beneficial bacteria in the human microbiome through its production of toxic factors. Such knowledge could aid in devising strategies to contain this pathogen and reduce the great suffering it causes."

As with most bacterial pathogens, C. difficile secretes toxins that enter host cells, disrupt key signaling pathways and weaken the host's normal defense mechanisms. The most potent strains of C. difficile unleash a two-component toxin that triggers a string of complex cellular responses, culminating in the formation of long membrane protrusions that allow the bacteria to attach more effectively to host cells.

UC San Diego scientists in Bier's lab created strains of fruit flies that are capable of expressing the active component of this toxin, known as "CDTa." The strains allowed them to study the elaborate mechanisms underlying CDTa toxicity in a live model system focused on the gut, which is key since the digestive system of these small flies is surprisingly similar to that of humans.

"The fly gut provides a rapid and surprisingly accurate model for the human intestine, which is the site of infection by C. difficile," said Bier. "The vast array of sophisticated genetic tools in flies can identify new mechanisms for how toxic factors produced by bacteria disrupt cellular processes and molecular pathways. Such discoveries, once validated in a mammalian system or human cells, can lead to novel treatments for preventing or reducing the severity of C. difficile infections."

The fruit fly model gave the researchers a clear path to examine genetic interactions disrupted at the hands of CDTa. They ultimately found that the toxin induces a collapse of networks that are essential for nutrient absorption. As a result, the model flies' body weight, fecal output and overall lifespan were severely reduced, mimicking symptoms in human C. difficile-infected patients.

Credit: 
University of California - San Diego

Scientists identify new biochemical 'warning sign' of early-stage depression

image: Major depressive disorder affects over 300 million people worldwide, but so far there have been no established biomarkers that clinicians can rely on to detect early-stage depression symptoms. Now, in a new study published in Scientific Reports, scientists at Fujita Health University led by Professor Yasuko Yamamoto have shown that the levels of anthranilic acid in blood may provide a basis for identifying patients at risk of major depressive disorder.

Image: 
Fujita Health University

Chronic pain, and the inflammation that often accompanies it, is believed to be one of the major factors in the onset of major depressive disorder. Therefore, to better understand what happens physiologically during depression, scientists have long studied several metabolic processes, or "pathways," related to inflammation. One of these, the kynurenine pathway, is the principal pathway involved in metabolizing the amino acid tryptophan. Now, a new study by a team of scientists, led by Professor Kuniaki Saito and Associate Professor Yasuko Yamamoto of Japan's Fujita Health University, shows that elevated levels of anthranilic acid--an important metabolite (product/intermediate) of the kynurenine pathway--in the blood may serve as a marker for identifying individuals who are experiencing depression-like symptoms and are at risk of developing major depressive disorder. The study is published in Scientific Reports.

"Various lines of scientific evidence suggest that tryptophan metabolism is involved in the symptoms of major depressive disorder," notes Dr Yamamoto. For example, past studies have reported that patients with depression and other conditions involving depression-like symptoms show increased blood levels of various tryptophan metabolites produced by the kynurenine pathway. These findings led Dr Saito's team to speculate that metabolites of the kynurenine pathway may serve as "biomarkers" that could allow early detection of patients at risk of developing depression.

To test this idea, Dr Saito's team analyzed serum (fractionated, clear part of blood) samples from 61 patients who had clinical test scores that indicated a high risk of developing major depressive disorder. For scientifically accurate comparison, they also used a "control" group, wherein they analyzed serum samples from 51 healthy individuals. The scientists measured the serum levels of various kynurenine pathway metabolites with a technique called high-performance liquid chromatography, which allows precise measurement of concentrations. Compared to the healthy "controls," the patients at risk of depression had increased serum levels of anthranilic acid. Furthermore, the women at risk of depression had reduced serum levels of tryptophan. Given that the kynurenine pathway consumes tryptophan and produces anthranilic acid, these findings are aligned with the previous findings of increased kynurenine pathway activity in patients at risk of developing major depressive disorder.

The scientists also wanted to find out whether tryptophan metabolite profiles can predict the progression of depression-related symptoms. For that, they did further analyses on samples and data from 33 patients at risk of depression whose scores on a clinical depression scale at different timepoints indicated a decline from a healthy state to a depressed state. The analyses showed that increases in serum anthranilic acid levels over time correlated with worsening clinical test scores. Prof Saito states, "This finding confirms that there is indeed a strong, direct correlation between anthranilic acid levels in blood and the severity of depression on the clinical depression scale."

Because chronic pain can cause depression and related symptoms, the scientists also scrutinized tryptophan metabolite profiles in patients with chronic pain disorders affecting the mouth, jaw, and face. By testing serum samples from 48 patients with chronic pain disorders and 42 healthy individuals, the research team found that the patients with chronic pain had elevated serum levels of anthranilic acid and lower serum levels of tryptophan, just like those who were at risk of major depressive disorder.

So, what is the takeaway of this study? According to Prof Saito and team, these results show that clinicians can monitor serum levels of anthranilic acid to determine whether patients are at risk of developing major depressive disorder. As Prof Saito notes, "monitoring the levels of tryptophan metabolites may be useful for the realization of pre-emptive medicine for depressive symptoms." Preemptive medicine in this case involves specific treatments that can prevent a patient from developing depression. Of course, more research is necessary to validate the clinical relevance of serum anthranilic acid levels and to understand exactly how tryptophan metabolism influences outward aspects like mood. That said, this study has the potential to pinpoint the physiological processes that contribute to depression and thus improve the standard of care for preventing depression.

Credit: 
Fujita Health University

The power of going small: Copper oxide subnanoparticle catalysts prove superior

image: This is a research concept of copper oxide subnanoparticles.

Image: 
Makoto Tanabe, Kimihisa Yamamoto

Scientists at Tokyo Institute of Technology have shown that copper oxide particles on the sub-nanoscale are more powerful catalysts than those on the nanoscale. These subnanoparticles can also catalyze the oxidation reactions of aromatic hydrocarbons far more effectively than catalysts currently used in industry. This study paves the way to better and more efficient utilization of aromatic hydrocarbons, which are important materials for both research and industry.

The selective oxidation of hydrocarbons is important in many chemical reactions and industrial processes, and as such, scientists have been on the lookout for more efficient ways to carry out this oxidation. Copper oxide (CunOx) nanoparticles have been found useful as a catalyst for processing aromatic hydrocarbons, but the quest for even more effective compounds has continued.

In recent years, scientists have applied noble metal-based catalysts composed of particles at the sub-nano level. At this level, particles measure less than a nanometer, and when placed on appropriate substrates, they can offer even higher surface areas than nanoparticle catalysts to promote reactivity (Fig. 1).

Following this trend, a team of scientists including Prof. Kimihisa Yamamoto and Dr. Makoto Tanabe from Tokyo Institute of Technology (Tokyo Tech) investigated chemical reactions catalyzed by CunOx subnanoparticles (SNPs) to evaluate their performance in the oxidation of aromatic hydrocarbons. CunOx SNPs of three specific sizes (with 12, 28, and 60 copper atoms) were produced within tree-like frameworks called dendrimers (Fig. 2). Supported on a zirconia substrate, they were applied to the aerobic oxidation of an organic compound with an aromatic benzene ring.

X-ray photoelectron spectroscopy (XPS) and infrared spectroscopy (IR) were used to analyze the synthesized SNPs' structures, and the results were supported by density functional theory (DFT) calculations.

The XPS analysis and DFT calculations revealed increasing ionicity of the copper-oxygen (Cu-O) bonds as SNP size decreased. This bond polarization was greater than that seen in bulk Cu-O bonds, and the greater polarization was the cause of the enhanced catalytic activity of the CunOx SNPs.

Tanabe and the team members observed that the CunOx SNPs sped up the oxidation of the CH3 groups attached to the aromatic ring, thereby leading to the formation of products. When the CunOx SNP catalyst was not used, no products were formed. The catalyst with the smallest CunOx SNPs, Cu12Ox, had the best catalytic performance and proved to be the longest lasting.

As Tanabe explains, "The enhancement of the ionicity of the Cu-O bonds with decrease in size of the CunOx SNPs enables their better catalytic activity for aromatic hydrocarbon oxidations."

Their research supports the contention that copper oxide SNPs have great potential as catalysts in industrial applications. "The catalytic performance and mechanism of these size-controlled synthesized CunOx SNPs would be better than those of noble metal catalysts, which are most commonly used in industry at present," Yamamoto says, hinting at what CunOx SNPs can achieve in the future.

Credit: 
Tokyo Institute of Technology

Bovine embryo completely regenerates placenta-forming cells

image: An early bovine embryo regenerating its TE cells which will later form a large part of the placenta. (Left: intact, Middle: after removal of TE, Right: regenerated) (Kohri N. et al., Journal of Biological Chemistry. November 8, 2019)

Image: 
Kohri N. et al., Journal of Biological Chemistry. November 8, 2019

A calf was born from an embryo lacking cells which form a large part of the placenta, providing new insight into the regenerative capacity of mammalian embryos.

Mammalian development starts from a single cell -- a fertilized egg. The egg goes through multiple cell divisions to increase its cell numbers and then starts forming a sphere-like structure with a cavity inside, called the blastocyst. The blastocyst consists of two types of cells, the inner cell mass (ICM) and the trophectoderm (TE), which develop into an embryo proper and a large part of the placenta, respectively.

Scientists led by Manabu Kawahara at Hokkaido University have shown that, since bovine ICM cells can regenerate TE, they are capable of forming both the embryo and placenta. The study was published in the Journal of Biological Chemistry and became one of the top 50 most viewed papers from November through December 2019 on the Journal's website.

To examine the ICM's capacity to regenerate TE, the researchers cultivated mouse and bovine blastocysts and removed the entire TE from both. They found that both types of blastocysts regained their sphere-like shapes within 24 hours. However, the rate of blastocyst regeneration was remarkably higher in bovine cells (97%) than in mouse cells (57%). The more complete recovery of cell numbers in bovine blastocysts compared to mouse blastocysts suggests the bovine cells have a higher regenerative capacity.

Further experiments revealed abnormal protein expression in the TE of mouse regenerated blastocysts, whereas bovine regenerated blastocysts showed normal gene expressions overall.

To test the regenerated blastocysts' developmental ability, the researchers then transferred them to recipient females. After the embryo transfer, to their surprise, one of the four cows became pregnant and a female calf was naturally born with an apparently normal placenta. In contrast, none of the more than 100 regenerated mouse embryos transferred to recipients developed to term.

"We will continue to monitor the health of the calf born from the regenerated blastocyst," says Manabu Kawahara. "Our study suggests that we can remove and use a large part of TE for genetic testing to breed cattle with improved qualities. Also, further studies could reveal the mechanism of cell fate decision in mammals and its differences between species."

Credit: 
Hokkaido University

How long coronaviruses persist on surfaces and how to inactivate them

The novel coronavirus 2019-nCoV is making headlines worldwide. Since there is no specific therapy against it, the prevention of infection is of particular importance in order to stem the epidemic. Like all droplet infections, the virus can spread via hands and surfaces that are frequently touched. "In hospitals, these can be door handles, for example, but also call buttons, bedside tables, bed frames and other objects in the direct vicinity of patients, which are often made of metal or plastic," explains Professor Günter Kampf from the Institute of Hygiene and Environmental Medicine at the Greifswald University Hospital.

Together with Professor Eike Steinmann, head of the Department for Molecular and Medical Virology at Ruhr-Universität Bochum (RUB), he has compiled comprehensive findings from 22 studies on coronaviruses and their inactivation for a future textbook. "Under the circumstances, the best approach was to publish these verified scientific facts in advance, in order to make all information available at a glance," says Eike Steinmann.

Infectious on surfaces for up to nine days

The evaluated studies, which focus on the pathogens Sars coronavirus and Mers coronavirus, showed, for example, that the viruses can persist on surfaces and remain infectious at room temperature for up to nine days. On average, they survive between four and five days. "Low temperature and high air humidity further increase their lifespan," points out Kampf.

Tests with various disinfection solutions showed that agents based on ethanol, hydrogen peroxide or sodium hypochlorite are effective against coronaviruses. If these agents are applied in appropriate concentrations, they reduce the number of infectious coronaviruses by four so-called log steps within one minute: this means, for example, from one million to only 100 pathogenic particles. If preparations based on other active ingredients are used, the product should be proven to be at least effective against enveloped viruses ("limited virucidal activity"). "As a rule, this is sufficient to significantly reduce the risk of infection," explains Günter Kampf.
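The "log step" arithmetic above can be checked with a short calculation. The sketch below is illustrative only; the figures (one million particles, four log steps) are simply the example quoted in the text:

```python
# A "log step" (log10 reduction) divides the count of infectious
# particles by 10, so n log steps divide it by 10**n.
def reduce_by_log_steps(initial_count: float, log_steps: int) -> float:
    """Return the particle count remaining after a log10 reduction."""
    return initial_count / 10 ** log_steps

# The example from the text: one million particles, four log steps.
remaining = reduce_by_log_steps(1_000_000, 4)
print(remaining)  # -> 100.0
```

A four-log reduction therefore corresponds to inactivating 99.99% of the viral particles present.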

Findings should be transferable to 2019-nCoV

The experts assume that the results from the analyses of other coronaviruses are transferable to the novel virus. "Different coronaviruses were analysed, and the results were all similar," concludes Eike Steinmann.

Credit: 
Ruhr-University Bochum

Plugging into a 6G future with users at the center

video: 6G communications will need to be more secure and reliable, involving a decentralized network architecture

Image: 
2020 KAUST

With the deployment of 5G networks throughout 2020, scientists are now focusing their research attention on 6G communications. This research will need to be human-centric, according to KAUST postdoctoral fellow Shuping Dang.

Dang and his colleagues examined the potential applications and challenges of 6G communications in a study published in Nature Electronics. They found that 6G communications will need to be more secure, protect people's privacy, be ubiquitously accessible and affordable, and safeguard users' mental and physical well-being.

Achieving these criteria is no small feat. 5G communications have many advantages, supporting internet protocol television, high-definition video streaming, basic virtual and augmented reality services, and faster transmission; however, they do not involve the use of ground-breaking technologies. Their main focus has been on enhancing performance rather than introducing new technology.

Communications systems are updated roughly every decade; each update is known as a generation, or simply a "G."

Meanwhile, 6G is expected to revolutionize the way we communicate. Envision a day, somewhere around 2030, when recreational scuba divers, for example, use their phones to transmit holographic images of themselves in their underwater surroundings to colleagues at work.

Other applications will include more accurate indoor positioning, allowing an application to identify exactly where you are in a 10-story building; a more tactile internet that allows remote machine operation or cooperative automated driving; improved in-flight and on-the-move connectivity; and the transmission of biological information extracted from exhaled breath, allowing the detection of developing contagions and the diagnosis of disease.

"These 6G communications will need to be more secure and reliable, involving a decentralized network architecture," says KAUST research scientist Osama Amin.

This could involve using blockchain technology, famous for its use in Bitcoin mining, to make data anonymous and untraceable. This technology would prevent private data leakages, such as those that have recently captured public attention. Currently, governments and corporations control internet connections through the use of centralized servers. A decentralized blockchain network would involve storing encoded data on thousands of nodes that cannot all be accessed by a single person or entity.

Researchers will also need to investigate the use of physical-layer security technologies, which exploit the physical characteristics of wireless communication, such as noise and fading, to improve user security and privacy.

"6G communications will also require the employment of a 3D network architecture, where terrestrial base stations, unmanned aerial vehicles and space satellites are jointly used to provide seamless, high-quality and affordable communication services to people living in remote and underdeveloped areas," adds advisor Mohamed-Slim Alouini. This could even involve the deployment of underwater communication nodes in the form of autonomous vehicles and sensors that are connected to underwater base stations.

"Artificial intelligence will play a pivotal role in the 6G communication revolution," explains Dang. Machine learning algorithms could be used, for example, to efficiently allocate base station resources and achieve close-to-optimum performance. Intelligent materials placed on surfaces in the environment, such as on buildings or on streetlights, could be used to sense the wireless environment and apply customized changes to radio waves. And deep learning techniques could be used to improve the accuracy of indoor positioning.

All of this will require systems that offer an extremely large bandwidth for signal transmission and that are robust in the face of adverse weather conditions. Also required are devices that consume less energy and that have longer battery lives. This will need further research into technologies that can harvest energy from ambient radio frequency signals, microvibrations and sunlight. Finally, researchers will need to investigate the impacts of these evolving technologies on mental and physical health.

"Our current study aims to provide a vision of 6G and to serve as a research guideline in the post-5G era," says Alouini. He and his team are investigating the integration of satellite, airborne and terrestrial networks for forming a decentralized 6G communication system.

They are also studying the use of artificial intelligence and deep learning techniques to optimize 6G communications and the use of "smart radio environments" with reconfigurable reflecting surfaces to optimize signal transmission. Finally, the team is working on expanding the wireless spectrum to the terahertz and optical bands to unlock a much larger bandwidth for 6G communication systems.

Credit: 
King Abdullah University of Science & Technology (KAUST)

More people and fewer wild fish lead to an omega-3 supply gap

Everyone knows that eating fish is good for you, in part because of the healthy omega-3 fatty acids that it contains.

Several of these fatty acids are essential in human diets, especially when it comes to infant development and reducing cognitive decline in adults.

But dwindling fish stocks worldwide, combined with a growing population, mean that a substantial number of people on the planet don't get enough of these essential nutrients, a new study shows.

The researchers focused on two particular omega-3 fatty acids, abbreviated EPA and DHA, because they are the two fatty acids that are both essential and limited in supply. Other fatty acids are readily available through plants.

"When we looked at how EPA and DHA are produced and consumed, in humans and in the ocean, we found that 70 per cent of the world's population doesn't get what they really need. That can have far-reaching health consequences," said Helen Hamilton, first author of the paper.

Hamilton recently completed a postdoc at the Norwegian University of Science and Technology's (NTNU) Industrial Ecology Programme and is now a sustainability specialist at Biomar Global.

Hamilton and her colleagues documented the reasons behind the supply gap and suggested ways to increase supplies through improved recycling and tapping new primary sources, and to reduce demand through alternative diets. Their findings have been published in the academic journal Nature Food.

The world's fisheries are under pressure, with an estimated 63 per cent of all fish stocks considered exploited and in need of rebuilding, Hamilton and her colleagues wrote. That makes it unlikely that people can catch enough fish to provide their dietary needs for EPA and DHA.

"We can't take any more fish out of the ocean," Hamilton said. "That means we really need to optimize what we do have or find new, novel sources. We need to look at how EPA and DHA are produced and consumed by humans and in the ocean."

To arrive at their results, the researchers collected data from the UN's Food and Agriculture Organization and the International Marine Ingredients Organization, along with published research articles and reports. The data was fed into a model called a multi-layer material flow analysis framework. This allowed Hamilton and her colleagues to estimate the amount of available omega-3 fatty acids, and how and where they are consumed.
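The study describes its multi-layer material flow analysis only at a high level. As a rough illustration of the kind of balance such a framework computes, one can net supply sources against total demand. All category names and tonnages below are hypothetical placeholders, not the study's data:

```python
# Toy material-flow balance for an essential nutrient (EPA/DHA).
# Categories and numbers are hypothetical, for illustration only.
supply_tonnes = {
    "wild_catch": 500.0,    # retained in directly consumed wild fish
    "aquaculture": 300.0,   # net output from farmed species
    "by_products": 50.0,    # recovered from heads, innards, etc.
}
demand_tonnes = 1200.0      # total dietary requirement of the population

total_supply = sum(supply_tonnes.values())
gap = demand_tonnes - total_supply
print(f"supply={total_supply} t, gap={gap} t")  # supply=850.0 t, gap=350.0 t
```

A real material flow analysis tracks many more layers (production, trade, processing losses, consumption) and balances each one, but the principle is the same: the supply gap is what remains after every source has been accounted for.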

The researchers suggest better fisheries management, such as limiting catches and modifying fishing gear to cut the catch of unwanted fish, as ways to boost fish stocks. However, allowing fish stocks to recover is a long-term solution that will result in short-term decreases in supplies, they said.

Another marine source of EPA and DHA is krill, currently harvested from Antarctic waters.

"Increasing krill catch for use as feed could substantially increase the EPA/DHA supply," Hamilton and her colleagues wrote. Annual harvest rates of roughly 300 000 tons are well below recommended catch limits of 5.6 million tons, the researchers wrote.

But catching krill isn't necessarily a quick fix either, they said. Harvesting krill in the Antarctic is both costly and challenging because of the sheer distance from Antarctic waters to markets, they wrote.

Fish farming can help, but many farmed fish, including salmon, need fish feed that includes fish meal and fish oil. The strong demand for fish oil and meal has led the aquaculture industry to develop fish feed based on plant products, like soy. But too little EPA and DHA in fish feed can cause health problems in farmed fish and also reduce the amount of omega-3 fatty acids they contain.

Hamilton and her colleagues suggest that aquaculture can make strategic use of fish oils in fish feed by feeding these essential compounds to farmed fish at key life stages, especially right before the fish will be slaughtered for consumption.

The researchers' analysis also showed that aquaculture, while a major consumer of EPA and DHA, is also a major producer when it comes to species that don't depend on fish oils in their diet. These species include molluscs and carp. Freshwater fish like carp can also synthesize these two substances themselves, the researchers noted.

People rarely eat all of a fish, yet these leftover by-products, such as innards and heads, also contain omega-3 fatty acids. Fish feed and fish oil can be made from fish wastes, the researchers wrote, with the trick being to collect and process the wastes.

"In Europe and North America, fish are gutted and processed by industry, which makes it really easy to collect and reuse by-products," Hamilton said. "But in China, specifically, the culture is to filet and gut the fish at home, making it very difficult to use the waste for anything useful."

Asia, far more than elsewhere in the world, is where there's most to be gained by collecting fish by-products for use, she said.

As a result, better use of by-products will require both cultural changes and central processing facilities, they said.

Changing diets can help

The researchers observed that EPA and DHA can be produced by both natural and genetically modified microalgae, as well as microbacteria and plants.

But that will also require a scale-up in production and changes in cultural acceptance, particularly in Europe, where current regulations limit use of genetically modified organisms.

"There is no silver bullet for closing the supply gap and none of the strategies we have suggested are easy. But we have to find a way to balance healthy human nutrition, a growing population and protecting our environment," Hamilton said. "To do this, we will need a combination of strategies that target different parts of the supply chain. However, before we go forward, it is essential we understand potential trade-offs, such as the repercussions that can come from the widespread use of genetically modified organisms," she said.

Credit: 
Norwegian University of Science and Technology

Jackiw-Rebbi zero-mode: Realizing non-Abelian braiding in non-Majorana system

image: (a) Nanowire-based cross-shaped junction supporting the non-Abelian braiding of Jackiw-Rebbi zero-modes. (b) Numerical results for the evolution of wavefunction that demonstrates the non-Abelian braiding properties of Jackiw-Rebbi zero-modes.

Image: 
©Science China Press

As an important branch of quantum computation, topological quantum computation has been drawing extensive attention for advantages such as fault tolerance. Topological quantum computation is based on the non-Abelian braiding of quantum states, a property that, in quantum statistics, is closely related to the non-locality of those states. Over the last two decades, the exploration of topological quantum computation has mainly focused on the Majorana fermion (or its zero-energy incarnation, the Majorana zero-mode), an exotic particle possessing non-Abelian statistics and well known for being its own anti-particle.

The Jackiw-Rebbi zero-mode was first proposed in high-energy physics in the 1970s. As topology grew in importance in condensed matter physics, the concept was adopted to refer to the topologically protected zero-mode at the boundary of a topological insulator. In contrast with the Majorana zero-mode, which appears only with a non-vanishing superconducting order parameter, the Jackiw-Rebbi zero-mode is not self-conjugate and can therefore exist even in the absence of particle-hole symmetry.

Recently, in a research article entitled "Double-frequency Aharonov-Bohm effect and non-Abelian braiding properties of Jackiw-Rebbi zero-mode," published in National Science Review, researchers from four universities, including Peking University and Xi'an Jiaotong University, proposed a new method for realizing non-Abelian braiding. Co-authors Yijia Wu, Haiwen Liu, Jie Liu, Hua Jiang, and X. C. Xie demonstrated that the Jackiw-Rebbi zero-modes that exist widely in topological insulators also support non-Abelian braiding.

In this work, the authors constructed Jackiw-Rebbi zero-modes in a quantum spin Hall insulator. By showing that the Aharonov-Bohm oscillation frequency of transport mediated by Jackiw-Rebbi zero-modes is doubled, they argued that the Majorana zero-mode can be viewed as a special case of the Jackiw-Rebbi zero-mode with particle-hole symmetry. Through numerical simulation, they also demonstrated that Jackiw-Rebbi zero-modes exhibit non-Abelian braiding properties in the absence of superconductivity. The authors believe that these results not only constitute theoretical progress in exhibiting the properties of the Jackiw-Rebbi zero-mode, but also open up the possibility of realizing topological quantum computation in a non-Majorana (non-superconducting) system.

This latest research also put forward a generalized and continuously tunable fusion rule in topological quantum computation when the degeneracy of the Jackiw-Rebbi zero-modes is lifted. The authors concluded that the Jackiw-Rebbi zero-mode could be a new candidate for topological quantum computation, holding additional advantages over its Majorana cousin: (1) superconductivity is no longer required; (2) it possesses a generalized fusion rule; and (3) the energy gap is generally larger.

Credit: 
Science China Press

Scientists resurrect mammoth's broken genes

image: New research builds on evidence that the last mammoths on Wrangel Island suffered from a variety of genetic defects.

Image: 
Rebecca Farnham / University at Buffalo

BUFFALO, N.Y. -- Some 4,000 years ago, a tiny population of woolly mammoths died out on Wrangel Island, a remote Arctic refuge off the coast of Siberia.

They may have been the last of their kind anywhere on Earth.

To learn about the plight of these giant creatures and the forces that contributed to their extinction, scientists have resurrected a Wrangel Island mammoth's mutated genes. The goal of the project was to study whether the genes functioned normally. They did not.

The research builds on evidence suggesting that in their final days, the animals suffered from a medley of genetic defects that may have hindered their development, reproduction and their ability to smell.

The problems may have stemmed from rapid population decline, which can lead to inbreeding and low genetic diversity -- trends that may damage a species' ability to purge or limit harmful genetic mutations.
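The population-genetic logic here can be sketched with a minimal Wright-Fisher simulation. This is a standard textbook model, not the study's method, and the population sizes, selection strength and starting frequency below are invented for illustration. It shows that a mildly deleterious allele, which selection reliably purges from a large population, can drift to fixation in a small one:

```python
import random

def fixation_count(pop_size, s, p0, reps, max_gen=1000, seed=42):
    """Haploid Wright-Fisher model with selection coefficient s against
    a deleterious allele; returns how many replicates fix the allele."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(reps):
        p = p0
        for _ in range(max_gen):
            # selection: deleterious allele has relative fitness 1 - s
            p = p * (1 - s) / (p * (1 - s) + (1 - p))
            # genetic drift: binomial resampling of the next generation
            k = sum(rng.random() < p for _ in range(pop_size))
            p = k / pop_size
            if p == 0.0 or p == 1.0:
                break
        if p == 1.0:
            fixed += 1
    return fixed

# Hypothetical numbers: same starting frequency, same selection strength
small = fixation_count(pop_size=10,  s=0.05, p0=0.1, reps=200)
large = fixation_count(pop_size=200, s=0.05, p0=0.1, reps=200)
# In the small population, drift overwhelms selection and the harmful
# allele fixes in a noticeable fraction of replicates; in the larger
# population, selection purges it almost every time.
```

The same qualitative effect underlies the "mutational meltdown" scenario invoked for the Wrangel Island mammoths.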

"The key innovation of our paper is that we actually resurrect Wrangel Island mammoth genes to test whether their mutations actually were damaging (most mutations don't actually do anything)," says lead author Vincent Lynch, PhD, an evolutionary biologist at the University at Buffalo. "Beyond suggesting that the last mammoths were probably an unhealthy population, it's a cautionary tale for living species threatened with extinction: If their populations stay small, they too may accumulate deleterious mutations that can contribute to their extinction."

The study was published on Feb. 7 in the journal Genome Biology and Evolution.

Lynch, an assistant professor of biological sciences in the UB College of Arts and Sciences, joined UB in 2019 and led the project while he was at the University of Chicago. The research was a collaboration between Lynch and scientists at the University of Chicago, Northwestern University, University of Virginia, University of Vienna and Penn State. The first authors were Erin Fry from the University of Chicago and Sun K. Kim from Northwestern University.

To conduct the study, Lynch's team first compared the DNA of a Wrangel Island mammoth to that of three Asian elephants and two more ancient mammoths that lived when mammoth populations were much larger.

The researchers identified a number of genetic mutations unique to the Wrangel Island mammoth. Then, they synthesized the altered genes, inserted that DNA into cells in petri dishes, and tested whether proteins expressed by the genes interacted normally with other genes or molecules.

The scientists did this for genes that are thought or known to be involved in a range of important functions, including neurological development, male fertility, insulin signaling and sense of smell.

In the case of detecting odors, for example, "We know how the genes responsible for our ability to detect scents work," Lynch says. "So we can resurrect the mammoth version, make cells in culture produce the mammoth gene, and then test whether the protein functions normally in cells. If it doesn't -- and it didn't -- we can infer that it probably means that Wrangel Island mammoths were unable to smell the flowers that they ate."

The research builds on prior work by other scientists, such as a 2017 paper in which a different research team identified potentially detrimental genetic mutations in the Wrangel Island mammoth, which is estimated to have belonged to a population of only a few hundred individuals.

"The results are very complementary," Lynch says. "The 2017 study predicts that Wrangel Island mammoths were accumulating damaging mutations. We found something similar and tested those predictions by resurrecting mutated genes in the lab. The take-home message is that the last mammoths may have been pretty sick and unable to smell flowers, so that's just sad."

Credit: 
University at Buffalo

Galaxy formation simulated without dark matter

image: 1.5 billion years after the start of the simulation. The lighter the color, the higher the density of the gas. The light blue dots show young stars.

Image: 
© AG Kroupa/Uni Bonn

For the first time, researchers from the Universities of Bonn and Strasbourg have simulated the formation of galaxies in a universe without dark matter. To replicate this process on the computer, they have instead modified Newton's laws of gravity. The galaxies that were created in the computer calculations are similar to those we actually see today. According to the scientists, their assumptions could solve many mysteries of modern cosmology. The results are published in the Astrophysical Journal.

Cosmologists nowadays assume that matter was not distributed entirely evenly after the Big Bang. The denser places attracted more and more matter from their surroundings due to their stronger gravitational forces. Over the course of several billion years, these accumulations of gas eventually formed the galaxies we see today.

An important ingredient of this theory is the so-called dark matter. It is held responsible for the initial uneven distribution that led to the agglomeration of the gas clouds, and it also explains some puzzling observations. For instance, stars in rotating galaxies often move so fast that they should actually be ejected. It appears that there is an additional source of gravity in the galaxies that prevents this - a kind of "star putty" that cannot be seen with telescopes: dark matter.

However, there is still no direct proof of its existence. "Perhaps the gravitational forces themselves simply behave differently than previously thought," explains Prof. Dr. Pavel Kroupa from the Helmholtz Institute for Radiation and Nuclear Physics at the University of Bonn and the Astronomical Institute of Charles University in Prague. This theory bears the abbreviation MOND (MOdified Newtonian Dynamics); it was proposed by the Israeli physicist Prof. Dr. Mordehai Milgrom. According to the theory, the attraction between two masses obeys Newton's laws only up to a certain point. At very low accelerations, as is the case in galaxies, it becomes considerably stronger. This is why galaxies do not break apart as a result of their rotational speed.

Results close to reality

"In cooperation with Dr. Benoit Famaey in Strasbourg, we have now simulated for the first time whether galaxies would form in a MOND universe and if so, which ones," says Kroupa's doctoral student Nils Wittenburg. To do this he used a computer program for complex gravitational calculations which was developed in Kroupa's group; this is necessary because with MOND, the attraction of a body depends not only on its own mass, but also on whether other objects are in its vicinity.

The scientists then used this software to simulate the formation of stars and galaxies, starting from a gas cloud several hundred thousand years after the Big Bang. "In many aspects, our results are remarkably close to what we actually observe with telescopes," explains Kroupa. For instance, the distribution and velocity of the stars in the computer-generated galaxies follow the same pattern that can be seen in the night sky. "Furthermore, our simulation resulted mostly in the formation of rotating disk galaxies like the Milky Way and almost all other large galaxies we know," says the scientist. "Dark matter simulations, on the other hand, predominantly create galaxies without distinct matter disks - a discrepancy to the observations that is difficult to explain."

Calculations based on the existence of dark matter are also very sensitive to changes in certain parameters, such as the frequency of supernovae and their effect on the distribution of matter in galaxies. In the MOND simulation, however, these factors hardly played a role.

Yet the recently published results from Bonn, Prague and Strasbourg do not correspond to reality in all points. "Our simulation is only a first step," emphasizes Kroupa. For example, the scientists have so far only made very simple assumptions about the original distribution of matter and the conditions in the young universe. "We now have to repeat the calculations and include more complex influencing factors. Then we will see if the MOND theory actually explains reality."

Credit: 
University of Bonn

Statistical method developed at TUD allows the detection of higher order dependencies

image: Full dependence structure.

Image: 
Copyright: Björn Böttcher

Distance multivariance is a multivariate dependence measure, which can detect dependencies between an arbitrary number of random vectors each of which can have a distinct dimension. In his new article, Böttcher now presents the concept as a unifying theory that combines several classical dependence measures. Connections between two or more high-dimensional variables can be captured and even complicated non-linear dependencies as well as dependencies of higher order can be detected. For numerous scientific disciplines, this method opens up new approaches to detect and evaluate dependencies.

Can the number of missed school days be linked to the age, gender or origin of school students? In a survey of 146 school students, social scientists analysed various influencing variables on missed school days and examined them for dependencies in order to derive a prediction model. This classic question has already been widely discussed and analysed with various statistical approaches.

The statistical measure "distance multivariance" offers a novel approach to this question: Dr. Björn Böttcher from the Institute of Mathematical Stochastics was able to use distance multivariance to identify cultural background, along with a higher-order dependence involving age and gender, as influencing factors for missed school days, and was thus able to suggest a minimal model. "This is an elementary example of an application of the developed method. I cannot judge whether this is also a substantiated finding with regard to the investigated question. Working with real data, and especially the subject-specific interpretation of the results, always requires expertise in the respective subject," Dr. Böttcher emphasizes, and provides numerous other illustrative examples of the application of his method: "In the paper, I refer to more than 350 freely available data sets from all scientific disciplines in which statistically significant higher-order dependencies occur. Again, whether these dependencies are meaningful in terms of the underlying surveys requires further investigation as well as expertise in the respective fields," he adds, "of course, requests for cooperation are always welcome."

Statistical analysis usually considers dependencies between individual variables. Especially with many variables, it is desirable to remove independent variables prior to studying any specific types of dependence. Dr. Björn Böttcher presents a method for this purpose called "dependence structure detection", which can also be used to detect higher-order dependencies. Variables are called "higher-order dependent", if they are pairwise independent, but more than two variables still influence each other jointly. Dependencies of this kind have not been in the focus of applications so far.

Some scientists suspect that higher-order dependencies occur in genetics in particular: the basic idea here is that several genes together determine a property, but these genes show neither individually any dependence among each other nor individually with the property - thus indeed these would be higher-order dependent. The framework of "distance multivariance" and the "dependence structure detection" method are now promising tools for such investigations.
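What "higher-order dependent" means can be made concrete with the classic XOR construction (a textbook example, not taken from Böttcher's paper): three binary variables that are pairwise independent yet jointly dependent, exactly the kind of structure that pairwise measures miss and distance multivariance is designed to detect:

```python
from fractions import Fraction
from itertools import product

# Joint distribution: X, Y fair coins, Z = X XOR Y; each (x, y) has prob 1/4
outcomes = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]
prob = Fraction(1, 4)

def marginal(idx, val):
    return sum(prob for o in outcomes if o[idx] == val)

def pair(idx1, v1, idx2, v2):
    return sum(prob for o in outcomes if o[idx1] == v1 and o[idx2] == v2)

# Every pair is independent: P(A=a, B=b) == P(A=a) * P(B=b)
pairwise_independent = all(
    pair(i, a, j, b) == marginal(i, a) * marginal(j, b)
    for i, j in ((0, 1), (0, 2), (1, 2))
    for a, b in product((0, 1), repeat=2)
)

# ...but the triple is not: P(X=0, Y=0, Z=1) = 0, not 1/8
p_triple = sum(prob for o in outcomes if o == (0, 0, 1))
jointly_independent = (p_triple == Fraction(1, 8))
```

Any two of the three variables look completely unrelated, yet knowing two of them determines the third, mirroring the gene-interaction scenario described above.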

Implementations of the new methods are provided for direct applications in the package 'multivariance' for the free statistical computing environment 'R'.

Credit: 
Technische Universität Dresden

New progress in turbulent combustion modeling: Filtered flamelet model

image: Distribution of the H2O in Sydney bluff body turbulent jet flame at different axial sections.

Image: 
©Science China Press

In turbulent combustion, the interaction between strongly nonlinear reaction sources and turbulence produces a broad spectrum of spatial and temporal scales. From a modeling point of view, it is especially challenging to predict the field statistics satisfactorily. Although various turbulent combustion models exist, e.g. flamelet-like models, probability-density-function-like models, the conditional moment closure model and the eddy dissipation concept model, the bases of their closures have not been rigorously justified. Recently, a new modeling approach for turbulent diffusion flames was proposed by Lipo Wang's group from Shanghai Jiao Tong University and Jian Zhang from the Institute of Mechanics, CAS. The article, titled "Non-premixed turbulent combustion modeling based on the filtered turbulent flamelet equation," was published in SCIENCE CHINA Physics, Mechanics & Astronomy.

In the framework of large eddy simulation (LES), a new filtered flamelet equation was first derived, from which a filtered flamelet model could be constructed directly from filtered quantities. For instance, the scalar dissipation rate of the filtered progress variable, rather than of the unfiltered one, enters the model construction, which largely reduces the model uncertainty. Figures 1 and 2 compare simulation results for the Sydney bluff-body turbulent jet flame using different models: the newly proposed filtered flamelet model with a simplified mechanism (solid red lines), the flamelet/progress variable approach with a detailed mechanism (dotted pink lines), the flamelet/progress variable approach with a simplified mechanism (dotted blue lines), the laminar flamelet model with a detailed mechanism (dotted green lines), and experimental results (solid triangles). Overall, the new model agrees satisfactorily with the experimental data.
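The distinction between filtered and unfiltered quantities that underlies LES modeling can be illustrated in one dimension. The sketch below applies a generic box filter to an invented scalar field and makes no claim about the paper's actual numerics; it shows how filtering removes subgrid-scale fluctuations, so the scalar dissipation computed from the filtered field differs strongly from that of the unfiltered field:

```python
import math

N  = 1024
dx = 2 * math.pi / N
x  = [i * dx for i in range(N)]

# Hypothetical scalar field: large-scale wave plus small-scale fluctuation
phi = [math.sin(xi) + 0.3 * math.sin(32 * xi) for xi in x]

def box_filter(field, width):
    """Top-hat (box) filter of odd width, periodic boundaries."""
    half = width // 2
    n = len(field)
    return [sum(field[(i + k) % n] for k in range(-half, half + 1)) / width
            for i in range(n)]

def mean_dissipation(field):
    """Mean of (d field / dx)^2 via central differences (periodic)."""
    n = len(field)
    grads = [(field[(i + 1) % n] - field[(i - 1) % n]) / (2 * dx)
             for i in range(n)]
    return sum(g * g for g in grads) / n

# Filter width chosen near one wavelength of the k = 32 mode, so the
# filter strongly damps it while leaving the large-scale wave intact.
phi_bar = box_filter(phi, 33)
```

The dissipation of the filtered field is dominated by the resolved large scale, which is precisely why a closure built on filtered quantities behaves differently from one built on the unfiltered field.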

In summary, the promising performance of the present filtered flamelet model can be attributed to the new idea of constructing the model from the filtered flamelet equation. Further improvements and additional test cases will be pursued in future work.

Credit: 
Science China Press

Neurobiological mechanisms involved in the loss of control in a study in mice revealed

image: From left to right: Rafael Maldonado, Elena Martín-García and Laura Domingo.

Image: 
UPF.

Researchers at UPF, in collaboration with researchers from the University of Mainz (Germany), the Center for Genomic Regulation, Instituto Cajal, Johannes Gutenberg University (Germany), the Autonomous University of Barcelona and Hospital del Mar, have identified for the first time the involvement of certain cortical areas of the brain in the loss of control over food intake. In the study, conducted in rodents and published today in Nature Communications, they discovered a specific mechanism in this cortical circuit crucial for food addiction that involves a loss of control over intake. The study was led by the scientists Rafael Maldonado, Elena Martín-García and Beat Lutz.

This addiction is related to a loss of control over food intake that is associated with obesity and eating disorders, whose prevalence is increasing worldwide. Loss of control over food intake has a major socioeconomic impact; there are no effective treatments, and it shares common neurobiological mechanisms with drug addiction. Both brain disorders are chronic, multifactorial and complex, and result from the interaction of multiple genes and environmental factors.

In this new study, the researchers have identified the neurobiological mechanisms that allow the development of addictive behaviour to food. To do so, they used a rodent model that mimics the behavioural abnormalities associated with this addiction in humans and leads to a loss of control over food intake: high motivation and impulsiveness for food, and the compulsive search for food, despite the negative effects of such behaviour. They used innovative tools to characterize the features of resilience and vulnerability to the disorder at genetic, cellular and behavioural level.

Another finding of the study is the role of the D2 dopamine receptor at cortical level in food addiction. This receptor had previously been implicated in drug addiction due to its action in subcortical areas and the limbic system in particular. This study identifies for the first time how food addiction produces an overexpression of the gene of the dopamine D2 receptor in the prefrontal cortex and this overexpression is directly involved in the loss of control over food intake.

"The identification of a specific cortical area in the loss of control over food intake may be of interest for the prevention and treatment of this disorder. Cortical areas are the brain structures of the highest hierarchical order to control behaviour and thus represent brain areas of great interest for treatment", proposes Rafael Maldonado, director of the Neuropharmacology Laboratory - Neurophar at the Department of Experimental and Health Sciences (DCEXS) of UPF.

They demonstrated that the activation of the circuit gives better control over reinforcement, while a decrease in the activity of the circuit leads to a loss of inhibitory control and greater susceptibility of the animal to developing addictive behaviour. "Therefore, we suggest that a possible therapeutic target for this disease could be the stimulation of this brain circuit, for which fairly precise techniques are now available," he adds. This article provides more scientific evidence for the debate on the existence of food addiction. "There is some controversy at present as to how to classify this important behavioural disorder, and our findings strengthen the idea that this addiction exists and shares common features with drug addiction".

Cortical control in decision-making

One of the neurobiological mechanisms they characterized was the circuit that runs from the prefrontal cortex to the nucleus accumbens, i.e., from cortical areas to areas of the limbic system related to reward and pleasure. "We noted that addicted animals show a decrease in activity in this specific circuit, whereas the circuit of resilient animals is more active," explains Elena Martín-García.

The mechanisms related to addictions that have been most studied in the past are those related to the limbic system, more primitive circuits related to the reward system. Food consumption causes an increase in dopamine in the nucleus accumbens, which provides pleasure. "However, in this study, we have focused on the less studied part, which is decision-making at a higher level, i.e., how this system is controlled by the cortical areas", says Laura Domingo, first author of the article.

Credit: 
Universitat Pompeu Fabra - Barcelona

Caught soap-handed: Understanding how soap molecules help proteins get in and out of shape

image: Results published by AU researchers reveal that surfactant-mediated unfolding and refolding of proteins are complex processes with several structures present, and rearrangements occur on time scales from sub-milliseconds to minutes. (Reproduced with permission from the Royal Society of Chemistry).

Image: 
Chem Sci, copyright 2020 Royal Society of Chemistry.

Understanding the interactions between proteins and soap molecules (surfactants) has long been important for the industry, particularly within detergents and cosmetics. The anionic surfactant sodium dodecyl sulfate (SDS) is known to unfold globular proteins, while the nonionic surfactant octaethylene glycol monododecyl ether (C12E8) does the opposite, i.e. it helps proteins fold into shape again.

For washing powders to work efficiently, it is important that the surfactants do not change the structure of the proteins (enzymes), as any change in enzyme structure destroys their ability to break down stains and remove dirt. Most washing powders therefore contain mixtures of surfactants that allow the enzymes to remain active. Some biotechnologies also exploit surfactants in combination with proteins.

Membrane proteins usually sit in the cell membrane. To extract them from this environment for study, they have to be solubilized by a surfactant. The surfactant has to be 'gentle' and cover only the membrane-inserted parts of the proteins so that their structure is preserved. In contrast, when characterizing the molecular weight of proteins in the lab, a standard technique is to unfold them with the aggressive, negatively charged surfactant SDS and monitor how they migrate through a polymer gel in an electric field. This technique only works if the surfactant completely unfolds the proteins and destroys their structure.

There is still debate about which type of interaction between the protein and the surfactant is most important. Is it the electrostatic interactions between the charges of the surfactant and the protein, or is it simply the properties of the interface of the aggregates (micelles) that the surfactants form in water that are responsible for unfolding the protein?

While unfolding has been studied in detail at the protein level, a complete picture of the interaction between protein and surfactant is lacking in these processes. This lack of knowledge is addressed in the current work using the globular protein β-lactoglobulin (bLG) as a model protein.

The right combination of experimental techniques

Deeper insight into the unfolding and refolding of proteins was obtained by mapping the various steps of the surfactant-protein interaction as a function of time. First, the model protein bLG was mixed with the anionic surfactant SDS while the formation of protein-surfactant complexes was followed on the millisecond-to-minute time scale, allowing the researchers to determine the structure of the evolving complexes. They then mapped the time course of the refolding process when the non-charged surfactant C12E8 was added to a sample containing complexes of SDS and protein.

In order to observe how the protein rearranges during the unfolding and refolding process induced by surfactants, complementary spectroscopic techniques, Circular Dichroism and tryptophan fluorescence, were used in combination with time-resolved Small-angle X-ray scattering (SAXS).

Circular Dichroism and tryptophan fluorescence monitor changes in the structure of bLG, while changes in the overall shape of the protein-surfactant complexes were followed by synchrotron SAXS. This combination of techniques has not been used before to study these processes.

Complex processes lasting milliseconds to minutes

The unfolding of the protein by SDS was a homogeneous process, in which all protein molecules followed the same unfolding route. The SDS complexes (micelles) attack the protein molecules head-on and then gradually unfold the protein so that it forms a shell around the SDS micelle. Refolding kicks off when C12E8 micelles "suck out" SDS from the protein-SDS complex to form mixed SDS-C12E8 micelles. However, the actual refolding process seems to follow several routes, since multiple structures were found to form in parallel, namely protein-surfactant complexes (probably containing both SDS and C12E8), mixed micelles of SDS and C12E8, "naked" proteins unfolded like long polymeric chains, and properly folded proteins. The experiment allowed the interconversion between these species to be followed, so that it could be determined which of the processes are fast and which ones are slow. The folded protein could form both from the naked unfolded proteins (quickly) and from protein-surfactant complexes (more slowly). Thus, the best way in which surfactants can help a protein to fold is to basically get out of the way and let the protein find its own way back to the folded state.
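The fast and slow refolding routes described above can be caricatured as two parallel first-order reactions. The rate constants and starting amounts below are invented for illustration and are not fitted to the study's data:

```python
# Minimal two-route refolding kinetics (hypothetical rate constants):
# naked unfolded protein U refolds quickly, while protein-surfactant
# complexes C release folded protein F slowly.
K_FAST = 2.0    # 1/s, U -> F
K_SLOW = 0.02   # 1/s, C -> F

def simulate(u0, c0, t_end, dt=1e-3):
    """Forward-Euler integration of the two parallel decay routes."""
    u, c, f = u0, c0, 0.0
    t = 0.0
    while t < t_end:
        du = -K_FAST * u * dt
        dc = -K_SLOW * c * dt
        f += -(du + dc)   # everything leaving U and C arrives in F
        u += du
        c += dc
        t += dt
    return u, c, f

# Half of the protein starts naked, half in complexes
u, c, f = simulate(u0=0.5, c0=0.5, t_end=10.0)
# After 10 s the fast route is exhausted (u ~ 0) while the slow route
# is still feeding folded protein out of the complexes (c > 0).
```

The separation of time scales in this toy model mirrors the experimental observation that folding from naked chains finishes in sub-seconds while folding out of complexes stretches into minutes.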

The results have provided deeper insight into the structural changes occurring at the protein-surfactant level. They revealed that surfactant-mediated unfolding and refolding of proteins are complex processes of rearrangements occurring on time scales from below milliseconds to minutes and involve intimate collaboration between surfactant complexes and proteins.

Credit: 
Aarhus University

Epigenetics: Inheritance of epigenetic markers

A study undertaken by an international team led by molecular biologist Axel Imhof of Ludwig-Maximilians-Universitaet (LMU) in Munich sheds new light on the mechanisms that control the establishment of epigenetic modifications on newly synthesized histones following cell division.

The classical genetic code is not the only code involved in the regulation of cell differentiation and behavior in multicellular organisms. The instructions encoded in the nucleotide sequence of the genomic DNA determine which sets of genes are expressed within a given cell type. Their selective expression thus defines the differences between a muscle cell and a nerve cell, for example. However, there is a second level of control that contributes to the regulation of patterns of gene expression. This is based on chemical modifications of DNA and of the histone proteins in which it is packed. This epigenetic code is now recognized as a vital part of the process responsible for the differentiation - and maintenance - of different cell types in higher organisms, although virtually all cells in an individual carry the same complement of genetic information. However, unlike the replication of the DNA sequence itself, the transmission of epigenetic information during cell division is not well understood. Now, a team led by Axel Imhof at LMU's Biomedical Center, in collaboration with research groups based at the Helmholtz Zentrum München and in Denmark, has used a combination of theoretical modeling and experimentation to elucidate the mechanisms that mediate the establishment of epigenetic marks following cell division. The findings, which appear in the journal Cell Reports, provide deeper insights into the inheritance of epigenetic histone modifications.

In higher organisms, most of the DNA in cells is found in a condensed form known as chromatin, in which the DNA is wrapped around particles made of proteins known as histones. In chromatin, the functional state of any given gene is largely dependent on exactly how it is packaged. More specifically, chemical modification of histones modulates the accessibility of the DNA in chromatin, and thus controls whether the proteins required for gene expression can actually bind to the DNA. In order to ensure the stable transmission to daughter cells of the gene expression patterns that define the identities of the different cell types, it is crucial that chromatin states are maintained during cell division.

In the new study, Imhof and his colleagues focused on two specific modifications of histone H3 - methylation of the lysines at positions 27 and 36 (K27me and K36me). The attachment of a methyl group (CH3) to the histone alters its binding affinity for regulatory proteins and changes the degree of chromatin compaction. K27me is usually found on H3 in regions where genes are inactive, while K36me serves as a marker for active genes.

The crucial question addressed in the study was: What happens to these modifications during the course of cell division? Cell division is preceded by DNA replication, which doubles the amount of DNA that has to be packed - and thus requires the synthesis of new histones. However, freshly synthesized histones carry no epigenetic modifications. How then do cells ensure that the new histones acquire the correct pattern of modifications within the newly formed chromatin?

The problem is a tricky one, and the experimental approach adopted to solve it was technically challenging. The team first labelled newly synthesized histones with (non-radioactive) heavy isotopes. The new (heavy) histones could therefore be distinguished from the old (light) histones by means of high-resolution mass spectrometry. They then followed the fate of these two 'generations' of histones in the daughter cells after cell division.

The patterns of modification that they observed were extremely complex. In order to make sense of them, they devised two models for the inheritance of epigenetic histone modifications and used a computer-based procedure to compare the theoretical modification patterns with the dynamic changes detected in their labeling experiments. In theory, each of the lysines at positions 27 and 36 in histone H3 can be modified with one, two or three methyl groups. This meant that 16 possible isoforms had to be taken into consideration. "Based on our modeling studies, we were able to demonstrate that the methylation patterns of the two functionally antagonistic residues K27me and K36me in cells reciprocally influence each other," says Axel Imhof. "The patterns that we actually observed can best be accounted for by the assumption that certain regions of the genome - which we refer to as domains - exhibit defined patterns of methylation." A further surprising finding was that, in rapidly dividing embryonal stem cells, the levels of demethylation observed during cell division were insignificant. The team now plans to investigate in greater detail what precisely is happening in these cells.
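The count of 16 isoforms follows directly from each of the two lysines carrying zero to three methyl groups; a short enumeration makes this explicit:

```python
from itertools import product

# Each of K27 and K36 can carry 0, 1, 2 or 3 methyl groups
isoforms = list(product(range(4), repeat=2))   # 4 states x 4 states = 16

# Conventional shorthand for each combination, e.g. "K27me2-K36me0"
labels = [f"K27me{k27}-K36me{k36}" for k27, k36 in isoforms]
```

Each label corresponds to one of the 16 modification states the modeling had to track when comparing the theoretical patterns with the mass-spectrometry data.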

In the longer term, the researchers hope that their work will allow them to swiftly identify pathological alterations in the epigenetic states of cells. It is known that tumor cells often contain mutant forms of the enzymes responsible for the de novo modifications that occur during cell division, and this seems to be associated with the increased proliferation rates seen in such cells. "Consequently, a lot of work is currently going into the development of 'epidrugs' that could modulate the activity of these enzymes," says Imhof.

Credit: 
Ludwig-Maximilians-Universität München