Specific bacterium in the gut linked to irritable bowel syndrome (IBS)

image: Karolina Sjöberg Jabbar

Image: 
Elin Lindström

Researchers at the University of Gothenburg have detected a connection between Brachyspira, a genus of bacteria in the intestines, and IBS -- especially the form that causes diarrhea. Although the discovery needs confirmation in larger studies, there is hope that it might lead to new remedies for many people with irritable bowel syndrome.

The pathogenic bacterial genus Brachyspira is not normally present in human gut flora. A new study links the bacterium to IBS, particularly the form with diarrhea, and shows that it hides under the mucus layer that protects the intestinal surface from fecal bacteria.

Attached to intestinal cells

To detect Brachyspira, analyses of fecal samples -- which are routinely used for studying the gut flora -- were insufficient. Instead, the scientists analyzed bacterial proteins in mucus from biopsies taken from the intestine.

"Unlike most other gut bacteria, Brachyspira is in direct contact with the cells and covers their surface. I was immensely surprised when we kept finding Brachyspira in more and more IBS patients, but not in healthy individuals," says Karolina Sjöberg Jabbar, who gained her doctorate at Sahlgrenska Academy, University of Gothenburg, and is the first author of the article.

Results inspire hope

Globally, between 5 and 10 percent of the adult population have symptoms compatible with IBS (irritable bowel syndrome). The condition causes abdominal pain and diarrhea, constipation, or alternating bouts of diarrhea and constipation. People with mild forms of IBS can often live a fairly normal life, but more pronounced symptoms can mean a severe deterioration in quality of life.

"Many questions remain to be answered, but we are hopeful that we might have found a treatable cause of IBS in at least some patients," says Karolina Sjöberg Jabbar.

Bacterium found in 19 out of 62

The study was based on colonic tissue samples (biopsies) from 62 patients with IBS and 31 healthy volunteers (controls). Nineteen of the 62 IBS patients (31 percent) proved to have Brachyspira in their gut, but the bacterium was not found in any samples from the healthy volunteers. Brachyspira was particularly common in IBS patients with diarrhea.

"The study suggests that the bacterium may be found in about a third of individuals with IBS. We want to see whether this can be confirmed in a larger study, and we're also going to investigate whether, and how, Brachyspira causes symptoms in IBS. Our findings may open up completely new opportunities for treating and perhaps even curing some IBS patients, especially those who have diarrhea," says Magnus Simrén, Professor of Gastroenterology at Sahlgrenska Academy, University of Gothenburg, and Senior Consultant at Sahlgrenska University Hospital.

Several possible therapies

In a pilot study that involved treating IBS patients with Brachyspira with antibiotics, the researchers did not succeed in eradicating the bacterium.

"Brachyspira seemed to be taking refuge inside the intestinal goblet cells, which secrete mucus. This appears to be a previously unknown way for bacteria to survive antibiotics, which could hopefully improve our understanding of other infections that are difficult to treat," Sjöberg Jabbar says.

However, if the association between Brachyspira and IBS symptoms can be confirmed in more extensive studies, other antibiotic regimens, as well as probiotics, may become possible treatments in the future. Since the study shows that patients with the bacterium have a gut inflammation resembling an allergic reaction, allergy medications or dietary changes may be other potential treatment options. The researchers at the University of Gothenburg plan to investigate this in further studies.

"This is another good example of the importance of free, independent basic research that, in cooperation with healthcare, results in unexpected and important discoveries that may be beneficial to many patients. All made without the primary purpose of the study being to look for Brachyspira," says Professor Gunnar C Hansson, who is a world leading authority in research on the protective mucus layer in the intestines.

The study is published in the journal Gut.

Credit: 
University of Gothenburg

Almost like on Venus

Four-and-a-half billion years ago, Earth would have been hard to recognise. Instead of the forests, mountains and oceans that we know today, the surface of our planet was covered entirely by magma - the molten rocky material that emerges when volcanoes erupt. This much the scientific community agrees on. What is less clear is what the atmosphere at the time was like. New international research efforts led by Paolo Sossi, senior research fellow at ETH Zurich and the NCCR PlanetS, attempt to lift some of the mysteries of Earth's primeval atmosphere. The findings were published today in the journal Science Advances.

Making magma in the laboratory

"Four-and-a-half billion years ago, the magma constantly exchanged gases with the overlying atmosphere," Sossi begins to explain. "The air and the magma influenced each other. So, you can learn about one from the other."

To learn about Earth's primeval atmosphere, which was very different from what it is today, the researchers therefore created their own magma in the laboratory. They did so by mixing a powder that matched the composition of Earth's molten mantle and heating it. What sounds straightforward required the latest technological advances, as Sossi points out: "The composition of our mantle-like powder made it difficult to melt - we needed very high temperatures of around 2,000° Celsius."

That required a special furnace, which was heated by a laser and within which the researchers could levitate the magma by letting streams of gas mixtures flow around it. These gas mixtures were plausible candidates for the primeval atmosphere that, 4.5 billion years ago, influenced the magma. Thus, with each mixture of gases that flowed around the sample, the magma turned out a little different.

"The key difference we looked for was how oxidised the iron within the magma became," Sossi explains. In less accurate words: how rusty. When iron meets oxygen, it oxidises and turns into what we commonly refer to as rust. Thus, when the gas mixture that the scientists blew over their magma contained a lot of oxygen, the iron within the magma became more oxidised.

This level of iron oxidation in the cooled-down magma gave Sossi and his colleagues something that they could compare to naturally occurring rocks that make up Earth's mantle today - so-called peridotites. The iron oxidation in these rocks still has the influence of the primeval atmosphere imprinted within it. Comparing the natural peridotites and the ones from the lab therefore gave the scientists clues about which of their gas mixtures came closest to Earth's primeval atmosphere.

A new view of the emergence of life

"What we found was that, after cooling down from the magma state, the young Earth had an atmosphere that was slightly oxidising, with carbon dioxide as its main constituent, as well as nitrogen and some water," Sossi reports. The surface pressure was also much higher, almost one hundred times that of today and the atmosphere was much higher, due to the hot surface. These characteristics made it more similar to the atmosphere of today's Venus than to that of today's Earth.

This result leads to two main conclusions, according to Sossi and his colleagues. The first is that Earth and Venus started out with quite similar atmospheres, but the latter subsequently lost its water due to its closer proximity to the sun and the associated higher temperatures. Earth, however, kept its water, primarily in the form of oceans. These absorbed much of the CO2 from the air, thereby reducing CO2 levels significantly.

The second conclusion is that a popular theory on the emergence of life on Earth now seems much less likely. That scenario, tested in the famous Miller-Urey experiment, has lightning strikes interacting with certain gases (notably ammonia and methane) to create amino acids - the building blocks of life. In the slightly oxidising, CO2-dominated atmosphere the researchers reconstructed, it would have been difficult to realise: the necessary gases were simply not sufficiently abundant.

Credit: 
ETH Zurich

Neutrinos yield first experimental evidence of catalyzed fusion dominant in many stars

image: The Borexino detector lies deep under the Apennine Mountains in central Italy at the INFN's Laboratori Nazionali del Gran Sasso. It detects neutrinos as flashes of light produced when neutrinos collide with electrons in 300 tons of ultra-pure organic scintillator.

Image: 
Borexino Collaboration

AMHERST, Mass. - An international team of about 100 scientists of the Borexino Collaboration, including particle physicist Andrea Pocar at the University of Massachusetts Amherst, report in Nature this week the detection of neutrinos from the sun, directly revealing for the first time that the carbon-nitrogen-oxygen (CNO) fusion cycle is at work in our sun.

The CNO cycle is the dominant energy source powering stars heavier than the sun, but it had so far never been directly detected in any star, Pocar explains.

For much of their lives, stars get energy by fusing hydrogen into helium, he adds. In stars like our sun or lighter, this mostly happens through the 'proton-proton' chain. However, many stars are heavier and hotter than our sun, and include elements heavier than helium in their composition, a quality known as metallicity. Since the 1930s, theory has predicted that the CNO cycle is dominant in such heavy stars.

Neutrinos emitted as part of these processes carry a spectral signature that allows scientists to distinguish those from the 'proton-proton' chain from those from the 'CNO cycle.' Pocar points out, "Confirmation of CNO burning in our sun, where it operates at only one percent, reinforces our confidence that we understand how stars work."

Beyond this, CNO neutrinos can help resolve an important open question in stellar physics, he adds: how the sun's central metallicity, which can only be determined by the CNO neutrino rate from the core, is related to metallicity elsewhere in the star. Traditional models have run into a difficulty - surface metallicity measured by spectroscopy does not agree with the sub-surface metallicity inferred from a different method, helioseismology observations.

Pocar says neutrinos are really the only direct probe science has for the core of stars, including the sun, but they are exceedingly difficult to measure. As many as 420 billion of them hit every square inch of the earth's surface per second, yet virtually all pass through without interacting. Scientists can only detect them using very large detectors with exceptionally low background radiation levels.
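
As a quick unit check on that figure (our arithmetic): with about 6.45 square centimeters per square inch,

$$ \frac{420 \times 10^{9}\ \text{neutrinos}\ \mathrm{in}^{-2}\,\mathrm{s}^{-1}}{6.45\ \mathrm{cm}^{2}/\mathrm{in}^{2}} \approx 6.5 \times 10^{10}\ \text{neutrinos}\ \mathrm{cm}^{-2}\,\mathrm{s}^{-1}, $$

which is consistent with the commonly quoted total solar neutrino flux at Earth of roughly 6 x 10^10 per square centimeter per second.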

The Borexino detector lies deep under the Apennine Mountains in central Italy at the INFN's Laboratori Nazionali del Gran Sasso. It detects neutrinos as flashes of light produced when neutrinos collide with electrons in 300 tons of ultra-pure organic scintillator. Its great depth, size and purity make Borexino a unique detector for this type of science, alone in its class for low-background radiation, Pocar says. The project was initiated in the early 1990s by a group of physicists led by Gianpaolo Bellini at the University of Milan, Frank Calaprice at Princeton and the late Raju Raghavan at Bell Labs.

Until its latest detections, the Borexino collaboration had successfully measured components of the 'proton-proton' solar neutrino fluxes, helped refine neutrino flavor-oscillation parameters and, most impressively, even measured the first step in the chain: the very low-energy 'pp' neutrinos, Pocar recalls.

Its researchers dreamed of expanding the science scope to also look for the CNO neutrinos - in a narrow spectral region with particularly low background - but that prize seemed out of reach. However, research groups at Princeton, Virginia Tech and UMass Amherst believed CNO neutrinos might yet be revealed using the additional purification steps and methods they had developed to realize the exquisite detector stability required.

Over the years, and thanks to a sequence of moves to identify and stabilize the backgrounds, the U.S. scientists and the entire collaboration were successful. "Beyond revealing the CNO neutrinos, which is the subject of this week's Nature article, there is now even a potential to help resolve the metallicity problem as well," Pocar says.

Before the CNO neutrino discovery, the lab had scheduled Borexino to end operations at the close of 2020. But because the data set used in the analysis for the Nature paper was frozen at an earlier date, scientists have continued collecting data, as the central purity has continued to improve, making a new result focused on the metallicity a real possibility, Pocar says. Data collection could extend into 2021, since the logistics and permitting required to wind down the experiment, while underway, are non-trivial and time-consuming. "Every extra day helps," he remarks.

Pocar has been with the project since his graduate school days at Princeton in the group led by Frank Calaprice, where he worked on the design and construction of the nylon vessel and the commissioning of the fluid-handling system. He later worked with his students at UMass Amherst on data analysis and, most recently, on techniques to characterize the backgrounds for the CNO neutrino measurement.

Credit: 
University of Massachusetts Amherst

Cooking with wood may cause lung damage

image: Kitchen location showing the cooking stove and wood fuels used; there is a lack of sufficient ventilation to clear the smoke by-products from biomass fuel use.

Image: 
Radiological Society of North America

OAK BROOK, Ill. - Advanced imaging with CT shows that people who cook with biomass fuels like wood are at risk of suffering considerable damage to their lungs from breathing in dangerous concentrations of pollutants and bacterial toxins, according to a study being presented at the annual meeting of the Radiological Society of North America (RSNA).

Approximately 3 billion people around the world cook with biomass, such as wood or dried brush. Pollutants from cooking with biomass are a major contributor to the estimated 4 million deaths a year from household air pollution-related illness.

While public health initiatives have tried to support the transition from biomass fuels to cleaner-burning liquefied petroleum gas, a significant number of homes continue to use biomass fuels. Financial constraints and a reluctance to change established habits are factors, combined with a lack of information on the impact of biomass smoke on lung health.

"It is important to detect, understand and reverse the early alterations that develop in response to chronic exposures to biomass fuel emissions," said study co-author Abhilash Kizhakke Puliyakote, Ph.D., a postdoctoral researcher from the University of California San Diego School of Medicine.

A multidisciplinary team led by Eric A. Hoffman, Ph.D., at the University of Iowa, in collaboration with researchers from Periyar Maniammai Institute of Science and Technology, investigated the impact of cookstove pollutants in 23 people cooking with liquefied petroleum gas or wood biomass in Thanjavur, India.

The researchers measured the concentrations of pollutants in the homes and then studied the lung function of the individuals, using traditional tests such as spirometry. They also used advanced CT scanning to make quantitative measurements--for instance, they acquired one scan when the person inhaled and another after they exhaled and measured the difference between the images to see how the lungs were functioning.
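
To make the paired-scan idea concrete, here is a minimal sketch (ours, not the study's actual pipeline) of how such an analysis can be quantified. The -856 HU air-trapping threshold is a value used in the quantitative-CT literature and is assumed here; all names and numbers are illustrative.

```python
import numpy as np

def quantitative_ct(insp_hu, exp_hu, lung_mask, trap_threshold=-856.0):
    """Compare co-registered inspiratory/expiratory CT volumes (in
    Hounsfield units). Returns a crude ventilation proxy (mean HU change
    on exhale) and the fraction of lung voxels that stay nearly
    air-filled at expiration, i.e. air trapping."""
    lung = lung_mask.astype(bool)
    # Well-ventilated lung densifies on exhale, so its HU value rises;
    # trapped regions barely change between the two scans.
    mean_delta = float((exp_hu[lung] - insp_hu[lung]).mean())
    trapped_fraction = float((exp_hu[lung] < trap_threshold).mean())
    return mean_delta, trapped_fraction

# Toy "volumes": three lung voxels, one of which fails to empty on exhale.
insp = np.array([-900.0, -890.0, -880.0])
expi = np.array([-700.0, -650.0, -870.0])  # third voxel stays air-filled
mask = np.ones(3)

delta, trapped = quantitative_ct(insp, expi, mask)
print(f"mean HU change on exhale: {delta:+.0f}")  # +150
print(f"air-trapped fraction: {trapped:.0%}")     # 33%
```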

Analysis showed that those who cooked with wood biomass were exposed to greater concentrations of pollutants and bacterial endotoxins than liquefied petroleum gas users. They also had a significantly higher level of air trapping in their lungs, a condition associated with lung diseases.

"Air trapping happens when a part of the lung is unable to efficiently exchange air with the environment, so the next time you breathe in, you're not getting enough oxygen into that region and eliminating carbon dioxide," Dr. Kizhakke Puliyakote said. "That part of the lung has impaired gas exchange."

The researchers found a smaller subset of the biomass users who had very high levels of air trapping and abnormal tissue mechanics, even when compared to other biomass users. In about one-third of the group, more than 50% of the air they inhaled ended up trapped in their lungs.

"This increased sensitivity in a subgroup is also seen in other studies on tobacco smokers, and there may be a genetic basis that predisposes some individuals to be more susceptible to their environment," Dr. Kizhakke Puliyakote said.

CT added important information about smoke's effects on the lungs, effects that were underestimated by conventional tests.

"The extent of damage from biomass fuels is not really well captured by traditional tests," Dr. Kizhakke Puliyakote said. "You need more advanced, sensitive techniques like CT imaging. The key advantage to using imaging is that it's so sensitive that you can detect subtle, regional changes before they progress to full blown disease, and you can follow disease progression over short periods of time."

The lack of emphysema in the study group suggests that exposure to biomass smoke is affecting the small airways in the lungs, Dr. Kizhakke Puliyakote said, although more research is needed to understand the disease process. Regardless, the study results underscore the importance of minimizing exposure to smoke. Even in the absence of overt symptoms or breathing difficulties, the lung may have injury and inflammation that can go undetected and potentially unresolved in some people.

"For people exposed to biomass smoke for any extended duration, it is critical to have a complete assessment of lung function by health care professionals to ensure that any potential injury can be resolved with appropriate interventions," Dr. Kizhakke Puliyakote said.

While the study focused on cooking with biomass, the findings have important implications for exposure to biomass smoke from other sources, including wildfires.

"In conjunction with the increasing prevalence of biomass smoke due to wildfires in the U.S., this study can provide valuable insights regarding similar study designs serving to understand what is certain to be a growing assault on lung health."

Credit: 
Radiological Society of North America

Patterning method could pave the way for new fiber-based devices, smart textiles

Multimaterial fibers that integrate metal, glass and semiconductors could be useful for applications such as biomedicine, smart textiles and robotics. But because the fibers are composed of the same materials along their lengths, it is difficult to position functional elements, such as electrodes or sensors, at specific locations. Now, researchers reporting in ACS Central Science have developed a method to pattern hundreds-of-meters-long multimaterial fibers with embedded functional elements.

Youngbin Lee, Polina Anikeeva and colleagues developed a thiol-epoxy/thiol-ene polymer that could be combined with other materials, heated and drawn from a macroscale preform into fibers coated with the polymer. When exposed to ultraviolet light, the photosensitive polymer crosslinked into a network that was insoluble in common solvents, such as acetone. By placing "masks" at specific locations along the fiber in a process known as photolithography, the researchers could protect the underlying areas from UV light. They then removed the masks and treated the fiber with acetone, dissolving the polymer in the areas that had been covered and exposing the underlying materials.

As a proof of concept, the researchers made patterns along fibers that exposed an electrically conducting filament beneath the thiol-epoxy/thiol-ene coating, with the remaining polymer acting as an insulator along the length of the fiber. In this way, electrodes or other microdevices could be placed in customizable patterns along multimaterial fibers, the researchers say.

Credit: 
American Chemical Society

Survival protein may prevent collateral damage during cancer therapy

image: The cell survival protein BCL-XL may protect kidneys from damage caused by cancer therapies.

Image: 
WEHI, Australia

Australian researchers have identified a protein that could protect the kidneys from 'bystander' damage caused by cancer therapies.

The 'cell survival protein', called BCL-XL, was required in laboratory models to keep kidney cells alive and functioning during exposure to chemotherapy or radiotherapy. Kidney damage is a common side effect of these widely used cancer therapies, and the discovery has shed light on how this damage occurs at the molecular level.

Inhibiting BCL-XL has been proposed as a potential new cancer therapy, and this research also revealed that in contrast to genetic deletion of BCL-XL, BCL-XL-inhibitory agents can be safely used in laboratory models alone or in combination with other cancer therapies. The research was led by WEHI researchers Dr Kerstin Brinkmann, Dr Stephanie Grabow and Professor Andreas Strasser, and published today in the EMBO Journal.

At a glance

BCL-XL is a 'survival' protein that keeps cells alive and has also been identified as a promising target for anti-cancer agents.

Our researchers have discovered that BCL-XL protects the kidneys from 'collateral' damage during cancer therapy.

The research also revealed that a research compound that inhibits BCL-XL could be safely used in laboratory models, both alone or in combination with other cancer therapies.

New strategy to investigate BCL-XL

BCL-XL is a widespread 'survival' protein, found in many different cell types - but also at high levels in a range of cancers. More than a decade ago, researchers at WEHI and overseas identified BCL-XL as a vital survival factor in oxygen-carrying red blood cells and in platelets, which are critical for blood clotting. However, the importance of BCL-XL in other cells of adults had not been investigated, said Dr Grabow.

"To address this, we developed a new laboratory model in which the BCL-XL protein was permanently removed from all cells other than blood cells," Dr Grabow said.

Using this strategy, the team explored whether BCL-XL helped cells to withstand a particularly stressful event - exposure to chemotherapy or radiotherapy, Dr Brinkmann said.

"We discovered that without BCL-XL, kidney cells were highly susceptible to damage by both chemotherapy and radiotherapy. Healthy kidneys remove waste from our body, creating urine, but also maintain healthy numbers of red blood cells by releasing a hormone called erythropoietin (EPO). Without BCL-XL, the kidneys could not perform either of these vital functions," Dr Brinkmann said.

"Kidney damage is a common side effect of anti-cancer therapies. Our discovery is the first to highlight the role of BCL-XL in protecting kidneys from this damage and may lead to better approaches to reduce this side effect for people undergoing cancer treatment," she said.

Targeting BCL-XL to treat cancer

Cancer cells are kept alive in our bodies by high levels of survival proteins, such as BCL-XL. New BH3-mimetic drugs that inhibit specific survival proteins have shown great promise in clinical trials. Because BCL-XL is found at high levels in many cancer cells, there has been considerable interest in this protein as a potential target for new anti-cancer agents, Professor Strasser said.

"Unfortunately, early studies showed that administering a BCL-XL inhibitory drug caused a loss of platelets, a serious side effect. To avoid this, the drug could only be administered at levels that, on their own, are not sufficient to efficiently kill cancer cells," he said.

The team investigated whether short-term inhibition of BCL-XL could be safely combined with other anti-cancer agents - in the hope that inhibition of BCL-XL may make cancer cells more susceptible to chemotherapy or radiotherapy.

"Because we had seen that permanently removing BCL-XL made kidney cells vulnerable to damage, we predicted that this would also occur if BCL-XL were only inhibited for a short period in a laboratory model," Professor Strasser said.

"We were thrilled to discover that a research compound that inhibits BCL-XL could be administered alone, or at a low dose even in combination with common chemotherapy drugs or radiation therapy without any evidence of kidney damage or other unwanted side effects.

"This suggests a potentially safe way to use candidate drugs that inhibit BCL-XL to treat cancer in clinical trials, even in combination with standard cancer therapies," Professor Strasser said.

Credit: 
Walter and Eliza Hall Institute

Air-sea coupling improves the simulation of the western North Pacific summer monsoon in the WRF4 model at a synoptic-scale-resolving resolution

Regional air-sea coupling plays a crucial role in modulating the climatology and variability of the Asian summer monsoon. The Weather Research and Forecasting (WRF) model, which is a community regional climate model, has been widely used for regional climate studies over Asia.

"Version 4 of the WRF model, namely WRF4, has just been released, and so a comparison of ocean-atmosphere coupled versus atmosphere-only WRF4 models over the WNP is a necessary but as yet unreported line of investigation," explains Dr. Liwei Zou from the Institute of Atmospheric Physics, Chinese Academy of Sciences, and author of a paper on this topic recently published in Atmospheric and Oceanic Science Letters.

Zou and his colleagues developed a new regional ocean-atmosphere coupled model based on WRF4 and the high-resolution regional version of LICOM (the LASG/IAP Climate Ocean Model) to investigate the impacts of regional air-sea coupling on the simulation of the western North Pacific summer monsoon. The resolution is set to 15 km in the atmospheric model component and 10 km in the oceanic component, fine enough to resolve synoptic weather systems and ocean mesoscale eddies, respectively.

"Our model results indicate that WRF4-LICOM improves the simulation of the summer mean monsoon rainfall, circulations, sea surface net heat fluxes, and propagations of the daily rainband over the WNP", states Dr. Zou.

Over the WNP, the observed local daily SST largely responds to the overlying summer monsoon. In the atmosphere-only WRF4 model, by contrast, the modeled atmosphere exhibits a passive response to the prescribed underlying daily SST anomalies. With the inclusion of regional air-sea coupling, the simulated daily SST-rainfall relationship is significantly improved.
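
As an illustration of the kind of diagnostic involved (our sketch, not the paper's analysis), a lead-lag correlation between daily rainfall and SST anomalies distinguishes a coupled relationship, in which rainfall leads the SST response, from a passive one:

```python
import numpy as np

def lag_correlation(sst, rain, max_lag=10):
    """corr(rain(t), sst(t + lag)) for each lag in days;
    a peak at positive lag means rainfall leads the SST response."""
    sst = (sst - sst.mean()) / sst.std()
    rain = (rain - rain.mean()) / rain.std()
    n = len(sst)
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = rain[:n - lag], sst[lag:]
        else:
            x, y = rain[-lag:], sst[:n + lag]
        corrs[lag] = float(np.corrcoef(x, y)[0, 1])
    return corrs

# Toy series: SST anomalies cool about 3 days after a rainfall anomaly.
rng = np.random.default_rng(0)
rain = rng.standard_normal(120)
sst = -0.6 * np.roll(rain, 3) + 0.4 * rng.standard_normal(120)

best = max(lag_correlation(sst, rain).items(), key=lambda kv: abs(kv[1]))
print(f"strongest correlation at lag {best[0]:+d} days: r = {best[1]:.2f}")
```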

"We recommend using the regional coupled model WRF4-LICOM for future dynamical downscaling of simulations and projections over this region," concludes Dr. Zou.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Psychological factors contributing to language learning

Language acquisition is one of the more complex topics in psychology. Teacher education experts are constantly seeking new ways to improve the efficiency of language learning.

Shishova comments, "In our previous publications we have tackled various factors influencing the success of language learning. Among them are broadly pedagogical, methodological, broadly psychological and individual psychological factors. The first two are external determinants of learning, and the latter two are internal."

The author conducted an empirical study on the structure of language aptitude and offered her view of language acquisition as a system of interrelated components.

"There are several factors which can be called key components of language acquisition," she says. "It's important to take account of a student's emotional and evaluative attitude towards language learning and their emotional experience in this process. We mustn't also forget about cognitive components, such as attention, perception, thinking, and memory. Effective language learning is determined by specifics of thought process and completeness of such qualities of this process as depth, flexibility, evidence-based nature, prospective thinking, analytical and conscientious nature. Personal traits, of course, are also important - self-esteem, success and failure experience, extraversion or introversion, anxiety levels, etc."

Thus, motivation for language learning is a system of cognitive, emotional, and personality-related characteristics.

Credit: 
Kazan Federal University

Towards 6G wireless communication networks: vision, enabling technologies, and new paradigm shifts

image: An overview of 6G wireless communication networks

Image: 
©Science China Press

Fifth generation (5G) wireless communication networks have been deployed worldwide since 2020, with more capabilities in the process of being standardized, such as massive connectivity, ultra-reliability, and guaranteed low latency. However, 5G will not meet all requirements of the future in 2030 and beyond, and sixth generation (6G) wireless communication networks are expected to provide global coverage, enhanced spectral/energy/cost efficiency, and better intelligence and security, among other capabilities. To meet these requirements, 6G networks will rely on new enabling technologies, including new air-interface and transmission technologies and novel network architectures, such as waveform design, multiple access, channel coding schemes, multi-antenna technologies, network slicing, cell-free architecture, and cloud/fog/edge computing.

A long-form review, titled "Towards 6G wireless communication networks: vision, enabling technologies, and new paradigm shifts", was published in SCIENCE CHINA Information Sciences (Vol. 64, No. 1). It is co-authored by Prof. Xiaohu YOU (first and corresponding author) and Prof. Chengxiang WANG (corresponding author) from Southeast University, China, along with 48 other experts and scholars from research institutes, universities, and companies in China and abroad.

In this article, the vision for 6G is that it will involve four new paradigm shifts. First, to satisfy the requirement of global coverage, 6G will not be limited to terrestrial communication networks; these will need to be complemented by non-terrestrial networks such as satellite and unmanned aerial vehicle (UAV) communication networks, achieving a space-air-ground-sea integrated communication network. Second, all spectra will be fully explored to further increase data rates and connection density, including the sub-6 GHz, millimeter wave (mmWave), terahertz (THz), and optical frequency bands. Third, facing the big data sets generated by extremely heterogeneous networks, diverse communication scenarios, large numbers of antennas, wide bandwidths, and new service requirements, 6G networks will enable a new range of smart applications with the aid of artificial intelligence (AI) and big data technologies. Fourth, network security will have to be strengthened when developing 6G networks.

This article provides a comprehensive survey of recent advances and future trends in these four aspects. Clearly, 6G with additional technical requirements beyond those of 5G will enable faster and further communications to the extent that the boundary between physical and cyber worlds disappears.

Credit: 
Science China Press

Ideal type-II Weyl points are observed in classical circuits

image: Experimental observation of ideal type-II Weyl points. (a) Circuit sample with periodic boundaries. (b)-(c) Band structures of the circuit with periodic boundaries. (d) Circuit sample with an open boundary. (e) Band structure of the circuit with an open boundary. (f) Field distribution of the surface state.

Image: 
©Science China Press

The elementary particles that build the universe come in two types: bosons and fermions, with fermions classified as Dirac, Weyl, or Majorana fermions. In recent years, Weyl fermions have been found in condensed-matter systems called Weyl semimetals as a kind of quasiparticle, where they manifest as Weyl points in the dispersion relations. In contrast to high-energy physics, where stringent Lorentz symmetry is required, condensed-matter systems host two types of Weyl points: type-I Weyl points with symmetric cone-like band structures and type-II Weyl points with strongly tilted band structures.
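
For orientation, a Weyl point is commonly described by a linearized two-band Hamiltonian with a tilt term (a textbook form, not this paper's specific circuit model):

$$ H(\mathbf{k}) = \hbar\,\mathbf{T}\cdot\mathbf{k}\,\sigma_{0} + \hbar v\,\mathbf{k}\cdot\boldsymbol{\sigma}, $$

where σ0 is the identity matrix, σ the Pauli matrices, v the Fermi velocity and T the tilt vector. For |T| < v the cone is upright (type-I); for |T| > v it tips over (type-II), so that electron and hole pockets touch at the Weyl point.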

Type-II Weyl points have been observed in condensed matter systems and in several artificial periodic structures, such as photonic and phononic crystals. However, these type-II Weyl points are not symmetry-related, and they have small separations and sit at different energies. It is therefore challenging to distinguish type-II Weyl points from other degenerate points and to observe related phenomena such as topological surface states.

Recently, Dr. Rujiang Li and Prof. Hongsheng Chen from Zhejiang University, Dr. Bo Lv and Prof. Jinhui Shi from Harbin Engineering University, Prof. Huibin Tao from Xi'an Jiaotong University, and Prof. Baile Zhang and Prof. Yidong Chong from Nanyang Technological University observed ideal type-II Weyl points in classical circuits by exploiting the high flexibility of circuit node connections. For a circuit structure with periodic boundaries in three dimensions (Fig. 1a), this Weyl system has only two bands. Owing to the protection afforded by mirror symmetries and time-reversal symmetry, the minimal number of four type-II Weyl points exists in momentum space, and these Weyl points reside at the same frequency. Experimentally, the team proved the existence of linear degenerate points and the strongly tilted band structure by reconstructing the band structures of the circuit system (Fig. 1b-c), implying that these four Weyl points are ideal type-II Weyl points. They also fabricated a circuit structure with an open boundary (Fig. 1d) and observed topological surface states within an incomplete bandgap (Fig. 1e-f). These observations further confirm the existence of ideal type-II Weyl points.

Circuit systems offer high flexibility and controllability. Compared with other experimental platforms, lattice sites in a circuit can be wired in an arbitrary manner, with arbitrary numbers of connections per node and long-range connections, and the hopping strengths are independent of the distance between nodes. Precisely because of this flexible, highly customizable connectivity and distance-independent hopping, a circuit lattice in which ideal type-II Weyl points can be observed is easily fabricated. This circuit platform can be used for further studies of Weyl physics and other topological phenomena.

Credit: 
Science China Press

New physical picture leads to a precise finite-size scaling of (3+1)-dimensional O(n) critical system

image: Evidence for the conjectured scaling form in the example of the critical 4D XY model. (a) Two-point correlation function. (b) Two-point correlation at the distance of half of the linear system size. (c) Magnetic susceptibility. (d) Magnetic fluctuations at non-zero Fourier modes.

Image: 
©Science China Press

Since the establishment of renormalization group theory, it has been known that systems exhibiting critical phenomena typically possess an upper critical dimension dc (dc = 4 for the O(n) model), such that in spatial dimensions at or above dc, the thermodynamic behavior is governed by critical exponents taking mean-field values. In contrast to the simplicity of the thermodynamic behavior, the theory of finite-size scaling (FSS) for the O(n) model in d > dc proved surprisingly subtle and remained the subject of ongoing debate until recently, when a two-length scaling ansatz for the two-point correlation function was conjectured, numerically confirmed, and partly supported by analytical calculations.

At the upper critical dimensionality dc, multiplicative and additive logarithmic corrections to the bare mean-field behavior generally occur. Clarifying these logarithmic corrections in FSS is notoriously hard, owing to the lack of analytical insight beyond the phenomenological level and the limited system sizes available in numerical simulations. The precise logarithmic FSS form at d = dc has remained a long-standing problem.

Recently, Jian-Ping Lv, Wanwan Xu, and Yanan Sun from Anhui Normal University, Kun Chen from Rutgers, the State University of New Jersey, and Youjin Deng from the University of Science and Technology of China and Minjiang University addressed the logarithmic FSS of the O(n) model at the upper critical dimensionality. Borrowing insights from higher dimensions, they established an explicit scaling form for the free energy density that simultaneously contains a scaling term for the Gaussian fixed point and a term with multiplicative logarithmic corrections. In particular, they conjectured that the finite-size critical two-point correlation exhibits a two-length behavior: it is governed by the Gaussian fixed point at shorter distances and enters a plateau at larger distances, whose height decreases with system size as a power law corrected by a logarithmic exponent.
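
Schematically (our simplified notation for the quantities described above), the conjectured two-length form of the critical two-point correlation at d = dc = 4 reads

$$ g(r, L) \sim \begin{cases} r^{-(d-2)}, & \text{short distances (Gaussian regime)},\\[2pt] L^{-d/2}\,(\ln L)^{\hat{y}}, & \text{plateau at } r = O(L), \end{cases} $$

where the power L^{-d/2} carries over from the two-length ansatz established for d > dc and ŷ denotes a logarithmic-correction exponent.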

On this basis, the FSS of various macroscopic quantities was predicted. The authors then carried out extensive Monte Carlo simulations for the n-vector model with n = 1, 2, 3, and obtained solid evidence supporting the conjectured scaling forms from the FSS of the susceptibility, the magnetic fluctuations at non-zero Fourier modes, the Binder cumulant, and the two-point correlation at a distance of half the linear system size. This is a significant step toward a complete solution of the logarithmic FSS at d = dc for systems with an upper critical dimensionality.

The study is not only of theoretical importance for model systems but also of practical relevance to a large number of experimental systems. Thanks to technological developments, experimental realizations of the O(n) model are now available in various physical systems, including quantum magnetic materials, Josephson junction arrays, and ultracold atomic systems. According to the quantum-to-classical mapping, three-dimensional quantum O(n) systems sit at the upper critical dimensionality.

Credit: 
Science China Press

Progress in electronic structure and topology in nickelate superconductors

image: (a) The band structure of LaNiO2 without SOC. The weights of different orbitals are represented by different colors. (b) The total number of electrons as a function of chemical potential. (c) The Fermi surface of LaNiO2. (d) The band structure with SOC in the shadowed area of (a), the crossings are gapped except the Dirac points along M-A.

Image: 
©Science China Press

The discovery of high-Tc superconductivity in cuprates has drawn researchers to explore superconductivity in nickelates, whose crystal structures are similar to those of cuprates. Recently, Danfeng Li et al. at Stanford University published an article in Nature reporting observed superconductivity in the hole-doped nickelate Nd0.8Sr0.2NiO2. Unlike the cuprates, the parent compound NdNiO2 does not exhibit the long-range magnetic order that was thought to be responsible for superconductivity in copper oxides; moreover, its ground state is metallic. A commentary in Nature said that Li's work "could become a game changer for our understanding of superconductivity in cuprates and cuprate-like systems, perhaps leading to new high-temperature superconductors."

To understand the mechanism of this nickelate superconductor, scientists at the Institute of Physics, Chinese Academy of Sciences carried out a careful analysis of the parent compound NdNiO2, covering its electronic band structure, orbital characteristics, Fermi surfaces and band topology, using first-principles calculations and the Gutzwiller variational method. The results show that the electron Fermi pockets are contributed by Ni-3dx2-y2 orbitals, while the hole pockets consist of Nd-5d3z2-r2 and Nd-5dxy orbitals (shown in Fig. 1). By analyzing elementary band representations within the theory of topological quantum chemistry, the authors found that a two-band model can be constructed to reproduce all bands around the Fermi level. The two bands originate from two orbitals: one Ni-3dx2-y2 orbital and one s-like pseudo-orbital located on the vacancy site of the oxygen atoms. In addition, the authors found that a band inversion occurs between the Ni-3dxy states and the conduction bands, resulting in a pair of Dirac points along M-A in the Brillouin zone.

To take the correlation effects of the Ni 3d electrons into consideration, the authors also performed DFT + Gutzwiller calculations. The renormalized band structure is given in Fig. 2. The results show that the half-occupied 3dx2-y2 orbital has the smallest quasiparticle weight (about 0.12); that is, the 3dx2-y2 band width after renormalization is about 1/8 of the DFT result. On the other hand, the Dirac points along the M-A high-symmetry line move closer to the Fermi level due to the band renormalization. In this work the authors calculated the electronic structure, discussed the topological properties and constructed a two-band model; these results will aid the study of topology and superconductivity in nickelates.
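
Schematically (generic notation, not the paper's fitted parameters), such a two-band effective model takes the form

$$ H(\mathbf{k}) = \begin{pmatrix} \varepsilon_{d}(\mathbf{k}) & V(\mathbf{k}) \\ V^{*}(\mathbf{k}) & \varepsilon_{s}(\mathbf{k}) \end{pmatrix}, $$

where ε_d(k) and ε_s(k) are the dispersions of the Ni-3dx2-y2 orbital and the s-like pseudo-orbital, and V(k) is their hybridization; in practice all three functions would be fitted to the first-principles bands.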

Credit: 
Science China Press

How an infectious tumor in Tasmanian devils evolved as it spread

image: A young Tasmanian devil. Tasmanian devils are endangered by devil facial tumour 1 (DFT1), a transmissible cancer.

Image: 
Maximilian Stammnitz

A transmissible cancer in the Tasmanian devil has evolved over the past two decades, with some lineages spreading and replacing others, according to a new study in the open access journal PLOS Biology by Young Mi Kwon, Kevin Gori, and Elizabeth Murchison of the University of Cambridge (UK) and colleagues. The evolutionary dynamics of the cancer help explain how this Australian marsupial has become so quickly endangered, and may shed light on the evolution of other forms of cancer.

The Tasmanian devil is a carnivorous marsupial, about the size of a small dog, that is found only in Tasmania, an island state off the southern coast of eastern Australia. Devil facial tumor 1 (DFT1) was first observed in the mid-1990s, and has since spread to devils across much of the island, transmitted from one animal to another through biting, a common social behavior. Remarkably, tumor cells transferred in this way, rather than being eliminated by the new host's immune system, survive and establish a new tumor. Infection is usually fatal.

To understand more about the spread of the disease, the authors analyzed the genomes of 648 DFT1 tumors collected between 2003 and 2018. They found that early on in the spread of the tumor, DFT1 split into five clades, or sublineages. Two of these died out, while three continued to spread. One, clade A, split yet again. The authors mapped the distribution of each clade, which revealed how diseased devils have spread the cancer through the environment; their findings support those from epidemiological research and highlight the importance of geography in influencing the movements of devils and their disease.

Effects of human attempts to prevent spread were also reflected in the data--a pilot program to remove infected animals was likely responsible for a series of sublineage replacements in one isolated region. The authors also identified multiple types of genomic instability in the DFT1 genome, including the duplication and loss of genes and the gain or loss of whole chromosomes; they additionally described the frequency of whole-genome duplication leading to tetraploid tumors. Nonetheless, the degree of genomic diversity within the devil tumor population was small compared to that often found even within a single human tumor, the authors noted.

Largely as a result of the spread of DFT1, and now exacerbated by the emergence of a second transmissible cancer, DFT2, the Tasmanian devil population has dropped precipitously, and the species is now endangered. "The results from this study may be useful for epidemiological modelling and prediction of management intervention benefit," Murchison said.

Credit: 
PLOS

Creating higher energy density lithium-ion batteries for renewable energy applications

image: In the Journal of Vacuum Science and Technology A, researchers investigate the origins of degradation in high energy density LIB cathode materials and develop strategies for mitigating those degradation mechanisms and improving LIB performance.

Figure 1: Scanning electron microscopy images of as-synthesized NCA at different magnifications. Figure 2: Transmission electron microscopy images showing the surface of the Gr-R-nNCA particles

Image: 
Jin-Myoung Lim and Norman S. Luu, Northwestern University

WASHINGTON, November 24, 2020 -- Lithium-ion batteries (LIBs) that function as high-performance power sources for renewable applications, such as electric vehicles and consumer electronics, require electrodes that deliver high energy density without compromising cell lifetimes.

In the Journal of Vacuum Science and Technology A, by AIP Publishing, researchers investigate the origins of degradation in high energy density LIB cathode materials and develop strategies for mitigating those degradation mechanisms and improving LIB performance.

Their research could be valuable for many emerging applications, particularly electric vehicles and grid-level energy storage for renewable energy sources, such as wind and solar.

"Most of the degradation mechanisms in LIBs occur at the electrode surfaces that are in contact with the electrolyte," said author Mark Hersam. "We sought to understand the chemistry at these surfaces and then develop strategies for minimizing degradation."

The researchers employed surface chemical characterization as a strategy for identifying and minimizing residual hydroxide and carbonate impurities from the synthesis of NCA (nickel, cobalt, aluminum) nanoparticles. They realized the LIB cathode surfaces first needed to be prepared by suitable annealing, a process by which the cathode nanoparticles are heated to remove surface impurities, and then locked into the desirable structures with an atomically thin graphene coating.

The graphene-coated NCA nanoparticles, which were formulated into LIB cathodes, showed superlative electrochemical properties, including low impedance, high rate performance, high volumetric energy and power densities, and long cycling lifetimes. The graphene coating also acted as a barrier between the electrode surface and the electrolyte, which further improved cell lifetime.

While the researchers had thought the graphene coating alone would be sufficient to improve performance, their results revealed the importance of pre-annealing the cathode materials in order to optimize their surface chemistry before the graphene coating was applied.

While this work focused on nickel-rich LIB cathodes, the methodology could be generalized to other energy storage electrodes, such as sodium-ion or magnesium-ion batteries, that incorporate nanostructured materials possessing high surface area. Consequently, this work establishes a clear path forward for the realization of high-performance, nanoparticle-based energy storage devices.

"Our approach can also be applied to improve the performance of anodes in LIBs and related energy storage technologies," said Hersam. "Ultimately, you need to optimize both the anode and cathode to achieve the best possible battery performance."

Credit: 
American Institute of Physics

Potential new therapies for Alzheimer's disease are revealed through network modeling of its complex molecular interactions

Researchers from Mount Sinai and the National Center for Geriatrics and Gerontology in Japan have identified new molecular mechanisms driving late-onset Alzheimer's Disease (LOAD), as well as a promising therapeutic candidate for treatment, according to a study in the journal Neuron. LOAD is the most prevalent form of dementia among people over age 65, a progressive and irreversible brain disorder affecting more than 5.5 million people in the U.S., and the sixth leading cause of death.

"Our study advances the understanding of LOAD pathogenesis by revealing not only its global structures, but detailed circuits of complex molecular interactions and regulations in key brain regions affected by LOAD," said the lead author Bin Zhang, PhD, Professor of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai and Director of the Center for Transformative Disease Modeling. "The network models we created serve as a blueprint for identifying novel therapeutic targets that respond directly to the urgent need for new ways to prevent, treat, and delay the onset of LOAD."

Previous genetic and genome-wide association studies (GWAS) have identified some genetic mutations associated with LOAD, but the causal variants of the disease have remained uncharacterized. To explore the molecular mechanisms driving the pathogenesis of LOAD, the Mount Sinai-led team performed an integrative network biology analysis of a whole genome and RNA sequencing dataset from multiple cortical brain regions of hundreds of donors, both healthy and with LOAD. This work revealed thousands of molecular changes and uncovered numerous neuron-specific gene subnetworks dysregulated in LOAD.

From that investigation, researchers predicted that ATP6V1A, a protein-coding gene, plays a major role in a critical signaling pathway in the brain, and that its deficit could be traced to LOAD. That linkage was evaluated using two methods: a CRISPR-based technique to manipulate ATP6V1A levels in donor-matched brain cells in vitro, and an RNAi-based knockdown in transgenic Drosophila models, in which genetic material is artificially introduced into fly models and specific genes are effectively silenced to study the effects. Indeed, the knockdown of ATP6V1A worsened LOAD-related neurodegeneration in both models.

Just as significantly, researchers predicted that a drug compound, NCH-51, could normalize the dysregulated genes in LOAD, including ATP6V1A, and demonstrated that NCH-51 dramatically ameliorated the neuronal dysfunction and neurodegenerative effects of the ATP6V1A deficit in both model systems. Specifically, the CRISPR-based experiment using human induced pluripotent stem cells (hiPSC) demonstrated that repression of ATP6V1A, particularly in combination with β-amyloid -- a key neuropathological hallmark of AD -- dramatically impacted neuronal function. "The human-based system we created proved to be a promising way to model the mechanisms underlying risk and progression in diseases like LOAD where living tissues are not available," observed Kristen Brennand, PhD, Associate Professor, Genetics and Genomic Sciences, Mount Sinai, and co-author of the study.

The Drosophila experiments were also revealing, demonstrating that ATP6V1A deficit exacerbated both β-amyloid-mediated toxicity and tau-mediated axon degeneration. "This finding suggests that ATP6V1A may have broad neuroprotective effects and serve as a potential therapeutic target for other tau-related neurodegenerative diseases," says Dr. Koichi M. Iijima, Head of the Department of Alzheimer's Disease Research at the National Center for Geriatrics and Gerontology in Japan, and senior author of the study.

As Dr. Zhang points out, the groundbreaking research by Mount Sinai and its Japanese partner could have significance beyond just LOAD. "We've created a framework for advanced modeling of complex human diseases in general," he explains, "and that could well lead to the discovery of molecular mechanisms and the identification of novel targets that are able to deliver transformative new therapeutics."

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine