Tech

A super new theory

Tsukuba, Japan - A scientist from the Division of Quantum Condensed Matter Physics at the University of Tsukuba has formulated a new theory of superconductivity. Based on the calculation of the "Berry connection," this model helps explain new experimental results better than the current theory. The work may allow future electrical grids to send energy without losses.

Superconductors are fascinating materials that may look unremarkable at ambient conditions, but when cooled to very low temperatures, allow electrical current to flow with zero resistance. There are several obvious applications of superconductivity, such as lossless energy transmission, but the physics underlying this process is still not clearly understood. The established way of thinking about the transition from normal to superconducting is called the Bardeen-Cooper-Schrieffer (BCS) theory. In this model, as long as thermal excitations are kept small enough, particles can form "Cooper pairs" which travel together and resist scattering. However, the BCS model does not adequately explain all types of superconductors, which limits our ability to create more robust superconducting materials that work at room temperature.

Now, a scientist from the University of Tsukuba has come up with a new model of superconductivity that better reveals its underlying physical principles. Instead of focusing on the pairing of charged particles, this new theory uses a mathematical tool called the "Berry connection," a quantity that describes the twisting of the space through which the electrons travel. "In the standard BCS theory, the origin of superconductivity is electron pairing. In this theory, the supercurrent is identified as the dissipationless flow of the paired electrons, while single electrons still experience resistance," says author Professor Hiroyasu Koizumi.

As an illustration, Josephson junctions are formed when two superconducting layers are separated by a thin barrier made of normal metal or an insulator. Although widely used in high-precision magnetic field detectors and quantum computers, Josephson junctions also do not fit neatly inside the BCS theory. "In the new theory, the role of the electron pairing is to stabilize the Berry connection, as opposed to being the cause of superconductivity by itself, and the supercurrent is the flow of single and paired electrons generated due to the twisting of the space where electrons travel caused by the Berry connection," Professor Koizumi says. Thus, this research may lead to advancements in quantum computing as well as energy conservation.

Credit: 
University of Tsukuba

Using mice to open the way to prevent blocked arteries

image: Peripheral neutrophils were collected from LDLR-/- mice or wild-type control mice after 4 weeks of a high-fat diet (HFD). Significant citrullinated histone H3 signals are seen only in LDLR-/- mice on the HFD.

Image: 
Department of Life Science and Bioethics, TMDU

Researchers from Tokyo Medical and Dental University (TMDU) identify pathways that link a high-fat diet to atherosclerosis in mice

Tokyo, Japan - It's long been known that a high-fat diet can lead to clogged arteries, but we have only recently begun to learn in detail how the process works. A new study in experimental mice could go a long way to finding treatments to keep arteries open and flowing.

In a study published this month in JACC: Basic to Translational Science, researchers from Tokyo Medical and Dental University (TMDU) have continued their research into how a high-fat diet causes atherosclerosis, the fatty buildup of plaques on the walls of blood vessels that can lead to heart attacks and other vascular diseases. Atherosclerosis is not as simple as fats getting stuck in your arteries--it's actually an inflammatory disease driven by the body's own immune response, particularly neutrophils, the white blood cells that attack infections and respond to injuries.

Normally, neutrophils act to regulate the immune response, so that inflammation can resolve the problem, allowing the body to return to its normal state. But when a stimulus is persistent, the immune response can shift strategies, becoming a long-term chronic condition.

"We applied this theory to what happens in atherosclerosis--in this case, the persistent stimulus is dyslipidemia (unhealthy levels of fats and cholesterol in the blood)," says lead author of the study Mizuko Osaka. "So detailing the role of neutrophils in acute inflammation can help us understand how it becomes chronic inflammation."

The researchers compared regular wild-type mice with mice specially bred to genetically lack low-density lipoprotein (LDL) receptors on their cell membranes. This is because mice normally carry most of their plasma cholesterol in high-density lipoprotein and overall have far lower cholesterol levels than humans; lacking these receptors means these lab mice metabolize LDL (the "bad" cholesterol) more like humans do. When fed a "humanized" high-fat diet, these mice developed high triglycerides and cholesterol, showed significantly increased neutrophil adhesion to their blood vessel walls, and experienced much more inflammation.

"On the basis of certain biomarkers in the LDLR-null mice, we suspected that neutrophil extracellular traps (NETs, which help trap and eliminate pathogens from the body) could be involved in activating neutrophils," explains Masayuki Yoshida, senior author of the study. "And indeed, we found enhanced citrullination of histone H3, a known marker of NETs, in these mice. Going further, we specifically identified plasma levels of CXCL1 (a peptide that acts to attract immune cells to the site of an injury or infection) to be significantly increased in the LDLR-null mice after 7 days and 28 days of the high-fat diet."

This suggests that CXCL1 is the link between the high-fat diet and citrullination, part of the process of making NETs--which, when overstimulated in the long term, leads to neutrophils forming plaques on the blood vessel walls. In fact, blocking the citrullination process reversed the increase in neutrophil adhesion caused by the high-fat diet.

Once we fully understand how a disease happens, it's easier to develop strategies to prevent it. In fact, although mice are metabolically different from people, it's possible that certain medications already being used to control cholesterol could be used to decrease neutrophil adhesion by affecting the pathways identified in this study.

Credit: 
Tokyo Medical and Dental University

The fracking boom has helped raise crime rates in rural American states

The shale boom (the mining of shale oil and gas) has contributed to an increase in crime rates in the US states where 50-60% of the population lives in rural areas: West Virginia, North Dakota, and Arkansas. The number of violent crimes in particular has increased. This is the conclusion reached by economists from Ural Federal University (UrFU, Russia) and Shippensburg University of Pennsylvania (USA). The research results are published in The Extractive Industries and Society.

The study utilizes panel data from various sources covering all US states from 1999 to 2015. The researchers compared outcomes before and after 2007, when the shale boom took off. The dependent variable was constructed from data on the Federal Bureau of Investigation's website, supplemented with data from the University of Kentucky Center for Poverty Research, the Annual State-Level Measures of Human Capital Attainment database, the Measures of Income Inequality database, and a state-level employment database.
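As a rough illustration of this kind of state-level panel analysis, the sketch below fits a simple two-way fixed-effects difference-in-differences regression of violent crime on shale-boom exposure. It is not the authors' specification; the input file and column names (such as "fracking_state" and "violent_crime_rate") are hypothetical placeholders for the sources listed above.

```python
# Illustrative sketch only, not the published model. Assumes a state-year panel
# (1999-2015) with a 0/1 indicator for fracking-boom states and a violent crime
# rate built from FBI data; all file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_panel_1999_2015.csv")  # one row per state-year (hypothetical file)

# Treatment indicator: fracking states in the years after the 2007 shale boom.
df["post_2007"] = (df["year"] >= 2007).astype(int)
df["boom"] = df["fracking_state"] * df["post_2007"]

# Two-way fixed effects absorb time-invariant state differences and nationwide
# shocks; the controls stand in for the poverty, education, inequality and
# employment databases named above.
model = smf.ols(
    "violent_crime_rate ~ boom + unemployment + poverty_rate + gini"
    " + C(state) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

print(model.params["boom"])  # estimated shale-boom effect on the violent crime rate
```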

"The analysis showed that the fracking boom states have lower unemployment rates on average," says Kazi Sohag, research co-author, assistant professor at UrFU. "We suggest the consequence of this was a decrease in the number of crimes against families and children, as well as a decrease in the number of vagrants. However, these states are poorer, with fewer educated people, a smaller share of the private sector, and a high public sector share. Over the years of the shale boom, burglaries have increased there, and the rate of violent crime has increased by about 36%. Because of this, each state had to spend $ 15.67 million additional victimization cost per year."

The rate of violent crime could have risen for several reasons, the researchers suggest. First, fracking creates low-skilled temporary jobs that mostly attract men who move to the area for a time to earn money. The resulting imbalance in the number of men and women can, on the one hand, provoke certain types of crimes against women and, on the other hand, encourage the growth of businesses associated with bars, prostitution, and drugs, the authors of the article note.

"The shale boom is also associated with increased income inequality: local royalty income is concentrated among a small segment of the local population," said Kazi Sohag. "Other residents do not benefit or are economically worse off. Inequality provides a rational incentive to commit a crime."

Third, residents of fracking states are disproportionately affected by disamenities such as pollution, noise, degraded water quality, and heavy traffic. This leads to tensions between local residents and temporary workers and, as a result, to an increase in violent crime. All of this places an additional burden on local authorities and law enforcement agencies.

To address the problem, the researchers recommend several measures: making the shale industry more technology-intensive and less labor-intensive, and strengthening the work of law enforcement agencies.

"Since local royalty income is concentrated among a small segment, the distribution of income and wealth should be improved. It will help improve the well-being of residents through optimal tax policies," said Kazi Sohag. "In addition, the damage that shale mining does to the environment should be reduced. This will reduce the number of negative factors affecting the lives of state residents."

The authors caution that it is premature to generalize the results of the study to other countries. Consideration should be given to the political, cultural, and other characteristics of each country.

"Our future studies can focus on more countries like Saudi Arabia, Russia, and Norway. It will help to generalize the findings considering the spillover or spatial dependency of crimes," said Kazi Sohag.

Credit: 
Ural Federal University

New guidance for mental health

image: Flinders University Professor of Psychology Tracey Wade

Image: 
Flinders University

In spite of many clinical options, people with mental health problems, including eating disorders, often do not access professional help within the crucial first 12 months - in part because of a lack of information in the community about accessing targeted services.

Anxiety and depression are normal reactions to situations such as pandemic lockdowns, but arming yourself with some useful strategies can alleviate this, says Flinders University Distinguished Professor of Psychology Tracey Wade.

For example, a randomised trial of 'unguided' low intensity cognitive behaviour therapy (CBT), led by Curtin University and international experts including Matthew Flinders Professor Wade, found that the approach decreased signs of anxiety and depression.

The study of 225 adults in Australia and the UK found that low intensity cognitive behaviour therapy is effective in reducing anxiety and depression during the COVID-19 pandemic.

The majority of participants (96%) rated the intervention as useful, most (83%) reported spending 30 minutes or less reading the guide, and 83% agreed the intervention was easy to read.

The evaluation of self-management of anxiety and depression - using an accessible online program of 'low intensity cognitive behaviour therapy' funded by the WA Government via Curtin University's Department of Psychology - confirmed its usefulness, particularly during the pressures created by the worldwide COVID-19 pandemic.

"There is an urgent need to disseminate low intensity psychological therapies to improve mental health in this challenging time," conclude researchers led by Curtin University Associate Professor Sarah Egan in the new paper in Behaviour Research and Therapy.

Meanwhile, eating disorder expert Professor Wade has helped to launch a new consumer guide on the National Eating Disorders Collaboration (NEDC) website.

The consumer checklist - https://nedc.com.au/research-and-resources/show/consumer-checklist - aims to help people navigate the system, including people aged 16-24 years who might delay or have trouble finding the 'right' kind of help.

"The checklist forms a basis for a useful consumer tool in their treatment journey," says Professor Wade, who says presentations for eating disorders have escalated over COVID-19 and the associated lockdowns.

"We also hope to monitor its uptake and impact on outcomes for consumers seeking treatment."

A study last year surveyed people with lived experience and clinicians about the checklist, seeking endorsement and feedback on the helpfulness of each item.

Seventeen people with lived experience and 11 clinicians gave feedback, with both groups rating the checklist as likely to help locate effective treatment earlier.

Credit: 
Flinders University

Turn off the blue light!

image: Researchers from the University of Tsukuba have found that exposure to specific types of light before sleep can have variable effects on energy metabolism during sleep. Specifically, participants who went to sleep after exposure to organic light-emitting diodes (OLEDs), which emit polychromatic white light containing less blue light than light-emitting diodes (LEDs), exhibited significantly decreased energy expenditure and core body temperature and increased fat oxidation, indicating fewer negative health consequences than after nighttime exposure to LEDs. Thus, OLEDs may be a worthwhile alternative to LED lighting, especially for exposure at night.

Image: 
University of Tsukuba

Tsukuba, Japan - Extended exposure to light during nighttime can have negative consequences for human health. But now, researchers from Japan have identified a new type of light with reduced consequences for physiological changes during sleep.

In a study published in June 2021 in Scientific Reports, researchers from the University of Tsukuba compared the effects of light-emitting diodes (LEDs), which have been widely adopted for their energy-saving properties, and organic light-emitting diodes (OLEDs) on physical processes that occur during sleep.

Polychromatic white LEDs emit a large amount of blue light, which has been linked with many negative health effects, including effects on metabolic health. In contrast, OLEDs emit polychromatic white light that contains less blue light. However, the effects of LED and OLED exposure at night on energy metabolism during sleep had not been compared, something the researchers at the University of Tsukuba aimed to address.

"Energy metabolism is an important physiological process that is altered by light exposure," says senior author of the study Professor Kumpei Tokuyama. "We hypothesized that compared with LEDs, OLED exposure would have a reduced effect on sleep architecture and energy metabolism, similar to that of dim light."

To test this hypothesis, the researchers exposed 10 male participants to LED, OLED, or dim light for 4 hours before they slept in a metabolic chamber. The researchers then measured energy expenditure, core body temperature, fat oxidation, and 6-sulfatoxymelatonin--which is a measure of melatonin levels--during sleep. The participants had not recently traveled or participated in shift work.

"The results confirmed part of our hypothesis," explains Professor Tokuyama. "Although no effect on sleep architecture was observed, energy expenditure and core body temperature during sleep were significantly decreased after OLED exposure. Furthermore, fat oxidation during sleep was significantly lower after exposure to LED compared with OLED."

In addition, fat oxidation during sleep was positively correlated with 6-sulfatoxymelatonin levels following exposure to OLED, suggesting that the effect of melatonin activity on energy metabolism varies depending on the type of light exposure.

"Thus, light exposure at night is related to fat oxidation and body temperature during sleep. Our findings suggest that specific types of light exposure may influence weight gain, along with other physiological changes," says Professor Tokuyama.

Many occupations and activities involve exposure to artificial light before sleep. New information about the effects of different kinds of light on physical processes may facilitate the selection of alternative light sources to mitigate the negative consequences of light exposure at night. Furthermore, these findings advance our knowledge regarding the role of light in energy metabolism during sleep.

Credit: 
University of Tsukuba

Seeing with radio waves

image: University of Tsukuba researchers achieved micrometer spatial resolution for radio-frequency imaging of nitrogen-vacancy centers in diamond by enhancing the signal with quantum spin-locking. This work may lead to more accurate material characterization, medical diagnostics, and quantum computers.

Image: 
University of Tsukuba

Tsukuba, Japan - Scientists from the Division of Physics at the University of Tsukuba used the quantum effect called "spin-locking" to significantly enhance the resolution when performing radio-frequency imaging of nitrogen-vacancy defects in diamond. This work may lead to faster and more accurate material analysis, as well as a path towards practical quantum computers.

Nitrogen-vacancy (NV) centers have long been studied for their potential use in quantum computers. An NV center is a type of defect in the diamond lattice in which two adjacent carbon atoms have been replaced with a nitrogen atom and a vacancy. This leaves an unpaired electron, which can be detected using radio-frequency waves, because its probability of emitting a photon depends on its spin state. However, the spatial resolution of radio wave detection using conventional radio-frequency techniques has remained less than optimal.

Now, researchers at the University of Tsukuba have pushed the resolution to its limit by employing a technique called "spin-locking." Microwave pulses are used to put the electron's spin in a quantum superposition of up and down simultaneously. Then, a driving electromagnetic field causes the direction of the spin to precess around, like a wobbling top. The end result is an electron spin that is shielded from random noise but strongly coupled to the detection equipment. "Spin-locking ensures high accuracy and sensitivity of the electromagnetic field imaging," first author Professor Shintaro Nomura explains. Due to the high density of NV centers in the diamond samples used, the collective signal they produced could be easily picked up with this method. This permitted the sensing of collections of NV centers at the micrometer scale. "The spatial resolution we obtained with RF imaging was much better than with similar existing methods," Professor Nomura continues, "and it was limited only by the resolution of the optical microscope we used."

The approach demonstrated in this project may be applied in a broad variety of areas--for example, the characterization of polar molecules, polymers, and proteins, as well as other materials. It might also be used in medical applications--for example, as a new way to perform magnetocardiography.

Credit: 
University of Tsukuba

RIXS demonstrates magnetic behaviour in nickelate superconductors

image: Part of the resonant inelastic x-ray scattering (RIXS) instrument at Diamond that was used to study the magnetic excitations in infinite-layer nickelates.

Image: 
Diamond Light Source

The discovery of the first high-temperature superconductor in 1986 brought with it the hope that superconductivity would one day revolutionise power transmission, electronic devices and other technologies. Materials that show superconductivity (zero electrical resistance) generally do so at an extremely low temperature. For their use to become widespread and world-changing, we need to develop a material that is superconducting close to room temperature. Research showed that the first high-temperature superconductor - a copper oxide compound - was part of a family of materials known as cuprates. Scientists have found cuprates that are superconducting at temperatures as high as 133.5 K. However, we don't fully understand how superconductivity arises in the high temperature superconductors. In 2019, the discovery of superconductivity in a nickel oxide compound opened up new avenues of research. Scientists can now compare this nickelate material to cuprates to discover the similarities and differences. In a study recently published in Science, researchers at Diamond, SLAC, Stanford University, and Leiden University used Resonant Inelastic X-ray Scattering (RIXS) to study the magnetic properties of the nickelate superconductor. Their results shed light on the underlying physics that gives rise to superconductivity.

Compare and Contrast

The discovery of superconductivity in a second family of materials offers the opportunity to assess their similarities and differences and could bring us closer to understanding how high-temperature superconductivity arises.

For superconductivity to occur, electrons that normally repel each other (due to their identical charge) have to become attractive. Electrons pair up and, at a low enough temperature (the critical temperature for the material, Tc), condense to form a superfluid that flows without friction. Zero electrical resistance is the result. Copper oxides are antiferromagnets, in which electron spins align in a regular pattern, with neighboring spins pointing in opposite directions to form long-range order. One theory for why resistance vanishes in cuprates is that residual spin fluctuations may cause the moving electrons to pair up.

In 2019, researchers from Stanford and SLAC overcame considerable challenges to produce the nickelate Nd0.8Sr0.2NiO2 and found that it is superconducting at around 9-15 K. The nickelate material has a crystal structure similar to that of the cuprates, but would its electrons behave in the same way?

These superconducting materials are prepared as thin nickel or copper oxide sheets sandwiched between layers of other elements, such as rare-earth ions. The materials are "doped" to adjust the density of free-flowing electrons. Before being doped, cuprate materials are insulators whose electrons do not move around. After doping, the electrons move freely within their copper oxide layers, rarely travelling into the rare-earth layers.

In work published in Nature Materials in 2020, studies carried out on Diamond's I21 beamline showed that the electron behaviour in nickelates is very different. An undoped nickelate is a metallic material with free-flowing electrons that move between layers, creating a 3D metallic state.

A magnetic mystery

Another interesting facet of the superconductor conundrum is that - so far - the nickelates do not show the same kind of magnetic order seen in cuprates. To investigate further, it was back to the RIXS beamline.

Principal beamline scientist Ke-Jin Zhou explains: "RIXS is the only technique powerful enough to analyse the tiny magnetic signals coming from the nickelate sample. It's a minute sample, a tiny volume of the material, and I21 has the most advanced RIXS instrument in the world. We achieved spectacular results. I was amazed that the entire spectrum from the nickelate is so similar to that of a cuprate in terms of magnetic fluctuation. This is the first experimental confirmation of magnetic behaviour in the nickelate superconductor."

These results shed some light on the mechanisms underlying superconductivity. They bring us one step closer to the day we can develop a room-temperature superconductor and explore the many benefits of these remarkable materials.

Credit: 
Diamond Light Source

Virtual learning may help NICU nurses recognize baby pain

Babies younger than four weeks old, called neonates, were once thought not to perceive pain due to not-yet-fully-developed sensory systems, but modern research says otherwise, according to researchers from Hiroshima University in Japan.

Not only do babies experience pain, but the various levels can be standardized to help nurses recognize and respond to the babies' cues -- if the nurses have the opportunity to learn the scoring tools and skills needed to react appropriately. With tight schedules and limited in-person courses available, the researchers theorized, virtual e-learning may be able to provide a path forward for nurses to independently pursue training in this area.

To test this hypothesis, researchers conducted a pilot study of 115 nurses with varying levels of formal training and years of experience in seven hospitals across Japan. They published their results on May 27 in Advances in Neonatal Care.

"Despite a growing body of knowledge and guidelines being published in many countries about the preventions and management of pain in neonates hospitalized in the NICU, neonatal pain remains unrecognized, undertreated, and generally challenging," said paper author Mio Ozawa, associate professor in the Graduate School of Biomedical and Health Sciences at Hiroshima University.

The researchers developed a comprehensive multimedia virtual program on neonatal pain management, based on selected standardized pain scales, for nursing staff to independently learn how to employ measurement tools. The program, called e-Pain Management of Neonates, is the first of its kind in Japan.

"The aim of the study was to verify the feasibility of the program and whether e-learning actually improves nurses' knowledge and scoring skills," said paper author Mio Ozawa, associate professor in the Graduate School of Biomedical and Health Sciences at Hiroshima University. "The results of this study suggest that nurses could obtain knowledge and skills about the measurement of neonatal pain through e-learning."

The full cohort took a pre-test at the start of the study, before embarking on a self-paced, four-week e-learning program dedicated to learning standardized pain scales to measure discomfort in babies. However, only 52 nurses completed the post-test after four weeks. For those 52, scores increased across a range of years of experience and formal education.

Ozawa noted that the sample size is small but also said that the improved test scores indicated the potential for e-learning.

"Future research will need to go beyond the individual level to determine which benefits are produced in the management of neonatal pain in hospitals where nurses learn neonatal pain management through e-learning," Ozawa said. "This study demonstrates that virtually delivered neonatal pain management program can be useful for nurses' attainment of knowledge and skills for managing neonatal pain, including an appropriate use of selected scoring tools."

Credit: 
Hiroshima University

Elevated warming, ozone have detrimental effects on plant roots, promote soil carbon loss

Two factors that play a key role in climate change - increased climate warming and elevated ozone levels - appear to have detrimental effects on soybean plant roots, their relationship with symbiotic microorganisms in the soil and the ways the plants sequester carbon.

The results, published in the July 9 edition of Science Advances, show few changes to the plant shoots aboveground but some distressing results underground, including a reduced ability to hold carbon, which instead gets released into the atmosphere as a greenhouse gas.

North Carolina State University researchers examined the interplay of warming and increased ozone levels with certain important underground organisms - arbuscular mycorrhizal fungi (AMF) - that promote chemical interactions that hold carbon in the ground by preventing the decomposition of soil organic matter, thereby halting the escape of carbon from the decomposing material.

"The ability to sequester carbon is very important to soil productivity - in addition to the detrimental effects of increasing greenhouse gases when this carbon escapes," said Shuijin Hu, professor of plant pathology at NC State and corresponding author of the paper.

Present in the roots of about 80% of plants that grow on land, AMF have a win-win relationship with plants. AMF take carbon from plants and provide nitrogen and other useful soil nutrients that plants need in order to grow and develop.

In the study, researchers set up plots of soybeans with increased air temperatures of about 3 degrees Celsius, plots with higher levels of ozone, plots with higher levels of both warming and ozone, and control plots with no modifications. The resulting experiments showed that warming and increased ozone levels make soybean roots thinner as they save resources to get the nutrients they need.

Soybean cultivars are often sensitive to ozone, Hu said. Ozone levels have been somewhat stable or even declining in some parts of the United States over the past decade but have risen dramatically in areas of rapid industrialization, like India and China, for example.

"Ozone and warming have been shown to be very stressful to a lot of crops - not just soybeans - and a lot of grasses and tree species," Hu said. "Ozone and warming make the plants weak. Plants try to maximize nutrient uptake, so their roots become thinner and longer as they need to exploit the sufficient volume of soil for resources. This weakness results in a reduction of AMF and faster root and fungal hyphal turnovers, which stimulates decomposition and makes carbon sequestration more difficult. These cascading events may have profound effects underground, although the plant shoots appear normal in some cases."

Hu said he was surprised that the plant shoots weren't greatly affected by the stresses of warming and ozone; the biomass of plant leaves in both control and experimental plots was about the same.

Perhaps even more surprisingly, Hu said that more warming and ozone changed the type of AMF that colonize soybean plants.

The study showed that levels of an AMF species called Glomus decreased with more warming and ozone, while a species called Paraglomus increased.

"Glomus protects organic carbon from microbial decomposition while Paraglomus is more efficient at absorbing nutrients," Hu said. "We didn't expect these communities to shift in this way."

Hu plans to continue to study the systems surrounding carbon sequestration in soil as well as other greenhouse gas emissions from soil, like nitrous oxide, or N2O.

Credit: 
North Carolina State University

Evolution in real time

How does unicellular life transition to multicellular life? The research team of Professor Lutz Becks at the Limnological Institute of the University of Konstanz has taken a major step forward in explaining this very complex process. In collaboration with a colleague from the Alfred Wegener Institute (AWI), they were able to demonstrate that the unicellular green alga Chlamydomonas reinhardtii, over only 500 generations, develops mutations that provide the first step towards multicellular life. This experimentally confirmed a theory on the origin of multicellular life, which holds that the evolution of cell groups and the subsequent steps towards multicellularity can only take place when cell groups are both better at reproduction and more likely to survive than single cells. The findings were published in Nature Communications on 9 July 2021.

Read the full article including two video clips in the university's online magazine campus.kn:
https://www.campus.uni-konstanz.de/en/science/evolution-in-real-time

Credit: 
University of Konstanz

Efficient genetic engineering platform established in methylotrophic yeast

image: Establishing an efficient genetic engineering platform for metabolic engineering of the methylotrophic yeast P. pastoris

Image: 
DICP

Pichia pastoris (syn. Komagataella phaffii), a model methylotrophic yeast, can easily achieve high-density fermentation and is therefore considered a promising chassis cell for efficient methanol biotransformation. However, inefficient gene editing and a lack of synthetic biology tools hinder its metabolic engineering toward industrial application.

Recently, a research group led by Prof. ZHOU Yongjin from the Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences established an efficient genetic engineering platform in Pichia pastoris.

The study was published in Nucleic Acids Research on July 1.

The researchers developed novel genetic tools for precise genome editing in Pichia pastoris by enhancing homologous recombination (HR) rates and engineering the multiple intrusion-induced rearrangement (MIR) processes. The key gene RAD52, which plays a crucial role in HR repair in Pichia pastoris, was overexpressed, improving the efficiency of single-gene editing to 90%.

Furthermore, they increased the efficiency of multi-fragment recombination at one site by 13.5 times, and identified and characterized 46 neutral sites and 18 promoters for genome integration and gene expression.

Finally, they developed a two-factorial regulation system for regulating fatty alcohol biosynthesis in Pichia pastoris from different carbon sources.

"This advanced gene editing systems can theoretically realize stable loading of more than 100 exogenous genes and precise regulating of gene expression in Pichia pastoris, which will provide convenience for the synthetic biology research of Pichia pastoris. It also provides insights for metabolic engineering of other unconventional yeast," said Prof. ZHOU.

Credit: 
Dalian Institute of Chemical Physics, Chinese Academy of Sciences

Dying cells protect their neighbors to maintain tissue integrity

image: Artistic rendering of dying cells protecting their neighbors to maintain tissue integrity. Holes in epithelium created by uncoordinated cell death are shown in purple.

Image: 
© Institut Pasteur / Léo Valon and Romain Levayer

To enable tissue renewal, human tissues constantly eliminate millions of cells, without jeopardizing tissue integrity, form and connectivity. The mechanisms involved in maintaining this integrity remain unknown. Scientists from the Institut Pasteur and the CNRS today revealed a new process which allows eliminated cells to temporarily protect their neighbors from cell death, thereby maintaining tissue integrity. This protective mechanism is vital, and if disrupted can lead to a temporary loss of connectivity. The scientists observed that when the mechanism is deactivated, the simultaneous elimination of several neighboring cells compromises tissue integrity. This lack of integrity could be responsible for chronic inflammation. The results of the research were published in the journal Developmental Cell on June 2, 2021.

Human epithelia are tissues found in several parts of the body (such as the epidermis and internal mucosa). They are composed of layers of contiguous cells that serve as a physical and chemical barrier. This role is constantly being put to the test by both the outside environment and their own renewal. Tissue renewal involves the formation of new cells by cell division and the elimination of dead cells. The mechanisms that regulate the ability of epithelia to maintain their integrity in contexts involving large numbers of eliminated cells remain poorly understood, despite the fact that this situation occurs regularly during embryogenesis or the maintenance of adult tissues. For example, more than ten billion cells can be eliminated every day in an adult intestine. How are these eliminations orchestrated to maintain tissue integrity and connectivity?

Scientists from the Institut Pasteur and the CNRS set out to identify the mechanisms involved in epithelial integrity and the conditions that can affect epithelial connectivity by using Drosophila (or vinegar flies), an organism studied in the laboratory with a similar epithelial architecture to humans.

Using protein-sensitive fluorescent markers, the research team revealed that when a cell dies, the EGFR-ERK pathway - a cell activation signaling pathway known for its involvement in the regulation of cell survival - is temporarily activated in the neighboring cells. The scientists observed that the activation of the EGFR-ERK pathway protected neighboring cells from cell death for approximately one hour, thereby preventing the simultaneous elimination of a group of cells. "We already knew that this pathway plays a key role in regulating cell survival in epithelial tissue, but we were surprised to observe such protective dynamics between cells," comments Romain Levayer, Head of the Cell Death and Epithelial Homeostasis Unit at the Institut Pasteur and last author of the study.

The scientists' research also shows that inhibiting this protective mechanism has a drastic effect on epithelial tissue: cell elimination becomes random and neighboring cells can be eliminated simultaneously, leading to repeated losses of connectivity. The elimination of groups of neighboring cells is never observed in epithelial tissue in normal conditions, when the EGFR-ERK pathway is not deliberately inhibited, even if a large number of cells are eliminated.

By using a new optogenetic tool that can control cell death in time and space and bypass the protective mechanism, the scientists confirmed that epithelial integrity was compromised when neighboring cells were eliminated simultaneously. "Surprisingly, epithelial tissue is highly sensitive to the spatial distribution of eliminated cells. Although it can withstand the elimination of a large number of cells, epithelial integrity is affected if just three neighboring cells are eliminated simultaneously," explains Léo Valon, a scientist in the Cell Death and Epithelial Homeostasis Unit at the Institut Pasteur and first author of the study.

The scientists' observations confirm that tissues need to develop mechanisms preventing the elimination of neighboring groups of cells. "These observations are important as they illustrate the incredible self-organizing ability of biological tissues, a property that enables them to withstand stressful conditions. So there is no need for a conductor to orchestrate where and when the cells should die; everything is based on highly local communications between neighboring cells," adds Romain Levayer.

This process seems to have been conserved during evolution. The same protective mechanism based on local EGFR-ERK activation was discovered independently in human cell lines by the research group led by Olivier Pertz at the University of Bern in Switzerland (the results are published in the same journal). Together, the two studies suggest that the protective mechanism has been conserved between species separated by hundreds of millions of years, indicating that it is a relatively universal mechanism.

Future research will reveal whether disruption to this cell death coordination mechanism and repeated loss of connectivity in epithelial tissue could be one of the roots of chronic inflammation, a phenomenon responsible for various diseases that are currently among the leading causes of death worldwide.

Credit: 
Institut Pasteur

Normal brain growth curves developed to help assess childhood brain disorders, infections and injuries

image: Using brain scans from children across the United States, researchers developed normalized growth charts for how healthy brains should grow during the first 18 years of life.

Image: 
humonia/Image compilation: College of Engineering

In the United States, nearly every pediatric doctor's visit begins with three measurements: weight, height and head circumference. Compared to average growth charts of children across the country, established in the 1970s, a child's numbers can confirm typical development or provide a diagnostic baseline to assess deviations from the curve. Yet, the brain, of vital importance to the child's development, is merely hinted at in these measurements.

Head circumference may indicate a head growth issue, which could be further investigated to determine if there is an issue with brain size or extra fluid. But now, in the age of noninvasive brain scanning such as magnetic resonance imaging (MRI), could researchers develop normalized growth curve charts for the brain?

That was the question Steven Schiff, Brush Chair Professor of Engineering at Penn State, and his multi-institution research team set out to answer. They published their results today (July 9) in the Journal of Neurosurgery: Pediatrics.

"Brain size research also has a very unfortunate history, as it was often used to attempt to scientifically prove one gender or race or culture of people as better than another," said Schiff, also a professor of engineering science and mechanics in the College of Engineering and of neurosurgery in the College of Medicine. "In this paper, we discuss the research going back about 150 years and then look at what the data of a contemporary cohort really tells us."

The researchers analyzed 1,067 brain scans of 505 healthy children, ages 13 days to 18 years old, from the National Institutes of Health (NIH) Pediatric MRI Repository. To ensure a representative sample population across sex, race, socioeconomic status and geographic location, the MRI scans were taken sequentially over several years at hospitals and medical schools in California, Massachusetts, Missouri, Ohio, Pennsylvania and Texas. To ensure calibrated results, one person was established as a control and scanned at each site.

"The study of brain size and growth has a long and contentious history -- even in the era of MRI, studies defining normal brain volume growth patterns often include small sample sizes, limited algorithm technology, incomplete coverage of the pediatric age range and other issues," said first author Mallory R. Peterson, a Penn State student who is pursuing both a doctorate in engineering science and mechanics in the College of Engineering and a medical degree in the College of Medicine. She earned her bachelor of science degree in biomedical engineering from Penn State in 2016. "These studies have not addressed the relationship between brain growth and cerebrospinal fluid in depth, either. In this paper, we resolve both of these issues."

The first startling finding, according to Schiff and Peterson, was the difference in brain volume between male and female children. Even after adjusting for body size, males exhibited larger overall brain volume -- but specific brain structures did not differ in size between sexes, nor did cognitive ability.

"Clearly, sex-based differences do not account for intelligence -- we have known that for a long time, and this does not suggest differently," Schiff said. "The important thing here is that there is a difference in how the brains of male and female children grow. When you're diagnosing or treating a child, we need to know when a child's brain isn't growing normally."

The second finding was one of striking similarity rather than differences.

"Regardless of the sex or the size of the child, we unexpectedly found that the ratio between the size of the child's brain and the volume of fluid within the head -- cerebrospinal fluid -- was universal," Schiff said. "This fluid floats and protects the brain, serving a variety of functions as it flows through the brain. Although we have not recognized this tight normal ratio before, this relationship of fluid to brain is exactly what we try to regulate when we treat children for excess fluid in conditions of hydrocephalus."

The researchers plan to continue studying the ratio and its potential functions, as well as underlying mechanisms, in children and across the life span.

"The apparent universal nature of the age-dependent brain-cerebrospinal fluid ratio, regardless of sex or body size, suggests that the role of this ratio offers novel ways to characterize conditions affecting the childhood brain," Peterson said.

The researchers also settled a longstanding controversy in terms of the temporal lobe, according to Schiff. After two years of age, the left side of this brain structure -- where language function is typically localized -- was clearly larger than the right side throughout childhood. A portion of the temporal lobe called the hippocampus, which can be a cause of epilepsy, was larger on the right than the left as it grew during childhood.

"These normal growth curves for these critical structures often involved in epilepsy will help us determine when these structures are damaged and smaller than normal for age," said Schiff.

This approach to normal brain growth during childhood could help researchers distinguish normal from excessive volume loss later in life, according to Schiff.

"Brain volume peaks at puberty," Schiff said. "It then decreases as we age, and it decreases more rapidly in people with certain types of dementia. If we can better understand both brain growth and the ratio of brain to fluid at every age, we can not only improve how we diagnose clinical conditions, but also how we treat them."

Credit: 
Penn State

Tiny but mighty precipitates toughen a structural alloy

image: Mechanical properties, such as strength and ductility, can be tailored by adding nanoprecipitates, represented above by blue orbs, to a phase-transformable alloy and tuning their sizes and spacings.

Image: 
Michelle Lehman/ORNL, U.S. Dept. of Energy

Scientists at the Department of Energy's Oak Ridge National Laboratory and the University of Tennessee, Knoxville, have found a way to simultaneously increase the strength and ductility of an alloy by introducing tiny precipitates into its matrix and tuning their size and spacing. The precipitates are solids that separate from the metal mixture as the alloy cools. The results, published in the journal Nature, will open new avenues for advancing structural materials.

Ductility is a measure of a material's ability to undergo permanent deformation without breaking. It determines, among other things, how much a material can elongate before fracturing and whether that fracturing will be graceful or catastrophic. The higher the strength and ductility, the tougher the material.

"A holy grail of structural materials has long been, how do you simultaneously enhance strength and ductility?" said Easo George, principal investigator of the study and Governor's Chair for Advanced Alloy Theory and Development at ORNL and UT. "Defeating the strength-ductility trade-off will enable a new generation of lightweight, strong, damage-tolerant materials."

If structural materials could become stronger and more ductile, components of cars, planes, power plants, buildings and bridges could be built using less material. Lighter-weight vehicles would be more energy-efficient to make and operate, and tougher infrastructure would be more resilient.

Co-principal investigator Ying Yang of ORNL conceived and led the Nature study. Guided by computational thermodynamics simulations, she designed and custom-made model alloys with the special ability to undergo a phase transformation from a face-centered cubic, or FCC, to a body-centered cubic, or BCC, crystal structure, driven by changes in either temperature or stress.

"We put nanoprecipitates into a transformable matrix and carefully controlled their attributes, which in turn controlled when and how the matrix transformed," Yang said. "In this material, we intentionally induced the matrix to have the capability to undergo a phase transformation."

The alloy contains four major elements -- iron, nickel, aluminum and titanium -- that form the matrix and precipitates, and three minor elements -- carbon, zirconium and boron -- that limit the size of grains, individual metallic crystals.

The researchers carefully kept the composition of the matrix and the total amount of nanoprecipitates the same in different samples. However, they varied precipitate sizes and spacings by adjusting the processing temperature and time. For comparison, a reference alloy without precipitates but having the same composition as the matrix of the precipitate-containing alloy was also prepared and tested.

"The strength of a material usually depends on how close the precipitates are to each other," George said. "When you make them a few nanometers [billionths of a meter] in size, they can be very closely spaced. The more closely spaced they are, the stronger the material gets."

While nanoprecipitates in conventional alloys can make them super strong, they also make the alloys very brittle. The team's alloy avoids this brittleness because the precipitates perform a second useful function: by spatially constraining the matrix, they prevent it from transforming during a thermal quench, a quick immersion in water that cools the alloy to room temperature. Consequently, the matrix remains in a metastable FCC state. When the alloy is then stretched ("strained"), it progressively transforms from metastable FCC to stable BCC. This phase transformation during straining increases strength while maintaining adequate ductility. In contrast, the alloy without precipitates transforms fully to stable FCC during the thermal quench, which precludes further transformation during straining. As a result, it is both weaker and more brittle than the alloy with precipitates. Together, the complementary mechanisms of conventional precipitation strengthening and deformation-induced transformation increased strength by 20%-90% and elongation by 300%.

"Adding precipitates to block dislocations and make materials ultra-strong is well known," George said. "What is new here is that adjusting the spacing of these precipitates also affects phase transformation propensity, which allows multiple deformation mechanisms to be activated as needed to enhance ductility."

The study also revealed a surprising reversal of the normal strengthening effect of nanoprecipitates: an alloy with coarse, widely spaced precipitates is stronger than the same alloy with fine, closely spaced precipitates. This reversal happens when the nanoprecipitates become so tiny and tightly packed that the phase transformation is essentially shut down during straining of the material, not unlike the transformation suppressed during the thermal quench.

This study relied on complementary techniques performed at DOE Office of Science user facilities at ORNL to characterize the nanoprecipitates and deformation mechanisms. At the Center for Nanophase Materials Sciences, atom probe tomography showed the size, distribution and chemical composition of precipitates, whereas transmission electron microscopy exposed atomistic details of local regions. At the High Flux Isotope Reactor, small-angle neutron scattering quantified the distribution of fine precipitates. And at the Spallation Neutron Source, neutron diffraction probed the phase transformation after different levels of strain.

"This research introduces a new family of structural alloys," Yang said. "Precipitate characteristics and alloy chemistry can be precisely tailored to activate deformation mechanisms exactly when needed to thwart the strength-ductility trade-off."

Next the team will investigate additional factors and deformation mechanisms to identify combinations that could further improve mechanical properties.

It turns out, there is a lot of room for improvement. "Today's structural materials realize but a small fraction -- perhaps only 10% -- of their theoretically capable strengths," George said. "Imagine the weight savings that would be possible in a car or an airplane -- and the consequent energy savings -- if this strength could be doubled or tripled while maintaining adequate ductility."

Credit: 
DOE/Oak Ridge National Laboratory

Collective battery storage beneficial for decarbonized world

Batteries are potentially a game-changing technology as we decarbonize our economy, and their benefits are even greater when shared across communities, a University of Otago-led study has found.

Co-author Associate Professor Michael Jack, Director of the Energy Programme in the Department of Physics, says falling costs are driving the rapid deployment of batteries for household use, mainly for storing solar and wind power for later use, but they could have a variety of uses in a future electricity grid.

"For example, they could be used to feed energy back into the grid when there is a shortfall in renewable supply. Or they could allow a house to reduce its demand on the grid during times of constraint, thus reducing the need for expensive new lines.

"As we move towards more renewable energy, and increase our use of electric vehicles, these services would be beneficial to a local community and the national grid, not just the individual house with the battery," he says.

The study, published in the journal Energy & Buildings, focused on finding the capacity a battery would need in order to keep peak demand below a certain value, both for individual houses and for a group of houses.

The researchers considered both load smoothing around the average, and peak shaving, where the battery ensures grid power demand does not exceed a set threshold.

"Our key result is that the size of the battery required for this purpose is much smaller - up to 90 per cent smaller - if the houses are treated collectively rather than individually. For instance, if peak shaving occurred for demand above 3 kW per house, deploying batteries individually for 20 houses would require 120 kWh of storage, whereas deploying them collectively would only require 7 kWh. Sharing batteries or having one battery per 20 houses will be a much cheaper approach to providing these services.

"Another important finding was that as peaks are mainly in winter, the battery would still be largely available for storing energy from solar cells in summer, so this would be an additional service and not competing with the main use of the battery," Associate Professor Jack says.

While electricity markets are not currently set up to harness this potential, the situation is rapidly changing.

"There is currently a trial lead by Aurora Energy and SolarZero to use batteries in the way we have described in our paper to solve issues with constrained lines in upper Clutha. Once proven, this model has the potential to become much more widespread," he says.

In the future, many households may have batteries and be using these, or batteries within their electric vehicles, to provide services to the grid. These batteries and other appliances in homes and businesses will have smart controllers that enable them to reduce demand or feed electricity back into the grid to accommodate the fluctuations of variable renewable supply and minimize the need for grid infrastructure. People responding in this way would be paid for their services to the wider grid.

"This could enable a much lower cost, collective, route to decarbonizing New Zealand's energy system."

Credit: 
University of Otago