Tech

Shedding light on dark traps

In the last decade, perovskites - a diverse range of materials with a specific crystal structure - have emerged as promising alternatives to silicon solar cells, as they are cheaper and greener to manufacture, while achieving a comparable level of efficiency.

However, perovskites still show significant performance losses and instabilities, particularly in the specific materials that promise the highest ultimate efficiency. Most research to date has focused on ways to remove these losses, but their actual physical causes remain unknown.

Now, in a paper published today in Nature, researchers from Dr. Sam Stranks's group at Cambridge University's Department of Chemical Engineering and Biotechnology and Cavendish Laboratory, and Professor Keshav Dani's Femtosecond Spectroscopy Unit at the Okinawa Institute of Science and Technology (OIST) in Japan, identify the source of the problem. Their discovery could streamline efforts to increase the efficiency of perovskites, bringing them closer to mass-market production.

Perovskite materials are much more tolerant of defects in their structure than silicon solar cells, and previous research carried out by Stranks's group found that to a certain extent, some heterogeneity in their composition actually improves their performance as solar cells and light-emitters.

However, the current limitation of perovskite materials is the presence of a "deep trap" caused by a certain type of defect, or minor blemish, in the material. These are areas in the material where energised charge carriers can get stuck and recombine, losing their energy to heat, rather than converting it into useful electricity or light. This undesirable recombination process can have a significant impact on the efficiency and stability of solar panels and LEDs.

Until now, very little was known about the cause of these traps, in part because they appear to behave rather differently to traps in traditional solar cell materials.

In 2015, Dr Stranks and colleagues published a Science paper looking at the luminescence of perovskites, which reveals how good they are at absorbing or emitting light. "We found that the material was very heterogeneous; you had quite large regions that were bright and luminescent, and other regions that were really dark," says Stranks. "These dark regions correspond to power losses in solar cells or LEDs. But what was causing the power loss was always a mystery, especially because perovskites are otherwise so defect-tolerant."

Due to limitations of standard imaging techniques, the group couldn't tell if the darker areas were caused by one, large trap site, or many smaller traps, making it difficult to establish why they were forming only in certain regions.

Later, in 2017, Professor Keshav Dani's group at OIST published a paper in Nature Nanotechnology, in which they made a movie of how electrons behave in semiconductors after absorbing light. "You can learn a lot from being able to see how charges move in a material or device after shining light. For example, you can see where they might be getting trapped," says Dani. "However, these charges are hard to visualize as they move very fast - on the timescale of a millionth of a billionth of a second; and over very short distances - on the length scale of a billionth of a metre."

On hearing of Dani's work, Dr Stranks reached out to see if they could work together to address the problem of visualising the dark regions in perovskites.

The team at OIST used a technique called photoemission electron microscopy (PEEM) for the first time on perovskites: they probed the material with ultraviolet light and built up an image from the electrons emitted at its surface.

When they looked at the material, they found that the dark regions contained traps, around 10-100 nanometers in length, which were clusters of smaller atomic-sized trap sites. These trap clusters were spread unevenly throughout the perovskite material, explaining the heterogeneous luminescence seen in Stranks's earlier research.

Intriguingly, when the researchers overlaid images of the trap sites onto images that showed the crystal grains of the perovskite material, they found that the trap clusters only formed at specific places, at the boundaries between certain grains.

To understand why this only occurred at certain grain boundaries, the groups worked together with Professor Paul Midgley's team from Cambridge University's Department of Materials Science and Metallurgy using a technique called scanning electron diffraction to create detailed images of the perovskite crystal structure. The project team made use of the electron microscopy setup at the ePSIC facility at the Diamond Light Source Synchrotron, which has specialized equipment for imaging beam-sensitive materials, like perovskites.

"Because these materials are very beam-sensitive, typical techniques that you would use to probe local crystal structure on these length scales will quite quickly change the material as you're looking at it, which can make interpreting the data very difficult," explains Tiarnan Doherty, a PhD student in Stranks's group and co-lead author of the study. "Instead, we were able to use very low exposure doses and therefore prevent damage.

"From the work at OIST, we knew where the trap clusters were located, and at ePSIC, we scanned around those same areas to see the local structure. We were then able to quickly pinpoint unexpected variations in the crystal structure around the trap clusters."

The group discovered that the trap clusters only formed at junctions where an area of the material with slightly distorted structure met an area with pristine structure.

"In perovskites, we have these regular mosaic grains of material and most of the grains are nice and pristine - the structure we would expect," says Stranks. "But every now and again, you get a grain that's slightly distorted and the chemistry of that grain is inhomogeneous. What was really interesting and which initially confused us, was that it's not the distorted grain that's the trap but where that grain meets a pristine grain; it's at that junction that the traps cluster."

With this understanding of the nature of the traps, the team at OIST also used the custom-built PEEM instrumentation to visualize the dynamics of the charge carrier trapping process happening in the perovskite material. "This was possible as one of the unique features of our PEEM setup is that it can image ultrafast processes - as short as femtoseconds," explains Andrew Winchester, a PhD student in Prof. Dani's Unit, and co-lead author of this study. "We found that the trapping process was dominated by charge carriers diffusing to the trap clusters."

These discoveries represent a major breakthrough in the quest to bring perovskites to the solar energy market.

"We still don't know exactly why the traps are clustering there, but we now know that they do form there, and seemingly only there," says Stranks. "That's exciting because it means we now know what to target to bring up the performances of perovskites. We need to target those inhomogeneous phases or get rid of these junctions in some way."

"The fact that charge carriers must first diffuse to the traps could also suggest other strategies to improve these devices," says Dani. "Maybe we could alter or control the arrangement of the trap clusters, without necessarily changing their average number, such that charge carriers are less likely to reach these defect sites."

The teams' research focused on one particular perovskite structure. The scientists will now be investigating whether the cause of these trapping clusters is universal across other perovskite materials.

"Most of the progress in device performance has been trial and error and so far, this has been quite an inefficient process," says Stranks. "To date, it really hasn't been driven by knowing a specific cause and systematically targeting that. This is one of the first breakthroughs that will help us to use the fundamental science to engineer more efficient devices."

Credit: 
University of Cambridge

Face up to eating disorders, and seek help

A new study has found young people are leaving it 'too late' to seek help for eating disorders, citing fear of losing control over their eating or weight, denial, and failure to perceive the severity of the illness as reasons not to get professional advice.

The recent online survey of almost 300 Australian young adults aged 18-25 years found a majority had eating, weight or body shape concerns, and even those with anorexia or bulimia reportedly found reasons to delay getting treatment or expert interventions.

The first author of the study, Kathina Ali, Research Associate in Psychology at Flinders University, explains that concern for others and the belief that they should solve their own problems were the two most common barriers to seeking help for eating concerns.

"Not wanting others to worry about their problems was the highest endorsed barrier - it reflects the wish for autonomy and also the fear of being a burden to others in this group of young adults."

Feeling embarrassed about their problems or fearing that other people do not believe eating disorders are real illnesses even prevented young adults experiencing symptoms of anorexia nervosa or bulimia nervosa from seeking help, says fellow psychology researcher Dr Dan Fassnacht.

"Concerningly, only a minority of people with eating disorder symptoms had sought professional help and few believed they needed help despite the problems they were experiencing," says Dr Fassnacht, Flinders University Psychology Lecturer and co-author of a new paper just published in the International Journal of Eating Disorders (Wiley).

In the research article, entitled 'What prevents young adults from seeking help? Barriers toward help-seeking for eating disorder symptomatology', the Australian and German researchers recommended clinicians (counsellors, health workers and others) and the public be made aware of these barriers.

Young adults should have access to more information and education about the severity and impact of eating disorders - and how symptoms can worsen without intervention or treatment - along with guidance on the importance of seeking help and on self-management strategies.

Helpful and free evidence-based online resources are available at websites such as Australia's Butterfly Foundation and the National Eating Disorders Collaboration.

Credit: 
Flinders University

Mouse study shows 'chaperone protein' protects against autoimmune diseases

image: Fluorescent radiography images of two wild type (WT, or normal) mice on top and two knockout (KO) mice on bottom. The KO mice are bred without a gene that produces a chaperone protein for protecting against immune activity on healthy cells. The presence of helper T cells and collagen is indicated by the green and orange signals, respectively. KO mice (bottom two images) show higher levels of collagen reactive helper T cells in the joints of the legs, indicating immune activity against collagen, a fibrous protein, and showing that collagen-induced arthritis, a mouse autoimmune disorder, has occurred.

Image: 
Johns Hopkins Medicine

Like a parent of teenagers at a party, Mother Nature depends on chaperones to keep one of her charges, the immune system, in line so that it doesn't mistakenly attack normal cells, tissues and organs in our bodies. A recent study by Johns Hopkins Medicine researchers has demonstrated that in mice -- and probably humans as well -- one biological chaperone may play a key role in protection from such attacks, known as autoimmune responses, which are a hallmark of diseases such as multiple sclerosis, rheumatoid arthritis, systemic lupus erythematosus and type 1 diabetes.

The researchers detailed their study in a paper published on Feb. 18, 2020, in the journal PLOS Biology.

"Short protein fragments, known as peptides, that come from bacteria, viruses and other pathogens act as antigens to trigger our immune system to remove the invaders, a process that depends on other proteins acting and interacting in a specific sequence of events," says Scheherazade Sadegh-Nasseri, Ph.D., professor of pathology at the Johns Hopkins University School of Medicine and senior author of the paper. "In our mouse study, we have shown that a specific disruption in this regimen can redirect the immune system to turn against a healthy body -- something that we believe also is likely occurring in humans."

In their effort to identify this "chaperone disruption," the researchers relied on the fact that for a mammal's immune system to trigger a response, antigenic peptides must be exposed, or "presented," to immune cells known as T lymphocytes, or T cells. This is achieved when the protein fragments attach to a molecule called major histocompatibility complex II, or MHC II, that sits on the surface of a white blood cell known as an antigen presenting cell, or APC.

Immature T cells are biologically attracted to these presented antigens, which are called epitopes. If the T cell has a receptor on its surface with a shape that conforms to the antigen -- akin to fitting a key into a lock -- it latches on and triggers the T cell's maturation into what is called a helper T cell (also known as a CD4 T cell).

These cells then kick the immune response into high gear, helping to fight the internal war against foreign invaders by activating other immunity soldiers -- B cells, macrophages and "killer" T cells -- to secrete antibodies, digest and destroy microbes, and remove infected cells, respectively.

Once activated, the immune system remembers the antigen for a faster response to future attacks by the same infectious agent.

Two chaperone proteins in humans -- DO and DM -- work together to assist the presentation of antigens so that the immune system correctly determines that they are foreign and not normal, healthy components of the body. While previous research has provided a good understanding of DM's role in this process, the function of DO has remained unclear until now.

To better define DO's involvement in immunity and autoimmunity, Sadegh-Nasseri and her colleagues focused on H2-O, the chaperone protein in mice that is comparable to DO in humans.

"Based on our previous studies, we knew that DM and DO collaborate to ensure that the best-fitting antigenic epitope is selected to bind to MHC II, allowing for the most potent recognition by helper T cells," says doctoral candidate Robin Welsh, who is co-lead author of the PLOS Biology paper with colleague Nianbin Song, Ph.D. "However, the extent of DO's contribution to this collaboration -- and what would happen if it didn't function as intended -- was undetermined. So, we studied the mouse version of this process to get clues as to what might be happening in humans."

In their first experiment, the researchers extracted B cells from normal mice and "knockout" mice bred without the gene that produces the H2-O chaperone protein. From these cells, they isolated peptides from the mouse equivalent of MHC II, known as I-Ab molecules, and found that the peptides recovered from normal mice were stronger binders than those from the animals lacking H2-O.

"These results provide evidence that H2-O in mice, and likely DO in humans, may be helping select the stronger binding peptides -- the ones targeted as being from antigens -- for presentation, ensuring that the immune response is highly specific," Welsh explains.

"Furthermore, since the lack of H2-O means poor scrutiny in selecting the best fitting epitopes, this may adversely affect the 'learning process' immature T cells undergo in the thymus so that they can recognize which proteins are considered self," she adds.

Building on these findings, the researchers next looked to see whether the absence of H2-O would disrupt normal helper T cell function and cause an autoimmune reaction. To do this, they injected collagen, a fibrous protein normally responsible for wound repair, into their normal and knockout mice to sensitize the mouse immune systems to it. The researchers found that without H2-O, collagen was mistakenly presented as an antigen.

Using a fluorescent marker to detect helper T cells and broken-down collagen in the joints of the mice, the researchers found much higher amounts of both in the knockouts versus the normal mice. This was a sign of immune activity against the connective tissue protein -- and characteristic of collagen-induced arthritis, or CIA, a laboratory induced autoimmune disease in mice used to model rheumatoid arthritis in humans.

"This is a significant finding because rheumatoid arthritis in humans is thought to be caused by a similar mechanism where the synovial membrane in the joints -- which contains collagenous tissue -- is incorrectly attacked as foreign," says Sadegh-Nasseri.

Finally, the researchers used mice to see if the lack of H2-O could also be tied to experimental autoimmune encephalomyelitis, or EAE, a laboratory-induced autoimmune disorder in rodents that is similar to multiple sclerosis in humans.

Both normal and H2-O knockout mice were first immunized with myelin oligodendrocyte glycoprotein (MOG), a structural component of the myelin sheath that surrounds nerve cells, both protecting them and facilitating the transmission of electrical impulses between the brain and body. The researchers wanted MOG to be presented by the I-Ab molecules to determine how both types of immune systems, with and without the H2-O chaperone, would respond.

The researchers again used a fluorescent marker to detect an autoimmune response, but this time against the myelin sheath. Again, the knockout mice without H2-O showed more glowing traces of myelin removed from the nerves (a process known as demyelination) than in the normal mice. An examination of brain-infiltrating immune cells taken from the knockout mice revealed large numbers of helper T cells with a strong affinity for MOG. This suggests that the immune systems in these animals incorrectly see myelin as foreign and target nerve cells for attack.

By linking the absence of a key chaperone protein, H2-O, to two different experimental autoimmune disorders in mice, Sadegh-Nasseri, Welsh, Song and their colleagues say their findings point to a similar impact in humans if DO is not present to keep the immune system focused on true invaders.

"We know that DO evolved later than DM in warm-blooded mammals, so perhaps DO's chaperoning role was nature's solution for preventing autoimmune disorders," Sadegh-Nasseri says. "Better understanding of this role could lead to improved diagnostic techniques and therapies for such diseases."

Credit: 
Johns Hopkins Medicine

The retention effect of training

Even in times of skilled-labour shortages, some companies do not offer continuing education that would improve their employees' prospects on the labour market. Behind this restraint lies the employers' fear that employees who have undergone extensive training will use their improved opportunities to switch to other companies.

This fear seems to be unfounded, as Professor Thomas Zwick of Julius-Maximilians-Universität (JMU) Würzburg in Bavaria, Germany, and Dr. Daniel Dietz found out. "On average, training significantly increases employee loyalty to the company providing training by more than ten percentage points," says Zwick, who heads the JMU Chair of Human Resource Management and Organisation.

Higher company loyalty, rising productivity

In a publication in the International Journal of Human Resource Management, the economists show that training not only increases the productivity of employees, it also reduces the tendency to leave the company.

"Interestingly, the retention effect also occurs in the case of training content that would enable employees to take a wage-increasing career step outside the company," says the JMU professor. Even if the participants receive a certificate from an external training provider and can provide convincing evidence of their newly acquired skills, the retention effect remains positive.

Data basis of the analysis

For their study, Zwick and Dietz analysed the continuing education and career information of approximately 4,300 employees in 150 German companies over five consecutive years. All the companies studied were participants in the company panel of the Institute for Employment Research (IAB) in Nuremberg, Germany.

From this database, the researchers were able to ascertain, among other things, when employees had participated in continuing education, at which company they had worked during their training and whether they were still employed there in the following calendar year. Another key aspect of the study was whether the training content was also of interest to other employers, whether the training was certified and whether the certificates came from external providers.

Credit: 
University of Würzburg

Speeding-up quantum computing using giant atomic ions

An international team of researchers have found a new way to speed up quantum computing that could pave the way for huge leaps forward in computer processing power.

Scientists from the University of Nottingham and Stockholm University (SU) have sped up trapped ion quantum computing using a new experimental approach - trapped Rydberg ions; their results have just been published in Nature.

In conventional digital computers, logic gates are built from bits realised as silicon-based electronic devices. Information is encoded in the two classical states ("0" and "1") of a bit, which means the capacity of a classical computer grows only linearly with the number of bits. To deal with emerging scientific and industrial problems, ever larger computing facilities and supercomputers are built.

Quantum entanglement enhancing capacity

A quantum computer is operated using quantum gates, i.e. basic circuit operations on quantum bits (qubits) made of microscopic quantum particles, such as atoms and molecules. A fundamentally new mechanism in a quantum computer is the use of quantum entanglement, which can bind two or more qubits together such that their joint state can no longer be described by classical physics. The capacity of a quantum computer increases exponentially with the number of qubits, and efficient use of quantum entanglement drastically enhances its ability to tackle challenging problems in areas including cryptography, materials science and medicine.
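
To make that scaling contrast concrete, here is a minimal sketch in plain Python (an illustration of the counting argument above, not code from the study): a classical n-bit register occupies exactly one of its 2^n states at any moment, whereas fully describing a general n-qubit state requires tracking all 2^n complex amplitudes at once.

```python
# Classical register: n bits hold ONE of 2**n possible states at a time.
# Quantum register: a general n-qubit state is a superposition described
# by 2**n complex amplitudes, so its description grows exponentially.
for n in (1, 10, 20, 50):
    print(f"{n:2d} (qu)bits -> 2^{n} = {2**n:,} basis states / amplitudes")
```

Fifty qubits already demand more than 10^15 amplitudes, which is why efficiently exploited entanglement puts problems within reach that defeat classical machines.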

Among the different physical systems that can be used to make a quantum computer, trapped ions have led the field for years. The main obstacle towards a large-scale trapped ion quantum computer is the slow-down of computing operations as the system is scaled up. This new research may have found the answer to this problem.

The experimental work was conducted by the group of Markus Hennrich at SU using giant Rydberg ions, 100,000,000 times larger than normal atoms or ions. These huge ions are highly interactive and exchange quantum information in less than a microsecond. The interaction between them creates quantum entanglement. Chi Zhang from Stockholm University and colleagues used the entangling interaction to carry out a quantum computing operation (an entangling gate) around 100 times faster than is typical in trapped ion systems.

Chi Zhang explains, "Usually quantum gates slow down in bigger systems. This isn't the case for our quantum gate and Rydberg ion gates in general! Our gate might allow quantum computers to be scaled up to sizes where they are truly useful!"

Theoretical calculations supporting the experiment and investigating error sources have been conducted by Weibin Li (University of Nottingham, UK) and Igor Lesanovsky (University of Nottingham, UK, and University of Tübingen, Germany). Their theoretical work confirmed that there is indeed no slowdown expected once the ion crystals become larger, highlighting the prospect of a scalable quantum computer.

Weibin Li, Assistant Professor, School of Physics and Astronomy at the University of Nottingham adds: "Our theoretical analysis shows that a trapped Rydberg ion quantum computer is not only fast, but also scalable, making large-scale quantum computation possible without worrying about environmental noise. The joint theoretical and experimental work demonstrate that quantum computation based on trapped Rydberg ions opens a new route to implement fast quantum gates and at the same time might overcome many obstacles found in other systems."

Currently the team is working to entangle larger numbers of ions and achieve even faster quantum computing operations.

Credit: 
University of Nottingham

Network pharmacology analysis on Zhichan powder in the treatment of Parkinson's disease

Parkinson's disease (PD) is a debilitating neurodegenerative disorder characterized by the degeneration, and subsequent loss of activity, of the body's motor nerve system. The disease is incurable, and patients also tend to exhibit non-motor disabilities as the disease progresses. Current treatment modalities often focus on improving the symptoms of PD after its onset.

Zhichan is a Chinese herbal medicine that has been used to treat patients suffering from PD. Its beneficial effects have been attributed to its documented ability to regulate the expression of monoamine oxidase B and tyrosine hydroxylase in the substantia nigra of PD model rats. Dr. Jiajun Chen, at the Department of Neurology, China-Japan Union Hospital of Jilin University, Jilin, is at the forefront of research on Zhichan. Dr. Chen's team recently conducted a systematic analysis of the effects of Zhichan powder for PD treatment. To achieve this, the team used the Traditional Chinese Medicine Systems Pharmacology database to screen for active compounds against PD, and established a medicine-target-disease network model with computational network pharmacology.

"We identified 18 major active components in Zhichan powder through the screening method," says Dr. Chen. He believes that this is strong evidence for a connection between chemical components of this Traditional Chinese Medicine and Parkinson's disease-related targets.

The medicine-target-disease system of Zhichan powder established by the network pharmacology method permitted the researchers to visualize clusters and differences among chemical components in this specific herb, as well as the complex mechanism of molecular activities among those effective components, relevant targets, pathways, and PD. "Our results provide a new perspective and method for revealing the mechanism of action of Traditional Chinese Medicine prescriptions," Dr. Chen notes.
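
As a rough illustration of the kind of data structure behind such an analysis, the sketch below builds a toy medicine-target-disease graph in Python with networkx. The compound names are hypothetical placeholders, and only monoamine oxidase B (MAO-B) and tyrosine hydroxylase are targets actually mentioned above; this is not the study's own pipeline.

```python
import networkx as nx

# Toy medicine-target-disease network: compounds connect to protein
# targets, and targets connect to the disease. Compound names are
# invented placeholders, not components identified in the study.
G = nx.Graph()
targets = ["MAO-B", "tyrosine hydroxylase"]
for compound in ("compound_A", "compound_B"):
    for target in targets:
        G.add_edge(compound, target)            # compound modulates target
for target in targets:
    G.add_edge(target, "Parkinson's disease")   # target implicated in PD

# High-degree nodes hint at where a multi-component formula's
# activity converges on disease-relevant biology.
print(sorted(G.degree, key=lambda node_deg: -node_deg[1]))
```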

Credit: 
Bentham Science Publishers

Milk pioneers: East African herders consumed milk 5,000 years ago

image: A modern-day Kenyan collects fresh cow's milk in a gourd.

Image: 
Oliver Rudd

When you pour a bowl of cereal, you probably aren't considering how humans came to enjoy milk in the first place. But animal milk was essential to east African herders at least 5,000 years ago, according to a new study that uncovers consumption habits in what is now Kenya and Tanzania -- and sheds light on human evolution.

Katherine M. Grillo, assistant professor of anthropology at the University of Florida and a 2012 PhD graduate of Washington University in St. Louis, teamed up with researchers, including Washington University's Fiona Marshall, the James W. and Jean L. Davis Professor in Arts & Sciences, for the study published this week in the Proceedings of the National Academy of Sciences. Julie Dunne at the University of Bristol in the United Kingdom is co-first author on the paper with Grillo.

After excavating pottery at sites throughout east Africa, team members analyzed organic lipid residues left in the pottery and were able to see evidence of milk, meat and plant processing.

"(This is) the first direct evidence we've ever had for milk or plant processing by ancient pastoralist societies in eastern Africa," Grillo said.

"The milk traces in ancient pots confirms the story that bones have been telling us about how pastoralists lived in eastern Africa 5,000 to 3,000 years ago -- an area still famous for cattle herding and the historic way of life of people such as Maasai and Turkana," Marshall said.

Marshall continued: "Most people don't think about the fact that we are not really designed to drink milk as adults -- most mammals can't. People who had mutations that allowed them to digest fresh milk survived better, we think, among herders in Africa. But there's a lot we don't know about how, where and when this happened.

"It's important because we still rely on our genetics to be able to drink fresh cow's milk once we are adults."

This research shows, for the first time, that herders who specialized in cattle -- as opposed to hunting the abundant wildlife of the Mara Serengeti -- were certainly drinking milk.

"One of the reasons pastoralism has been so successful around the world is that humans have developed lactase persistence -- the ability to digest milk due to the presence of specific alleles," Grillo said.

Notably, in east Africa there are distinctive genetic bases for lactase persistence that are different from other parts of the world. Geneticists believed that this ability to digest milk in northeast Africa evolved around 5,000 years ago, but archaeologists knew little about the archaeological contexts in which that evolution took place.

The development of pastoralism in Africa is unique as well, where herding societies developed in areas that often can't support agriculture.

Credit: 
Washington University in St. Louis

Where did the antimatter go? Neutrinos shed promising new light

image: Detection of an electron neutrino (on the left) and an electron antineutrino (on the right) in the Super-Kamiokande. When an electron neutrino or antineutrino interacts with water, an electron or a positron is produced. They emit a faint ring of light (called Cherenkov light) that is detected by almost 13,000 photodetectors. The colour on the figures shows how photons are detected over time.

Image: 
© T2K Collaboration

We live in a world of matter - because matter prevailed over antimatter, even though both were created in equal amounts by the Big Bang when our universe began. As featured on the cover of Nature on 16 April 2020, neutrinos and their antimatter counterparts, antineutrinos, are reported to very likely behave differently, a difference that offers a promising path to explaining the asymmetry between matter and antimatter. These observations may explain the mysterious disappearance of antimatter. They come from the T2K experiment conducted in Japan, in which three French laboratories are involved, affiliated with the CNRS, École Polytechnique - Institut Polytechnique de Paris, Sorbonne Université and the CEA.

Physicists have long been convinced, from their experiments, that matter and antimatter were created in equal quantities at the beginning of the universe. When they interact, matter and antimatter particles destroy each other, which should have left the universe empty, containing only energy. But as we can see from looking around us, matter won out over antimatter. To explain this imbalance, physicists look for asymmetry in how matter and antimatter particles behave, asymmetry that they call violation of the Charge-Parity (CP) symmetry (1).

For decades, scientists have detected symmetry violations between quarks (components of atoms) and their antiparticles. However, this violation is not large enough to explain the disappearance of antimatter in the universe. Another path looks promising: asymmetry between the behaviour of neutrinos and antineutrinos could fill in a large part of the missing answer. This is what the T2K (2) experiment is researching. It is located in Japan; its French collaborators are the Leprince-Ringuet Laboratory (CNRS/École Polytechnique - Institut Polytechnique de Paris), the Laboratoire de Physique Nucléaire et des Hautes Énergies (CNRS/Sorbonne Université) and the CEA's Institut de Recherche sur les Lois Fondamentales de l'Univers.

Neutrinos are extremely light elementary particles. They pass through materials, are very difficult to detect, and are even harder to study precisely. Three kinds of neutrinos - or flavours - exist: the electron, muon and tau neutrinos. The behaviour that could differ for neutrinos and antineutrinos is oscillation, the capacity of these particles to change flavour as they propagate (3). The T2K experiment uses alternating beams of muon neutrinos and muon antineutrinos, produced by a particle accelerator at the J-PARC research centre, on Japan's east coast. Towards its west coast, a small fraction of the neutrino (or antineutrino) beam sent by J-PARC is detected using the light pattern that it leaves in the 50,000 tonnes of water in the Super-Kamiokande detector, set up 1,000 metres deep in a former mine. During their 295 km journey through rock (taking a fraction of a second at the speed of light), some of the muon neutrinos (or antineutrinos) oscillated and took on another flavour, becoming electron neutrinos.
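
For readers who want the arithmetic behind "oscillation": in the textbook two-flavour approximation (a simplification of the full three-flavour analysis that T2K actually performs, and not taken from this article), the probability of a flavour change after travelling a distance $L$ with energy $E$ is

$$P(\nu_\mu \to \nu_e) \simeq \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV^2}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),$$

where $\theta$ is a mixing angle and $\Delta m^2$ is the difference of the squared neutrino masses. A violation of CP symmetry would show up as this probability taking different values for neutrinos and antineutrinos.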

By counting the number of particles that reached Super-Kamiokande with a different flavour than the one they were produced with at J-PARC, the T2K collaboration has shown that neutrinos seem to oscillate more often than antineutrinos. The data even point to almost maximal asymmetry between how neutrinos and antineutrinos behave.

These results, the fruit of ten years of data accumulated in the Super-Kamiokande with a total of 90 electron neutrinos and 15 electron antineutrinos detected, do not yet reach the statistical significance required to claim a discovery; however, they are a strong indication and an important step. The T2K experiment will now continue with higher sensitivity. A new generation of experiments should multiply data production in the coming years: Hyper-K, the successor to the Super-Kamiokande in Japan, whose construction has just begun, and DUNE, being built in the USA, ought to be operational around 2027-2028. If their new data confirm the preliminary results from T2K, ten years from now neutrinos could provide the answer to why antimatter disappeared from our universe.

Credit: 
CNRS

New textile could keep you cool in the heat, warm in the cold

image: A microstructured fiber (left) contains pores (right) that can be filled with a phase-changing material that absorbs and releases thermal energy.

Image: 
ACS Applied Materials & Interfaces 2020, DOI: 10.1021/acsami.0c02300

Imagine a single garment that could adapt to changing weather conditions, keeping its wearer cool in the heat of midday but warm when an evening storm blows in. In addition to wearing it outdoors, such clothing could also be worn indoors, drastically reducing the need for air conditioning or heat. Now, researchers reporting in ACS Applied Materials & Interfaces have made a strong, comfortable fabric that heats and cools skin, with no energy input.

"Smart textiles" that can warm or cool the wearer are nothing new, but typically, the same fabric cannot perform both functions. These textiles have other drawbacks, as well -- they can be bulky, heavy, fragile and expensive. Many need an external power source. Guangming Tao and colleagues wanted to develop a more practical textile for personal thermal management that could overcome all of these limitations.

The researchers freeze-spun silk and chitosan, a material from the hard outer skeleton of shellfish, into colored fibers with porous microstructures. They filled the pores with polyethylene glycol (PEG), a phase-changing polymer that absorbs and releases thermal energy. Then, they coated the threads with polydimethylsiloxane to keep the liquid PEG from leaking out. The resulting fibers were strong, flexible and water-repellent.

To test the fibers, the researchers wove them into a patch of fabric that they put into a polyester glove. When a person wearing the glove placed their hand in a hot chamber (122 °F, or 50 °C), the solid PEG absorbed heat from the environment, melting into a liquid and cooling the skin under the patch. Then, when the gloved hand moved to a cold (50 °F, or 10 °C) chamber, the PEG solidified, releasing heat and warming the skin. The process for making the fabric is compatible with the existing textile industry and could be scaled up for mass production, the researchers say.

Credit: 
American Chemical Society

Nature: Don't count on mature forests to soak up carbon dioxide emissions

image: Western Sydney University

Image: 
Western Sydney University

Globally, forests act as a large carbon sink, absorbing a substantial portion of anthropogenic CO2 emissions. Whether mature forests will remain carbon sinks into the future is of critical importance for aspirations to limit climate warming to no more than 1.5 °C above pre-industrial levels. Researchers at Western Sydney University's EucFACE (Eucalyptus Free Air CO2 Enrichment, see the photo) experiment have found new evidence of limitations in the capacity of mature forests to translate rising atmospheric CO2 concentrations into additional plant growth and carbon storage. The unique experiment was carried out in collaboration with many scientists around the world. The head of the Centre of Excellence EcolChange, Professor Ülo Niinemets, and senior researcher Astrid Kännaste, both of the Estonian University of Life Sciences, contributed to the data collection and data analysis of this study.

Carbon dioxide (CO2) is sometimes described as "food for plants" as it is the key ingredient in plant photosynthesis. Experiments in which single trees and young, rapidly growing forests have been exposed to elevated CO2 concentrations have shown that plants use the extra carbon acquired through photosynthesis to grow faster.

However, scientists have long wondered whether mature native forests would be able to take advantage of the extra photosynthesis, given that the trees also need nutrients from the soil to grow. This question is particularly relevant for Australia. In the first experiment of its kind applied to a mature native forest, Western Sydney University researchers exposed a 90-year-old eucalypt woodland to elevated CO2 levels. "Just as we expected, the trees took in about 12% more carbon under the enriched CO2 conditions," said Distinguished Professor Belinda Medlyn. "However, the trees did not grow any faster, prompting the question 'where did the carbon go?'".

The researchers combined their measurements into a carbon budget that accounts for all the pathways of carbon into and out of the EucFACE forest ecosystem, through the trees, grasses, insects, soils and leaf litter. This carbon-tracking analysis showed that the extra carbon absorbed by the trees was quickly cycled through the soil and returned to the atmosphere, with around half the carbon being returned by the trees themselves, and half by fungi and bacteria in the soil. "The trees convert the absorbed carbon into sugars, but they can't use those sugars to grow more, because they don't have access to additional nutrients from the soil. Instead, they send the sugars below-ground where they 'feed' soil microbes", explained Professor Medlyn.
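
The budget logic can be reduced to a back-of-the-envelope sum. In the sketch below, the 12% uptake figure and the roughly half-and-half split come from the description above, but the bookkeeping itself is illustrative, not the EucFACE dataset:

```python
# Illustrative carbon bookkeeping under elevated CO2. The 50/50 split of
# the returned carbon follows the article; these are not measured fluxes.
extra_uptake           = 12.0                 # % more carbon fixed
returned_by_trees      = 0.5 * extra_uptake   # respired/exuded by the trees
returned_by_soil_biota = 0.5 * extra_uptake   # respired by fungi/bacteria

extra_storage = extra_uptake - returned_by_trees - returned_by_soil_biota
print(f"Extra carbon retained by the ecosystem: {extra_storage:.1f}%")  # 0.0%
```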

These findings have global implications: models used to project future climate change, and impacts of climate change on plants and ecosystems, currently assume that mature forests will continue to absorb carbon over and above their current levels, acting as carbon sinks. Professor Niinemets said: "What did we find? Increased uptake by the forest in elevated CO2, but not increased retention of this extra C. Instead, the extra C that was taken up was released back to the atmosphere. The future emissions could mean worse outcomes than we thought in terms of future climate, given this lack of response by nutrient-limited mature forests."

Credit: 
Estonian Research Council

Research finds teachers just as likely to have racial bias as non-teachers

WASHINGTON, D.C., April 15, 2020--Research released today challenges the notion that teachers might be uniquely equipped to instill positive racial attitudes in children or bring about racial justice, without additional support or training from schools. Instead, the results, published in Educational Researcher (ER), find that "teachers are people too," holding almost as much pro-White racial bias as non-teachers of the same race, level of education, age, gender, and political affiliation. ER is a peer-reviewed journal of the American Educational Research Association.

In their ER article, researchers Jordan Starck (Princeton University), Travis Riddle (Princeton University), Stacey Sinclair (Princeton University), and Natasha Warikoo (Tufts University) analyze data from two studies measuring the explicit and implicit biases of American adults by occupation. The results are the first that the authors are aware of that use national data to compare teachers' and non-teachers' levels of implicit, or unconscious, racial bias.

"Well-intentioned teachers may be subject to biases they are not entirely conscious of, potentially limiting their capacity to facilitate racial equity," said Warikoo, a sociology professor at Tufts University. "If we expect schools to promote racial equity, teachers need support and training to either shift or mitigate the effects of their own racial biases."

The article drew from two complementary national datasets: Project Implicit, which is a large, non-representative sample, and the 2008 wave of the American National Election Studies (ANES), which is a smaller but nationally representative sample. From Project Implicit, the authors used data on 1.6 million respondents, including 68,930 who self-identified as K-12 instructors, from 2006 to 2017. The ANES dataset used by the authors included a total sample of 1,984 respondents, including 63 preK-12 teachers.

Examining the first dataset, Warikoo and her coauthors analyzed data from a Black-White Implicit Association Test used to evaluate people's implicit bias. The test measures how quickly and accurately respondents pair White faces with "good" words and Black faces with "bad" words in comparison to the inverse. The test scores reflect respondents' pro-White/anti-Black or pro-Black/anti-White biases. The authors' findings from this dataset indicated that preK-12 teachers and other adults with similar characteristics both exhibited a significant amount of pro-White/anti-Black implicit bias. Seventy-seven percent of teachers demonstrated implicit bias, compared to 77.1 percent of non-teachers.

To measure explicit bias, the authors subtracted participants' reported warmth toward Black people from their reported warmth toward White people. The results showed that 30.3 percent of the teachers had explicit bias, compared to 30.4 percent of the non-teachers.
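
In other words, the explicit measure is a simple difference score. A hypothetical illustration in Python (the 0-100 rating scale and the numbers are assumptions for the example, not the study's data):

```python
# Hypothetical warmth ratings on an assumed 0-100 "feeling thermometer".
warmth_toward_white_people = 72
warmth_toward_black_people = 65

explicit_bias = warmth_toward_white_people - warmth_toward_black_people
# In this scheme, a positive difference counts as pro-White explicit bias.
print(explicit_bias, "-> pro-White explicit bias" if explicit_bias > 0 else "")
```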

To validate the findings from the first study in a nationally representative sample, the authors analyzed a second dataset, from a survey in which adults throughout the U.S., both teachers and non-teachers, were asked to judge Chinese characters as "pleasant" or "unpleasant" after being shown pictures of a Black or White young adult face. As in the first study, authors found no significant association between occupation and level of bias: teachers held the same levels of implicit and explicit bias as non-teachers.

"Overall, our findings suggest that schools are best understood as microcosms of society rather than as antidotes to inequality," said Warikoo. "Teachers are people too. Like all of us, they need support in combating their biases. We shouldn't assume that good intentions and care for all students make a teacher bias free."

The authors noted that various field-based interventions by other scholars suggest that strategies that encourage teachers to pause and reconsider their decisions in critical moments can reduce racial disparities. For example, one intervention that provides a 45-minute training session in a variety of prejudice-reduction techniques, such as imagining stereotype-challenging examples, can reduce implicit bias levels over two months.

"Several of the relatively small interventions that have been conducted have shown promise," said Sinclair, a psychology professor at Princeton University. "It would be helpful to test them at a larger scale, with an eye toward how such strategies work in different situations and with school personnel subject to different types and levels of bias."

Credit: 
American Educational Research Association

Researchers create tools to help volunteers do the most good after a disaster

In the wake of a disaster, many people want to help. Researchers from North Carolina State University and the University of Alabama have developed tools to help emergency response and relief managers coordinate volunteer efforts in order to do the most good.

"Assigning volunteers after a disaster can be difficult, because you don't know how many volunteers are coming or when they will arrive," says Maria Mayorga, corresponding author of two studies on the issue and a professor in NC State's Edward P. Fitts Department of Industrial and Systems Engineering.

"In addition, the challenge can be complicated for efforts, such as food distribution, where you also don't know the amount of supplies you will have to distribute or how many people will need assistance."

The researchers used advanced computational models to address these areas of uncertainty in order to develop guidelines, or rules of thumb, that emergency relief managers can use to help volunteers make the biggest difference.

The most recent paper focuses on assigning volunteers to deal with tasks where the amount of work that needs to be done can change over time, such as search and rescue, needs assessment and distribution of relief supplies.

"Essentially, we developed a model that can be used to determine the optimal assignment of volunteers to tasks when you don't know how much work will be required," Mayorga says. "For example, in relief distribution, there is uncertainty in both the supply of relief items and what the demand will be from disaster survivors.

"We then used the model to create and test rules of thumb that can be applied even when relief managers don't have access to computers or the internet."

The researchers found that a simple policy that performs well is the "Largest Weighted Demand (LWD) policy," which assigns volunteers to the task that has the most work left to be done. In this case, work is prioritized by its importance. For example, fulfilling demand for water is more important than fulfilling demand for cleaning supplies.

However, if the difference in importance between tasks becomes large enough, then the best option is for managers to assign volunteers based on "Largest Queue Clearing Time (LQCT)," which is the time needed to complete the current work if the current number of volunteers is unchanged.

"In fact, the LQCT heuristic worked well in all of the instances we tested, but it is harder to assess quickly," Mayorga says. "So we recommend that managers use the LWD rule unless there is a really large difference in the importance of the tasks."

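A minimal sketch of how the two rules of thumb could be computed, assuming a simple task representation; the task names and all numbers below are invented for illustration and are not from the papers:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    importance: float      # weight reflecting how critical the task is
    work_remaining: float  # e.g. person-hours of work still to be done
    volunteers: int        # volunteers currently assigned to the task

def largest_weighted_demand(tasks):
    """LWD: pick the task with the most importance-weighted work left."""
    return max(tasks, key=lambda t: t.importance * t.work_remaining)

def largest_queue_clearing_time(tasks):
    """LQCT: pick the task that would take longest to finish with its
    current volunteers (an unstaffed task would take 'forever')."""
    return max(tasks, key=lambda t: t.work_remaining / t.volunteers
               if t.volunteers else float("inf"))

# Invented example situation:
tasks = [
    Task("water distribution", importance=3.0, work_remaining=40, volunteers=8),
    Task("debris removal",     importance=1.0, work_remaining=90, volunteers=2),
    Task("needs assessment",   importance=2.0, work_remaining=30, volunteers=5),
]

print("LWD sends the next volunteer to: ", largest_weighted_demand(tasks).name)
print("LQCT sends the next volunteer to:", largest_queue_clearing_time(tasks).name)
```

With these numbers the two rules disagree (LWD favours water distribution, LQCT favours the badly understaffed debris removal), which mirrors the finding that the better rule depends on how unequal the task importances are.
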
However, the LWD and LQCT rules of thumb don't work for all tasks.

In fact, the researchers found that the rules of thumb that make sense for volunteer tasks where you don't know how much work will be required are actually a bad fit for tasks with clearly defined workloads - such as clearing debris after a disaster.

In a 2017 paper, the researchers found that a good rule of thumb for clearing debris was "Fewest Volunteers," in which volunteers are simply assigned to whichever task has the fewest volunteers working on it.

"Spontaneous volunteers are people who, in the wake of a disaster, impulsively contribute to response and recovery efforts without affiliations to recognized volunteer organizations (e.g. the Red Cross) or other typical first responders," Mayorga says. "These people constitute a labor source that is both invaluable and hard to manage.

"Our work in these papers provides strategies for incorporating spontaneous volunteers into organized relief efforts to help us achieve safe and responsive disaster management. It's also worth noting that these works focused on a single organization assigning volunteers to tasks. In our future work, we are focusing on strategies that can be used by multiple agencies to coordinate efforts and amplify the volunteer response."

Credit: 
North Carolina State University

Why didn't the universe annihilate itself? Neutrinos may hold the answer

Alysia Marino and Eric Zimmerman, physicists at CU Boulder, have been on the hunt for neutrinos for the last two decades.

That's no easy feat: Neutrinos are among the most elusive subatomic particles known to science. They don't have a charge and are so lightweight--each one has a mass many times smaller than the electron--that they interact only on rare occasions with the world around them.

They may also hold the key to some of physics' deepest mysteries.

In a study published today in the journal Nature, Marino, Zimmerman and more than 400 other researchers on an experiment called T2K come closer to answering one of the big ones: Why didn't the universe annihilate itself in a humungous burst of energy not long after the Big Bang?

The new research suggests that the answer comes down to a subtle discrepancy in the way that neutrinos and their evil twins, the antineutrinos, behave--one of the first indications that phenomena called matter and antimatter may not be the exact mirror images many scientists believed.

The group's findings showcase what scientists can learn by studying these unassuming particles, said Zimmerman, a professor in the Department of Physics.

"Even 20 years ago, the field of neutrino physics was much smaller than it is today," he said.

Marino, an associate professor of physics, agreed. "There's still a lot we're trying to understand about how neutrinos interact," she said.

Big Bang

Neutrinos, which weren't directly detected until the 1950s, are often produced deep within stars and are among the most common particles in the universe. Every second, trillions of them pass through your body, although few, if any, will interact with a single one of your atoms.

To understand why this cosmic dandelion fluff is important, it helps to go back to the beginning--the very beginning.

Based on their calculations, physicists believe that the Big Bang must have created a huge amount of matter alongside an equal quantity of antimatter. These particles behave exactly like, but have opposite charges from, the protons, electrons and all the other matter that makes up everything you can see around you.

There's just one problem with that theory: Matter and antimatter obliterate each other on contact.

"Our universe today is dominated by matter and not antimatter," Marino said. "So there had to be some process in physics that distinguished matter from antimatter and could have given rise to a small excess of protons or electrons over their antiparticles."

Over time, that small excess became a big excess until there was virtually no antimatter left in the cosmos. According to one popular theory, neutrinos underlie that discrepancy.

Zimmerman explained that these subatomic particles come in three different types, which scientists call "flavors," with unique interactions. They are the muon neutrino, electron neutrino and tau neutrino. You can think of them as the physicist's Neapolitan ice cream.

These flavors, however, don't stay put. They oscillate. If you give them enough time, for example, the odds that a muon neutrino will stay a muon neutrino can shift. Imagine opening your freezer and not knowing whether the vanilla ice cream you left behind will now be chocolate or strawberry, instead.
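
Those shifting odds follow a simple formula. As a rough numerical sketch, here is the muon-neutrino survival probability in the standard two-flavour approximation, with typical textbook parameter values; the numbers are assumptions for illustration, not T2K's fitted results:

```python
import math

def p_mu_survival(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.5e-3):
    """Two-flavour approximation:
    P(nu_mu -> nu_mu) = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# T2K's ~295 km baseline and ~0.6 GeV beam sit near the first oscillation
# maximum, where almost every muon neutrino changes flavour in transit.
print(f"{p_mu_survival(295, 0.6):.4f}")  # ~0.0001
```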

But is the same true for antineutrinos? Proponents of the theory of "leptogenesis" argue that if there were even a small difference in how these mirror images behave, it could go a long way toward explaining the imbalance in the universe.

"The next big step in neutrino physics is to understand whether neutrino oscillations happen at the same rate as antineutrino oscillations," Zimmerman said.

Traveling Japan

That, however, means observing neutrinos up close.

The T2K, or Tokai to Kamioka, Experiment goes to extreme lengths to do just that. In this effort, scientists use a particle accelerator to shoot beams made up of neutrinos from a research site in Tokai, Japan, to detectors in Kamioka--a distance of more than 180 miles or the entire width of Japan's largest island, Honshu.

Zimmerman and Marino have both participated in the collaboration since the 2000s. For the last nine years, the duo and their colleagues from around the world have traded off studying beams of muon neutrinos and muon antineutrinos.

In their most recent study, the researchers hit pay dirt: These bits of matter and antimatter seem to behave differently. Muon neutrinos, Zimmerman said, are more inclined to oscillate into electron neutrinos than their antineutrino counterparts.

The results come with major caveats. The team's findings are still quite a bit shy of the physics community's gold standard for a discovery, a measure of statistical significance called "five-sigma." The T2K collaboration is already upgrading the experiment so that it can collect more data and faster to reach that mark.

But, Marino said, the results provide one of the most tantalizing hints to date that some kinds of matter and antimatter may act differently--and not by a trivial amount.

"To explain the T2K results, the difference needs to be almost the largest amount that you could possibly get" based on theory, she said.

Marino sees the study as one window to the fascinating world of neutrinos. There are many more pressing questions around these particles, too: How much, for example, does each flavor of neutrino weigh? Are neutrinos, in a really weird twist, actually their own antiparticles? She and Zimmerman are taking part in a second collaboration, an upcoming effort called the Deep Underground Neutrino Experiment (DUNE), that will aid the upgraded T2K in finding those answers.

"There are still things we're figuring out because neutrinos are so hard to produce in a lab and require such complicated detectors," Marino said. "There's still room for more surprises."

Credit: 
University of Colorado at Boulder

Study reveals how 'hypermutated' malignant brain tumors escape chemotherapy and immunotherapy

image: Keith Ligon, MD, PhD

Image: 
Dana-Farber Cancer Institute

Cancers whose cells are riddled with large numbers of DNA mutations often respond favorably to drugs called checkpoint blockers that unleash the immune system against the tumor. But a new study shows that malignant brain tumors known as gliomas generally don't respond to the immunotherapy drugs even when the tumor cells are "hypermutated" - having thousands of DNA mutations that, in other kinds of cancer, provoke the immune system into an attack mode.

An analysis of more than 10,000 gliomas and clinical outcomes, reported in Nature by scientists in Boston and Paris, found that glioma patients whose tumors were hypermutated had no significant benefit when treated with checkpoint blockers. This finding was somewhat unexpected, because immune checkpoint blockers have often been shown to be effective in other types of cancer - including melanoma, colorectal, and endometrial cancers - if their cells have defective DNA damage repair mechanisms and are hypermutated. The results of the study are further evidence of the challenges presented by malignant brain tumors, which are initially treated by surgery but are difficult to remove in their entirety, necessitating systemic treatment with radiation and chemotherapy.

Immunotherapy, which has become an important tool in cancer treatment in recent years, has yet to show much benefit in brain tumors. "It seems that in gliomas you can have hundreds or thousands of DNA mutations and the immune system is still suppressed and ultimately unable to recognize cancer cells as being abnormal," said Mehdi Touat, MD, of Sorbonne University, a neuro-oncologist at the Pitié-Salpêtrière Hospital in Paris and co-first author of the report. Keith Ligon, MD, PhD, of Dana-Farber Cancer Institute, Brigham and Women's Hospital, and the Broad Institute of MIT and Harvard, is co-senior author of the study.

The results also suggest that the quality of mutations, not just their absolute number, may be important when predicting who will benefit from immune checkpoint treatments. "Our data indicates that the absence of an immune response in gliomas likely results from several complex aspects of immunosuppression in the brain which will need further characterization," wrote the authors. "Approaches that increase the tumor microenvironment infiltration by cytotoxic lymphocytes are likely required to improve immunotherapy response in gliomas."

Ligon noted that the study also showed how treatment with the drug temozolomide - the standard chemotherapy for gliomas - can lead to the tumors becoming hypermutated and resistant to further treatment. Temozolomide (Temodar) does benefit patients, but in some it also seems to cause the emergence of hypermutated cells that can resist the drug; these surviving glioma cells then cause the tumors to progress, said Ligon. The researchers said the results do not suggest that temozolomide should not be used in glioma patients, but that once resistance develops, further treatment with temozolomide would not be effective. Instead, they showed that treatment with another chemotherapy drug called lomustine (CeeNU) seemed to still be effective in that setting for some patients.

"We've demonstrated that the longer people took temozolomide the more likely it was that their tumors became hypermutated," said David Reardon, MD, clinical director of Dana-Farber's Center for Neuro-Oncology and an author on the Nature report. He said the finding that treatment with lomustine was not associated with the hypermutated resistant state is a glimmer of good news. "If a patient becomes resistant to temozolomide, there is not much to offer them, but this suggests that some of these patients might get benefit from lomustine. Our data suggests we could try this."

Credit: 
Dana-Farber Cancer Institute

Miller School researchers alert otolaryngologists about high COVID-19 transmission risk

image: Xuezhong Liu, M.D., Ph.D., the Marian and Walter Hotchkiss Endowed Chair in Otolaryngology and vice chairman of otolaryngology at University of Miami Miller School of Medicine.

Image: 
University of Miami Health System

A harsh reality has emerged as COVID-19 has spread around the globe. Several thousand doctors, nurses and others caring for COVID-19 patients are dying from the virus. To alert providers in otolaryngology, one of the hardest-hit medical specialties, about the high risk of transmission and how to avoid it, University of Miami Miller School of Medicine researchers studied data from China.

The results will be published in the journal Otolaryngology-Head and Neck Surgery and are also available on the American Academy of Otolaryngology-Head and Neck Surgery website.

Otolaryngologists routinely come into direct contact with patients who have upper respiratory issues. Otolaryngologists are also on the frontlines at hospitals during the pandemic, performing such procedures as tracheotomies, during which they surgically create a hole in the windpipe to help patients breathe, according to Xuezhong Liu, M.D., Ph.D., the Marian and Walter Hotchkiss Endowed Chair in Otolaryngology and vice chairman of otolaryngology at University of Miami Miller School of Medicine.

Regardless of whether otolaryngologists practice in the hospital or community, the nature of the specialty puts them at high risk for COVID-19 infection. In fact, they might not realize they're encountering a positive patient because COVID-19 symptoms often mimic those they see routinely with other conditions.

"Recent evidence suggests that more than half of COVID-19 patients don't have a fever early in the course of the disease. They might have mild or no symptoms but can easily spread COVID-19. Otolaryngologists and other specialists who see patients for things such as a runny nose, loss of taste or smell, or a minor sore throat or cough, might not realize the patient before them has COVID-19," Dr. Liu said.

To alert otolaryngologists about the high risk of transmission from even mild and asymptomatic patients, and what to do to prevent transmission, Dr. Liu and colleagues looked at data from China. Three of the paper's coauthors are former clinical fellows at the Miller School: Dr. Qi Yao, who works at an academic center in Wuhan; Dr. Di Zhang, from the otolaryngology department of a hospital in Shenzhen; and Dr. Yilai Shu, from an academic otolaryngology department in Shanghai.

The researchers studied 20 hospitalized COVID-19 patients from ENT departments at four Chinese hospitals during the pandemic. They found ENTs performed six tracheotomies. Six patients underwent procedures to control nose bleeding and seven were treated for routine ENT complaints, such as sore throat, nasal congestion, and loss of the ability to smell.

Despite coming into close contact and performing procedures on hospitalized COVID-19 patients, none of the ENT health care workers got the virus. All implemented appropriate protection strategies, whether in the hospital or outpatient setting.

The message to otolaryngology providers is to suspect COVID-19 in all patient encounters and to take necessary precautions with personal protective equipment, including at the very least N95 masks and face shields. Data from China suggest providers who protect themselves are far less likely to contract the virus, according to Dr. Liu.

Other protective strategies used in China include pre-appointment screening, triaging, restriction of non-urgent visits and surgeries, and telemedicine.

The findings emphasize the need for hospitals and outpatient clinics to provide the appropriate personal protective equipment (PPE) for healthcare workers, based on their job, work area and degree of exposure risk, according to the paper.

"PPE is the most obvious aspect of infection control," the authors wrote.

In China, ENT health care workers had third-level protection -- the highest -- when performing invasive procedures, such as a tracheotomy, in the hospital. They used second-level protection measures for more routine evaluations, treatments and throat swabs. Second-level protection includes protective masks, face shields, protective clothing, gloves and more.

It's also important for ENT providers in the community to realize that they can easily catch the virus from people who have no fever and common mild symptoms or no symptoms. Relatively routine symptoms, like loss of taste and smell, are early warning signs of COVID-19 infection, according to Dr. Liu.

"We can avoid infection even in our at-risk specialty if we take the proper precautions," he said.

Other high-risk specialties, he noted, include emergency medicine, anesthesiology and ophthalmology.

Credit: 
University of Miami Miller School of Medicine