Tech

LANL news: High-Altitude Water Cherenkov Observatory tests speed of light

image: 
This compound graphic shows a view of the sky in ultra-high energy gamma rays. The arrows indicate the four sources of gamma rays with energies over 100 TeV from within our galaxy (courtesy of the HAWC collaboration), superimposed on a photo of the HAWC Observatory's 300 large water tanks. The tanks contain sensitive light detectors that measure showers of particles produced by the gamma rays striking the atmosphere more than 10 miles overhead (courtesy of Jordan Goodman).

Image: 
Courtesy of Jordan Goodman

LOS ALAMOS, N.M., March 30, 2020 -- New measurements confirm, to the highest energies yet explored, that the laws of physics hold no matter where you are or how fast you're moving. Observations of record-breaking gamma rays prove the robustness of Lorentz Invariance -- a piece of Einstein's theory of relativity that predicts the speed of light is constant everywhere in the universe. The High-Altitude Water Cherenkov (HAWC) Observatory in Puebla, Mexico, detected the gamma rays coming from distant galactic sources.

"How relativity behaves at very high energies has real consequences for the world around us," said Pat Harding, an astrophysicist in the Neutron Science and Technology group at Los Alamos National Laboratory and a member of the HAWC scientific collaboration. "Most quantum gravity models say the behavior of relativity will break down at very high energies. Our observation of such high-energy photons at all raises the energy scale where relativity holds by more than a factor of a hundred."

Lorentz Invariance is a key part of the Standard Model of physics. However, a number of theories about physics beyond the Standard Model suggest that Lorentz Invariance may not hold at the highest energies. If Lorentz Invariance is violated, a number of exotic phenomena become possibilities. For example, gamma rays might travel faster or slower than the conventional speed of light. If faster, those high-energy photons would decay into lower-energy particles and thus never reach Earth.
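For readers who want the quantitative version of that argument, the reasoning is usually framed with a modified photon dispersion relation. The sketch below uses the standard textbook form at leading order n, not necessarily the exact conventions of the HAWC analysis.

```latex
% Superluminal Lorentz-invariance violation at leading order n (illustrative form):
E_\gamma^2 \simeq p_\gamma^2 c^2 \left[ 1 + \left( \frac{E_\gamma}{E^{(n)}_{\rm LIV}} \right)^{\!n} \right]

% Photon decay \gamma \to e^+ e^- becomes kinematically allowed once the extra
% term exceeds (2 m_e c^2)^2, so observing a photon of energy E_\gamma that has
% NOT decayed on its way to Earth implies
E^{(n)}_{\rm LIV} \gtrsim E_\gamma \left( \frac{E_\gamma^2}{4\, m_e^2 c^4} \right)^{1/n}
```

Plugging in E_gamma of roughly 100 TeV with n = 1 gives a lower bound of about 10^30 eV, on the order of 100 times the Planck energy; the collaboration's actual limits use its highest-energy photons and a full statistical analysis.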

The HAWC Gamma Ray Observatory has recently detected a number of astrophysical sources which produce photons above 100 TeV (tens of trillions of times the energy of visible light), much higher energy than is available from any earthly accelerator. Because HAWC sees these gamma rays, it extends the range over which Lorentz Invariance is known to hold by a factor of 100.

"Detections of even higher-energy gamma rays from astronomical distances will allow more stringent the checks on relativity. As HAWC continues to take more data in the coming years and incorporate Los Alamos-led improvements to the detector and analysis techniques at the highest energies, we will be able to study this physics even further," said Harding.

Credit: 
DOE/Los Alamos National Laboratory

Experts call for health and climate change warning labels on petrol pumps

Warning labels should be displayed on petrol pumps, energy bills, and airline tickets to encourage consumers to question their own use of fossil fuels, say health experts in The BMJ today.

Like health warnings on cigarette packets, the labels should state clearly that continuing to burn fossil fuels worsens the climate emergency, with major projected health impacts increasing over time.

Dr Mike Gill, a former regional director of public health, and colleagues argue that telling consumers at the point of use about the climate and health risks that come with burning fossil fuels could change attitudes and behaviour.

Their call is part of a special collection of articles on planetary health, published by The BMJ today to raise awareness of the threats to humanity and natural systems and to identify opportunities for action.

Gill and colleagues point out that, like smoking, fossil fuel use harms others through ambient air pollution that accounts for about 3.5 million premature deaths per year, as well as through climate change, which increasingly threatens the health of current and future generations.

And while fossil fuel use is already subject to government intervention in many countries, for example through fuel taxes and vehicle emissions standards, "these are insufficient to prevent dangerous climate change and do not reflect the full economic costs of burning fossil fuels," they write.

"Warning labels connect the abstract threat of the climate emergency with the use of fossil fuels in the here and now."

They acknowledge that implementing warnings will face challenges, but say the initial focus should be on high income nations that have contributed disproportionately to greenhouse gas emissions and on major sources of greenhouse gas emissions in emerging economies where they are rising rapidly.

Additional policies could also include increasing restrictions on advertising by fossil fuel companies, particularly to prevent misleading claims about investments in renewable energy when these represent a minority of their portfolio.

They call on governments to take urgent, decisive steps to raise awareness of personal choices that reduce greenhouse gas emissions as well as implementing national policies to decarbonise the economy.

"There is an opportunity for national and local governments to implement labelling of fossil fuels in the run-up to COP26 in Glasgow and in particular for the UK Government, as the host of the COP, to show leadership, as part of a package of measures to accelerate progress on getting to 'Net Zero' emissions," they write.

"When the covid-19 pandemic eventually wanes, labelling could play an important role in helping to reduce the risk of a rapid rebound in greenhouse gas emissions as the economy expands," they conclude.

Credit: 
BMJ Group

10-year data show cardiac stenting equal to CABG in preventing events

In a study with the longest follow-up to date of patients with a high-risk form of heart disease known as left main coronary artery disease (LMCAD), researchers found no significant differences in rates of death, heart attack or stroke between patients who were treated with a stent and those who underwent heart bypass surgery. The research, presented at the American College of Cardiology's Annual Scientific Session Together with World Congress of Cardiology (ACC.20/WCC), also showed that more patients receiving a stent had to have the procedure repeated during the study period.

LMCAD is a condition in which the main coronary artery that supplies blood to the chambers on the left side of the heart becomes blocked by a buildup of fatty deposits, or plaque. The left-side heart chambers push blood out to the body and are larger than the right-side chambers, which receive blood and send it to the lungs to be resupplied with oxygen. People with blockages in the arteries serving the left side of the heart are at higher risk for a heart attack or stroke than people with other forms of coronary artery disease.

Stenting, also known as coronary angioplasty or percutaneous coronary intervention, involves threading a flexible tube (catheter) through an artery. A tiny balloon at the tip of the catheter is inflated to unblock the artery, and a stent -- a tiny mesh tube that, in this study, was coated with medication -- is inserted to prop the artery open. The procedure is done under local anesthesia and has a shorter recovery time than bypass surgery (also known as coronary artery bypass graft or CABG). Surgery, however, is considered a more durable procedure and has been regarded as the standard of care for patients with LMCAD, according to researchers.

"Our findings support the long-term safety and efficacy of stenting compared with bypass surgery after more than 10 years of follow-up in a patient population with a poor outlook," said Duk-Woo Park, MD, of Asan Medical Center, Seoul, South Korea, and first author of the study.

In the trial, known as PRECOMBAT, 600 patients with LMCAD were randomly assigned either to receive a drug-releasing stent or to undergo CABG at 13 medical centers in South Korea between 2004 and 2009. The average age of patients at treatment was 62.3 years, 76.5% were men and 32% were being treated for diabetes.

The primary study endpoint was the combined occurrence of death from any cause, heart attack, stroke or the need for a second procedure to unblock the same artery. The trial was designed to determine whether outcomes with stenting were not worse than with CABG. Follow-up results at one year and five years, published in 2011 and 2015, found no significant differences between the two groups for the primary endpoint or for any of its components except that patients treated with a stent were more likely to need a second procedure to unblock the same artery.

In the current study, the median length of follow-up was 11.3 years. Data showed that 87 patients treated with a stent (29.8%) and 72 treated with bypass surgery (24.7%) had died, had a heart attack or stroke, or needed a second procedure to unblock the same artery, a difference that was not statistically significant. However, when the researchers looked only at the proportion of patients who required a second procedure, the difference between the groups was statistically significant, with 16.1% of patients receiving a stent needing a second procedure compared with 8% of those who had bypass surgery.

Of note, patients enrolled in the PRECOMBAT trial were treated with "first-generation" drug-coated stents. The "second-generation" stents being used today are both safer and more effective than the ones that were available 16 years ago when the PRECOMBAT study began, Park said, adding that additional studies are needed to assess the outcomes after 10 or more years of follow-up for patients treated with these second-generation stents.

"Our extended follow-up provides important insights on long-term outcomes, which may aid in decision-making about the optimal treatment strategy for patients with LMCAD," he said, noting that two other recently published studies comparing stenting with bypass surgery in patients with LMCAD had reached conflicting conclusions. The EXCEL trial, published in the New England Journal of Medicine in November 2019, found no significant difference in outcomes after five years of follow-up between patients who received stents and those who had bypass surgery. By contrast, the NOBLE trial, published in The Lancet in December 2019, found that bypass surgery was superior to stenting after five years of follow-up.

Credit: 
American College of Cardiology

A new tool for controlling reactions in microrobots and microreactors

image: Thomas Russell and Ganhua Xie at UMass Amherst and Lawrence Berkeley National Lab use capillary forces to develop a simple method for producing self-assembling hanging droplets of an aqueous polymer solution from the surface of a second aqueous polymer solution in well-ordered arrays. The technique relies on natural properties, in particular surface tension.

Image: 
UMass Amherst

AMHERST, Mass. - In a new paper, Thomas Russell and postdoctoral fellow Ganhua Xie, at the University of Massachusetts Amherst and Lawrence Berkeley National Laboratory, report that they have used capillary forces to develop a simple method for producing self-assembling hanging droplets of an aqueous polymer solution from the surface of a second aqueous polymer solution in well-ordered arrays.

"These hanging droplets have potential applications in functional microreactors, micromotors and biomimetic microrobots," they explain. Microreactors assist chemical reactions in extremely small - less than 1 millimeter - spaces and microprobes aid new drug engineering and manufacturing. Both allow researchers to closely control reaction speed, selective diffusion and processing, for example. Selective diffusion refers to how cell membranes decide which molecules to allow in or keep out.

Russell and colleagues say that functions in their new system can be directed with magnetic microparticles to accomplish this. They "control the locomotion of the droplets, and, due to the nature of the assemblies, can selectively transport chemicals from one droplet to another or be used as encapsulated reaction vessels, where reactions rely on the direct contact with air," Russell explains.

For this work, he and Xie collaborated with others from Hong Kong University, Beijing University of Chemical Technology and Tohoku University, Japan. Details are in Proceedings of the National Academy of Sciences.

Their technique relies on natural properties, Russell explains, in particular surface tension, the phenomenon that allows water-walking creatures and human-made robots that mimic them to avoid sinking. The researchers use it to bind heavier droplets, which would otherwise sink, to interfaces. This helps to build two-dimensional ensembles of structurally complex droplets that have sacs in which target reactions can be isolated.

They did this, Russell says, by hanging a coacervate-encased droplet of a denser aqueous dextran solution from the surface of a different, polyethylene glycol (PEG) aqueous solution. In their earlier work, Xie, Russell and colleagues used these same two polymer aqueous solutions, PEG-plus-water and dextran-plus-water, which can be combined but do not mix. This creates a "classic example of coacervation" forming two separate domains like the non-mixing wax-and-water in a lava lamp, Russell explains.

He says that up to now, synthetic systems in labs have been limited to far fewer reactions than natural systems in the body, which can carry out many rapid and serial reactions. More closely mimicking nature has been a major goal for years, he adds.

The new work represents a major advance, Russell says, because "we use a delicate balance between a surface energy and gravity to hang the sacs from the surface of the liquid, like some insect larvae, and the hanging sacs have direct contact with air through the opening in the top. Direct contact to air allows the user to introduce gases, like oxygen, for a reaction."

To imagine the new mechanism, he explains, it helps to know that polycations are materials with more than one positive charge and polyanions have more than one negative. "Think of the sac: the inside is a polyanion and the outside is a polycation. This means that anions can flow out but not cations and cations can flow in but not anions. This selective diffusion allows us to do reactions inside the sac that feed a second reaction on the outside of the sac and vice versa. So, we can produce cascading reaction schemes, similar to those found inside your body or other biological systems."

Credit: 
University of Massachusetts Amherst

Biological 'atlas' shows dual personality for immune cells that cause Type 1 diabetes

image: Senior author Ben Youngblood, PhD, and co-author Caitlin Zebley, MD, both of the St. Jude Department of Immunology, discovered how T cells have a dual biological personality.

Image: 
St. Jude Children's Research Hospital

Immunologists at St. Jude Children's Research Hospital have created a database that identifies gene-regulatory mechanisms in immune cells that facilitate Type 1 diabetes. The findings were published today in Nature Immunology.

Type 1 diabetes is an autoimmune disease in which the immune system attacks the body's own cells. In Type 1 diabetes, immune cells called CD8 T cells kill insulin-producing islet cells in the pancreas. By creating an epigenetic "atlas," the researchers revealed that these T cells have a dual biological personality. That dual personality enables the T cells to retain the ability to attack insulin-producing cells across successive generations of T cells.

"A major question has been why these T cells remain functional over long periods of time," said senior author Ben Youngblood, Ph.D., of the St. Jude Department of Immunology. "Our research provides important insights into the stability of that response by establishing the central role of epigenetic programming in human T cell differentiation."

Regulation through epigenetics

The activity of cells is governed by genetic and epigenetic regulation, control switches that give instructions to a cell. The epigenetic regulation mechanisms include a process called methylation, in which methyl groups can be attached to DNA at key points to suppress genetic activity.

Youngblood and his team charted the pattern of methylation across the genome of CD8 T cells to understand the epigenetic programming that governs their development, or "differentiation," from immature cells called stem-memory T cells. The researchers collected data on methylation patterns of a variety of T cells, ranging from naïve -- not yet possessing the ability to attack -- to active effector cells.

From the atlas, investigators discovered that the diabetes-causing T cells possessed a dual personality of both naïve and effector-associated epigenetic programs, revealing for the first time that the cells were epigenetic hybrids, possessing both programs.

The researchers also performed the same analysis on mouse CD8 T cells, revealing they also showed such a dual personality.

Understanding a dual personality

A key to understanding both the human and mouse atlas was a multipotency index developed by co-authors Yiping Fan, Ph.D., of the St. Jude Center for Applied Bioinformatics, and Caitlin Zebley, M.D., a clinical fellow in the Department of Immunology. Through cutting-edge, machine-learning approaches, Fan and Zebley interrogated these data to understand the differentiation status of the autoreactive T cells. Using this novel index, they were able to show that methylation sites across the T cells' genome can be used to predict a T cell's differentiation status.
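The release does not spell out how the index is constructed, so the following is only a toy sketch with synthetic data: a generic logistic-regression classifier, standing in for the actual machine-learning pipeline, is trained to separate naïve-like from effector methylation profiles, and its probability output is read as a differentiation score.

```python
# Toy sketch of a methylation-based "multipotency" score.
# Hypothetical data and features -- not the St. Jude pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_sites = 200  # CpG sites used as features

# Synthetic methylation fractions (0-1): naive-like cells hypermethylated
# at "effector" loci, effector cells demethylated there.
naive = np.clip(rng.normal(0.8, 0.1, size=(50, n_sites)), 0, 1)
effector = np.clip(rng.normal(0.2, 0.1, size=(50, n_sites)), 0, 1)

X = np.vstack([naive, effector])
y = np.array([1] * 50 + [0] * 50)  # 1 = multipotent/naive-like, 0 = effector

model = LogisticRegression(max_iter=1000).fit(X, y)

# Cells with mixed ("hybrid") programs score in between the two extremes.
hybrid = np.clip(rng.normal(0.5, 0.1, size=(5, n_sites)), 0, 1)
print("multipotency index of hybrid-like cells:",
      model.predict_proba(hybrid)[:, 1].round(2))
```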

The autoreactive CD8 T cells scored high on the index, revealing their preservation of the less-differentiated hybrid state. The atlas and the multipotency index offer important new tools for developing treatments and diagnosis of Type 1 diabetes.

"We now have an epigenetic signature for these cells that we can use to explore treatments for Type 1 diabetes that induce immunological tolerance of these T cells to prevent their attack on islet cells," Youngblood said.

The index could be used as the basis for a diagnostic tool to predict which patients would respond to therapies that encourage that tolerance. To advance this work, Youngblood and his colleagues are collaborating with the Immune Tolerance Network to examine data from past clinical trials to see whether the index could predict which patients would respond to such therapies and which would not. The ITN, funded by the National Institute of Allergy and Infectious Diseases, is a collaboration of researchers aimed at developing immune tolerance therapies.

The insights from the epigenetic atlas can also be applied to cancer immunotherapies, in which T cells are engineered to recognize and selectively attack tumor cells. Using the multipotency index, researchers could measure how effective such engineered T cells would be in attacking cancer cells. The atlas can also be used to understand the nature of T cell activity in chronic viral infections.

Credit: 
St. Jude Children's Research Hospital

Water pressure: Ancient aquatic crocs evolved, enlarged to avoid freezing

image: Nebraska's Will Gearty holds the skull of a porpoise, whose torpedo-like body resembles those of ancient crocodiles that spent their entire lives in the water. Gearty's research suggests that fully aquatic species got larger not because water released them from the constraints of land -- as longstanding theories have proposed -- but instead to insulate themselves against water's lower temperatures and its capacity to steal body heat.

Image: 
Steve Castillo / Evolution: International Journal of Organic Evolution / Scott Schrage

Taking the evolutionary plunge into water and abandoning land for good, as some crocodilian ancestors did nearly 200 million years ago, is often framed as choosing freedom: from gravity, from territorial boundaries, from dietary constraints.

Water might inflict more pressure in the pounds-per-square-inch sense, the thinking went, but it also probably relieved some -- especially the sort that kept crocs from going up a size or 10. If they wanted to enjoy the considerable spoils of considerable size, water seemed the easy way.

A recent study from the University of Nebraska-Lincoln's Will Gearty, who compiled a database of 264 species stretching back to the Triassic Period, says that freedom was actually compulsion in disguise.

After analyzing the database of crocodyliforms -- a lineage of crocodile-like species that share a common ancestor -- Gearty found that the average weights of aquatic crocodyliforms easily surpassed those of their semi-aquatic and landlocked counterparts, sometimes by a factor of 100.

But the study suggests that this disparity represented a response to, not a release from, the pressures of natural selection. Rather than expanding the range of crocodyliform body sizes, as some longstanding theories would predict, taking to the water instead seemed to compress that range by raising the minimum size threshold needed to survive its depths. The maximum size of those aquatic species, by contrast, barely budged over time.

And when Gearty derived a set of equations to estimate the largest feasible body sizes under aquatic, semi-aquatic and terrestrial conditions?

"All three habitats had roughly the same upper limit (on size)," he said. "So even though it seems like you're released from this pressure, you're actually squeezed into an even smaller box than before."

Two major factors -- lung capacity and body heat -- seem to have helped initiate the squeeze play. Prior research had proposed that aquatic crocodyliforms got big in part because they needed to dive deeply for food, including the choice prey that would sustain a larger size. Upon digging into the literature, though, Gearty learned that lung volume increases more or less in lockstep with body size.

"So you actually don't have much excess lung volume to spare," said Gearty, a postdoctoral researcher in biological sciences. "When you get bigger, (lung capacity) is just basically scaling up with your body size to accommodate that extra size. The amount of time you could stay underwater increases a little bit, but not that much."

At larger sizes, the evolutionary tradeoff between the benefits of longer, deeper dives and the energy demands of finding more food probably also reached a stalemate, he said, that helped cement the aquatic ceiling on size.

As for the higher floor? That's where the thermal conductivity of water cranked up the evolutionary heat, Gearty said. Unfortunately for the aquatic crocs, water steals heat far faster than air does. The issue was likely compounded by the fact that temperatures in the waters they occupied were lower than the air temperatures enjoyed by their land-dwelling counterparts.

That would have left smaller aquatic crocodyliforms with only bad options: limit the duration and depth of their dives so that they could regularly return to the surface and warm themselves in the sun, or risk freezing to death during deeper hunts for food. Whether by starvation or hypothermia, either would eventually spell doom.

"The easiest way to counteract that is to get bigger," Gearty said.

Getting bigger was especially appealing because the volume of body tissue, which generates heat, increases faster than the surface area of the skin that surrenders it. But the unforgiving consequences of heat loss still limited the pool of ancestors from which aquatic crocodyliforms could evolve.

"They actually needed to start at a larger size," Gearty said. "So it's not like a marine crocodile could have just evolved from anywhere. It had to be evolving from some non-marine crocodile that was already a little larger than normal."

The fossil records of the crocodyliforms allowed Gearty and Jonathan Payne, his former doctoral adviser at Stanford University, to pinpoint the minimum weight threshold for aquatic survival: 10 kilograms, or about 22 pounds. And when they plotted the relationships of heat loss and lung capacity to body mass, they discovered that the two slopes crossed at almost exactly the same value: 10 kilograms.
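The surface-area-versus-volume reasoning behind that threshold can be written as a simple scaling sketch; the exponents and the crossing formula below are illustrative, while the published analysis fits empirical relationships for heat loss and lung capacity to the data.

```latex
% Heat lost to the water scales roughly with surface area, while heat
% production (and the oxygen carried in the lungs) scales roughly with mass M:
P_{\mathrm{loss}} \approx a\,M^{2/3}, \qquad P_{\mathrm{gen}} \approx b\,M

% A fully aquatic animal needs P_{\mathrm{gen}} \gtrsim P_{\mathrm{loss}},
% which sets a minimum viable mass where the two curves cross:
M_{\min} \approx \left(\frac{a}{b}\right)^{3}
```

In Gearty and Payne's analysis, the empirically fitted curves for heat loss and lung capacity cross at almost exactly 10 kilograms, the same floor seen in the fossil record.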

"We were able to explain, with these physiological equations, exactly why there were no marine crocodiles below a certain size," Gearty said. "This indicates that these very fundamental physiological constraints and controls ... actually may be some of the strongest forces for pushing animals to different body sizes through time. Not whether there's an asteroid hitting the world, not whether you're being (hunted) by some other animal -- that just these physical and chemical properties of the world we live in are what drive this. And if you want to enter a new habitat, you need to conform to that new set of properties."

The findings mostly reinforce a 2018 Gearty-led study that found similar trends among nearly 7,000 living and extinct mammal species. An elementary difference between mammals and reptiles, though, initially left the verdict in doubt.

"The whole (premise) of the marine mammal project was that these things are warm-blooded, and they have to keep their temperature up," Gearty said. "They have to really worry about this heat loss. So the idea was, 'Well, would the same constraint occur in cold-blooded organisms that are also living in the ocean?'

"There have been a couple papers suggesting that some of these marine crocodiles may have been somewhat warm-blooded, and so they may have been able to kind of reheat their own bodies. But even if that's the case, they were still going to be losing heat like these marine mammals would. They were still constrained by these thermoregulatory controls."

GREYHOUNDS AND DOLPHINS

With the help of an undergraduate student at Stanford and funding from the National Science Foundation, Gearty spent most of the summer of 2017 tracking down the fossil records that informed the new study.

"But that was to find the stuff that's readily available online," he said. "Then you've got, you know, undocumented books that you need to find, and they have to get shipped from Europe or somewhere. So there were a lot of these one-offs. I was still measuring specimen photos and getting records up until I submitted the paper in the middle of last year."

Gearty said he was mostly spared the time and expense of traveling to museums and physically measuring fossil dimensions, as some of his colleagues have in the name of their own research. But the haphazardness of some older classifications and documentation still had him following false leads and trying to make sense of the nonsensible.

"A lot of the crocodiles that people have described in papers have never actually been documented the way they're supposed to be," he said. "Someone might say, 'Here's the Nebraska crocodile.' It's just a colloquial name. And you'll be like, 'I guess I've got to go find the Nebraska crocodile.' You look this up, and you see that there's this crocodile from Nebraska, and this one, and this one. You don't know which one is the 'Nebraska crocodile.'

"You need to follow this trail of breadcrumbs, sometimes, to find these papers that may or may not have ever been published on these crocodiles that may or may not have ever been found. Fortunately, I was able to get most of the specimens just from the literature. But it did take a lot of digging to find the last probably 10% of the crocodiles."

Many of the terrestrial fossils, in particular, trace body shapes that barely resemble the low-slung profile of the modern crocodile.

"The example I like to give is: Imagine a greyhound, and then put a crocodile skull on it," Gearty said. "There were things like that running around on land probably 200 million years ago."

Though their maximum size remained almost constant, marine species did evolve two to three times faster than the semi-aquatic and terrestrial groups, Gearty found. Along with increasing the size of smaller aquatic species, natural selection molded body forms to surmount the challenges presented by water. Scales, plates and other drag-increasing skin deposits disappeared. Heads and tails flattened. Snouts narrowed.

"All of these were probably more dolphin-like than modern crocodiles, with even longer, thinner tails," he said. "And some of them had very paddle-like feet, almost like flippers."

Despite the fact that virtually all modern crocodile species are semi-aquatic, Gearty said those adaptations served the aquatic crocodyliforms well -- more than 100 million years before mammals ventured into the deep.

"No one has talked about it much, but really, these things were quite successful," he said. "And some of them weren't even fazed by some of the big, (cataclysmic) events. When the asteroid hit that killed all the dinosaurs, one of the marine groups just kind of kept going like nothing happened. A lot of the terrestrial species went extinct, but this group just kept ticking along for a long time."

Credit: 
University of Nebraska-Lincoln

Changing forests

As the climate is changing, so too are the world's forests. From the misty redwoods in the west to the Blue Ridge forest of Appalachia, many sylvan ecosystems are adapting to drier conditions.

Using the U.S. Forest Service Forest Inventory and Analysis database, researchers at UC Santa Barbara, the University of Utah and the U.S. Forest Service have studied how the traits of tree communities are shifting across the contiguous United States. The results, published in the Proceedings of the National Academy of Sciences, indicate that communities, particularly in more arid regions, are becoming more drought tolerant, primarily through the death of less hardy trees.

To understand what might be driving changes in the ability of forests to cope with climate change, the scientists considered two main physiological traits: a species' average tolerance to water stress and how close this was to its maximum tolerance (essentially how much wiggle room it had when dealing with water stress).

"We basically put a number on what species composition means in terms of their ability to deal with water stress," said lead author Anna Trugman, an assistant professor in UC Santa Barbara's Department of Geography.

Fortunately for the team, the U.S. Department of Agriculture tracks tree species, size and abundance in more than 160,000 forest plots randomly distributed across the country. What's more, the U.S. Forest Service Forest Inventory and Analysis database includes over 200 different types of ecosystems ranging from dry pinyon pine forests to cypress swamps, and Atlantic hardwood forests to the temperate rainforests of the Pacific Northwest.

Trugman and her colleagues matched the traits they were interested in to the species abundance in these plots. Then they used this to calculate a weighted average value for the community of trees in each plot, which essentially corresponded to the community's drought tolerance. Since these plots are surveyed every five to 10 years, the scientists could track shifts in community trait composition and relate these to tree mortality, recruitment and climate.
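As a minimal sketch of that bookkeeping (hypothetical species, trait values and abundances, not the study's data), the community-weighted mean drought tolerance of a plot, and its change between two surveys, can be computed like this:

```python
# Community-weighted mean (CWM) of a drought-tolerance trait for one plot.
# Illustrative numbers only; the study uses measured traits and USFS inventory data.
import numpy as np

species = ["pinyon pine", "juniper", "ponderosa"]
trait = np.array([-4.5, -5.0, -3.0])           # water-stress tolerance (MPa); more negative = hardier
abundance_2005 = np.array([10.0, 5.0, 20.0])   # basal area or stem count per plot
abundance_2015 = np.array([9.0, 6.0, 12.0])    # the least hardy species died back

def cwm(trait_values, abundance):
    """Abundance-weighted mean trait value for a plot."""
    return np.sum(trait_values * abundance) / np.sum(abundance)

shift = cwm(trait, abundance_2015) - cwm(trait, abundance_2005)
print(f"trait shift over 10 years: {shift:+.2f} MPa "
      f"({shift / 10:+.3f} MPa per year, a simple 'trait velocity')")
```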

There are two ways a community can become more drought tolerant: Less hardy trees can die or more resilient trees can grow faster. Both result in a community that is hardier overall.

Trugman found that it was primarily the death of less robust trees that drove the shifts toward greater drought tolerance, though she notes that the effects of sapling recruitment have been less evident over such a short time span. She also noticed that the scope of traits in a given plot didn't automatically correlate with the number of species present. "You don't necessarily have a larger range in strategies if you have more species," she said.

For instance, the eastern U.S. doesn't experience as much routine drought stress as its western counterpart, but it has relatively high species diversity. As a result, most of the trees have similar strategies to cope with water stress. Compare that to the Southwest, where there are species living together that have a range of strategies for dealing with drought, despite many plots having relatively low species diversity overall.

Maps of plant traits are useful to scientists because they inform the models that forecast how climate change will affect the landscape, Trugman explained. The trait maps help researchers assess the mismatch between climate suitability and the community's current trait composition.

Trugman's study refers to this change in a community's traits as the "trait velocity." The quicker the change, the faster the velocity. Similarly, scientists who study the change in an area's climatic conditions refer to the change over time as the "climate velocity." Intuitively, the two rates ought to be related, with communities and ecosystems changing to adapt to the changing climate in the region.

"But you could actually have a mismatch between these two," Trugman said. "Your trait velocity could be much slower than your climate velocity, in which case the trees in that particular location are not going to be very suitable for the new climate." In other words, the trees may be surviving for now, but they won't be growing or reproducing.

It's possible that scientists could see no shift in traits at all, Trugman noted. And while this might sound more hopeful, in reality it would indicate that something was preventing the communities from adapting -- perhaps a loss of species or genetic diversity, or simply the absence of more resilient species nearby that can provide seeds.

On the other hand, Trugman saw accelerated trait velocities in more arid regions. This study was a first pass; she plans to further investigate the relationship between trait and climate velocities in future research.

Overall, the results indicate that forests are shifting to communities that can cope with greater average water stress as well as more variability in water stress. This should buffer forests against some of the effects of climate change, at least in the short term, according to Trugman.

"Ultimately," she said, "we want to put trait velocities and climate velocities in some comparable context to understand how mismatches between the two will affect our forests."

Credit: 
University of California - Santa Barbara

Study finds room for improvement in TAVR outcomes across US

Thirty-four medical centers, representing 11% of facilities that perform transcatheter aortic valve replacement (TAVR) in the U.S., saw worse than expected outcomes in terms of survival and quality of life among patients undergoing the procedure in an analysis presented at the American College of Cardiology's Annual Scientific Session Together with World Congress of Cardiology (ACC.20/WCC).

The study assessed outcomes in 54,217 patients treated at 301 sites. Eighty percent of sites had an as-expected rate of in-hospital complications, while 8% had better than expected complication rates. Researchers reported a substantial difference in complication rates between sites with worse than expected performance and those with better than expected performance.

"There's clearly an opportunity to improve processes and try to better standardize care to decrease variation between different sites," said Nimesh Desai, MD, PhD, a cardiac surgeon and associate professor of surgery at the Hospital of the University of Pennsylvania, and the study's lead author. "The overarching goal of this work is to provide transparency to the public and also to provide feedback to sites so that they can review their practices and develop ways to improve the results in their patients."

TAVR is a procedure in which operators thread surgical equipment to the aorta through an artery in the chest or groin to replace a patient's malfunctioning valve with an artificial one. The procedure has become increasingly popular in recent years as a less invasive alternative to open heart valve replacement surgery.

Researchers analyzed data for 2015, 2016 and 2017 from the Society of Thoracic Surgeons/American College of Cardiology Transcatheter Valve Therapy (STS/ACC TVT) Registry, a Centers for Medicare and Medicaid Services-mandated registry that includes nearly all patients who undergo TAVR in the U.S. From there, they created a model to assess serious complications that patients would likely want to consider when making decisions about a TAVR procedure.

"We wanted to develop a way of assessing quality using endpoints that are very important to patients," Desai said. "Among registries for major cardiovascular procedures, this is the first metric to incorporate the patient's functional status and quality of life, both in the risk assessment of the patient and in the derivation of the outcome measures."

In addition to mortality, the researchers identified four key outcomes that can significantly impact patients' quality of life a year after a TAVR procedure: stroke, life-threatening or disabling bleeding, stage three acute kidney injury and moderate or severe paravalvular leak (movement of blood across the artificial valve). The outcomes were ranked according to their impact on a patient's daily functioning and quality of life.

Applying the metric to the registry data, researchers identified the average in-hospital complication rate across all sites and categorized sites whose outcomes were outside 95% confidence intervals of that average as performing better or worse than expected. Because it incorporates multiple outcomes on a ranked basis, the model was found to generate reliable assessments even when including sites performing a low volume of TAVR procedures per year.
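The registry model itself risk-adjusts patients and ranks multiple outcomes, so the snippet below is only a simplified sketch of the flagging logic -- compare each site's observed complication rate with the overall average and check whether it falls outside a 95% confidence interval -- using made-up site volumes.

```python
# Flag sites whose in-hospital complication rate differs from the overall
# average beyond a 95% confidence interval. Simplified illustration only:
# the actual TVT Registry model risk-adjusts and ranks multiple outcomes.
import numpy as np

rng = np.random.default_rng(1)
n_sites = 301
cases = rng.integers(40, 600, size=n_sites)      # TAVR volume per site (hypothetical)
overall_rate = 0.08                              # assumed average complication rate
events = rng.binomial(cases, overall_rate)       # simulated complication counts

site_rate = events / cases
# Normal-approximation 95% CI around each site's observed rate
se = np.sqrt(site_rate * (1 - site_rate) / cases)
lower, upper = site_rate - 1.96 * se, site_rate + 1.96 * se

worse = np.sum(lower > overall_rate)   # whole CI sits above the average rate
better = np.sum(upper < overall_rate)  # whole CI sits below the average rate
print(f"worse than expected: {worse} sites, better than expected: {better} sites")
```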

The researchers plan to further analyze the data to identify any features or factors that may be associated with worse than expected performance. In addition, Desai said the model can help establish a platform for public reporting that patients and hospitals could use to inform decision-making.

Credit: 
American College of Cardiology

Mystery solved: The origin of the colors in the first color photographs

image: Edmond Becquerel, Solar spectra, 1848, photochromatic images, Musée Nicéphore Niépce, Chalon-sur-Saône.

Image: 
Edmond Becquerel, Solar spectra, 1848, photochromatic images, Musée Nicéphore Niépce, Chalon-sur-Saône.

A palette of colours on a silver plate: that is what the world's first colour photograph looks like. It was taken by French physicist Edmond Becquerel in 1848. His process was empirical, never explained, and quickly abandoned. A team at the Centre de recherche sur la conservation (CNRS/Muséum National d'Histoire Naturelle/Ministère de la Culture) has now shone a light on this, in collaboration with the SOLEIL synchrotron and the Laboratoire de Physique des Solides (CNRS/Université Paris-Saclay). The colours obtained by Edmond Becquerel were due to the presence of metallic silver nanoparticles, according to their study published on 30 March 2020 in Angewandte Chemie International Edition.

In 1848, in the Muséum d'Histoire Naturelle in Paris, Edmond Becquerel managed to produce a colour photograph of the solar spectrum. These photographs, which he called "photochromatic images", are considered to be the world's first colour photographs. Few of these have survived, because they are light-sensitive and because very few were produced in the first place. It took the introduction of other processes for colour photography to become popular in society.

For more than 170 years, the nature of these colours has been debated in the scientific community, without resolution. Now we know the answer, thanks to a team at the Centre de recherche sur la conservation (CNRS/Muséum National d'Histoire Naturelle/Ministère de la Culture) in collaboration with the SOLEIL synchrotron and the Laboratoire de Physique des Solides (CNRS/Université Paris-Saclay). After having reproduced Edmond Becquerel's process to make samples of different colours, the team started by re-examining 19th century hypotheses in light of 21st century tools. If the colours were due to pigments formed during the reaction with light, we should have seen variations in chemical composition from one colour to another, which no spectroscopy method has shown. If they were the result of interference, like the shades of some butterflies, the coloured surface should have shown regular microstructures about the size of the wavelength of the colour in question. Yet no periodic structure was observed using electron microscopy.

However, when the coloured plates were examined, metallic silver nanoparticles were revealed in the matrix made of silver chloride grains -- and the distributions of sizes and locations of these nanoparticles vary according to colour. The scientists assume that according to the light's colour (and therefore its energy), the nanoparticles present in the sensitised plate reorganise: some fragment and others coalesce. The new configuration gives the material the ability to absorb all colours of light, with the exception of the colour that caused it, and therefore that is the colour that we see. The link between nanoparticle configuration and colour is a phenomenon known to physicists as surface plasmons -- electron vibrations (here, those of the metallic silver nanoparticles) that propagate in the material. A spectrometer in an electron microscope measured the energies of these vibrations to confirm this hypothesis.
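As general plasmonics background (not a result reported in the study), the dipolar surface-plasmon resonance of a metal particle much smaller than the wavelength of light peaks where the metal's permittivity satisfies the Fröhlich condition:

```latex
% Quasi-static (dipole) resonance condition for a small metal sphere
% embedded in a medium of permittivity \varepsilon_m:
\mathrm{Re}\,\varepsilon_{\mathrm{metal}}(\omega) \approx -2\,\varepsilon_m
```

Because that resonance shifts with particle size, shape and surroundings, different distributions of silver nanoparticles absorb different parts of the spectrum, consistent with the colour-dependent nanoparticle distributions the team observed.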

Credit: 
CNRS

Researchers find way to improve cancer outcomes by examining patients' genes

image: Aakrosh Ratan (from left), Pankaj Kumar, Anindya Dutta and Ajay Chatrath have identified a way to improve cancer outcomes by examining patients' genes.

Image: 
Dan Addison | UVA Communications

By mining a vast trove of genetic data, researchers at the University of Virginia School of Medicine are enhancing doctors' ability to treat cancer, predict patient outcomes and determine which treatments will work best for individual patients.

The researchers have identified inherited variations in our genes that affect how well a patient will do after diagnosis and during treatment. With that information in hand, doctors will be able to examine a patient's genetic makeup to provide truly personalized medicine.

"Oncologists can estimate how a patient will do based on the grade of the tumor, the stage, the age of the patient, the type of tumor, etc. We found [adding a single genetic predictor] can improve our predictive ability by 5% to 10%," said UVA's Anindya Dutta, MBBS, PhD. "Many of the cancers had multiple inherited genetic change that were predictive of outcome, so if we add those in, instead of a 10% increase we might get a 30% increase in our ability to predict accurately how patients will do with our current therapy. That's amazing."

Dutta, the chairman of UVA's Department of Biochemistry and Molecular Genetics, believes reviewing the inherited genetic makeup of a patient can provide similar benefits for predicting outcome and choosing therapy for many, many other conditions, from diabetes to cardiac problems. As such, the approach represents a major step forward in doctors' efforts to tailor treatments specifically to the individual's needs and genetic makeup.

Why Some Patients Fare Better Than Others

The research offers answers to questions that have long perplexed doctors. "Every clinician has this experience: Two patients come in with exactly the same cancer - same grade, same stage, received the same treatment. One of them does very well, and the other one doesn't," Dutta said. "The assumption has always been that there is something about the two that we didn't understand, like maybe there are some tumor-specific mutations that one patient had but the other did not. But it occurred to us that with all this genomic data, there is another hypothesis that we could test."

To determine if genetic differences in the patients could be the answer, Dutta and his colleagues did a deep dive into the Cancer Genome Atlas, an enormous repository of genetic information assembled by the National Institutes of Health's National Cancer Institute. The researchers sought to correlate inherited genetic variations with patient outcomes.

"This incredibly smart MD, PhD student in the lab, Mr. Ajay Chatrath, decided that this was a perfect time to explore this," Dutta recalled. "With the help of cloud computing services at UVA, we managed to download all this genomic sequencing data and identify what are known as germline variants -- not just tumor-specific mutations but the mutations that were inherited from the parents and are present in all cells of the patient."

The researchers started small but soon realized how quickly the work could be done and how big the benefits could be. "Once we realized this was a very easy thing to do, we went on to do all 33 cancers and all 10,000 patients, and that took another six months," Dutta said. "All of this came together beautifully. It was very exciting that every single member in the lab contributed to the analysis."

Dutta is eager to share his findings in hopes of finding collaborators and inspiring researchers and private industry to begin mining the data for other conditions. "This is very low-hanging fruit," he said. "Germline variants predicting outcome can be applicable to all types of diseases and not just cancer, and [they can predict] responsiveness to all types of therapy, and that's why I'm particularly excited."

Credit: 
University of Virginia Health System

How at risk are you of getting a virus on an airplane?

image: Frontera is the fifth most powerful supercomputer in the world and the fastest academic supercomputer, according to the November 2019 rankings of the Top500 organization. Frontera is located at the Texas Advanced Computing Center and supported by the National Science Foundation.

Image: 
TACC

Fair or not, airplanes have a reputation for germs. However, there are ways to minimize the risks.

Historic research based on the group movements of humans and animals suggests three simple rules (a minimal code sketch of these rules follows the list):

move away from those that are too close.

move toward those that are far away.

match the direction of movement of their neighbors.
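A minimal implementation of those three rules, in the spirit of generic flocking models rather than the SPED or CALM codes discussed below, could look like this (all parameters are invented for illustration):

```python
# Toy crowd/flocking update implementing the three movement rules.
# Generic illustration only -- not the SPED or CALM models from the paper.
import numpy as np

def step(positions, velocities, dt=0.1, too_close=0.5, far=2.0, speed=1.0):
    """Advance every pedestrian one time step using the three rules."""
    new_v = velocities.copy()
    for i, p in enumerate(positions):
        offsets = positions - p
        dist = np.linalg.norm(offsets, axis=1)
        dist[i] = np.inf                                     # ignore self
        steer = np.zeros(2)
        steer -= offsets[dist < too_close].sum(axis=0)       # 1) move away from those too close
        steer += offsets[dist > far].sum(axis=0) * 0.1       # 2) move toward those far away
        steer += velocities[dist < far].sum(axis=0) * 0.5    # 3) match neighbors' direction
        if np.linalg.norm(steer) > 0:
            new_v[i] = speed * steer / np.linalg.norm(steer)
    return positions + new_v * dt, new_v

rng = np.random.default_rng(0)
pos = rng.uniform(0, 5, size=(20, 2))      # 20 pedestrians in a 5 m x 5 m area
vel = rng.normal(0, 0.1, size=(20, 2))
for _ in range(100):
    pos, vel = step(pos, vel)
print("final mean position:", pos.mean(axis=0).round(2))
```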

This research is especially relevant to air travel, where there is an increased risk of contagious infection or disease -- as in the recent worldwide outbreak of the coronavirus that causes COVID-19.

"Airlines use several zones in boarding," said Ashok Srinivasan, a professor in the Department of Computer Science University of West Florida. "When boarding a plane, people are blocked and forced to stand near the person putting luggage in the bin -- people are very close to each other. This problem is exacerbated when many zones are used. Deplaning is much smoother and quicker --there isn't as much time to get infected."

Srinivasan is the principal investigator of new research on pedestrian dynamics models that has recently been used in the analysis of procedures to reduce the risk of disease spread in airplanes. The research was published in the journal PLOS ONE in March 2020.

For many years scientists have relied on the SPED (Self Propelled Entity Dynamics) model, a social force model that treats each individual as a point particle, analogous to an atom in molecular dynamics simulations. In such simulations, the attractive and repulsive forces between atoms govern the movement of atoms. The SPED model modifies the code and replaces atoms with humans.

"[The SPED model] changes the values of the parameters that govern interactions between atoms so that they reflect interactions between humans, while keeping the functional form the same," Srinivasan said.

Srinivasan and his colleagues used the SPED model to analyze the risk of an Ebola outbreak in 2015, which was widely covered in news outlets around the world. However, one limitation of the SPED model is that it is slow -- which makes it difficult to make timely decisions. Answers are needed fast in situations such as an outbreak like COVID-19.

The researchers decided there was a need for a model that could simulate the same applications as SPED, while being much faster. They proposed the CALM model (for constrained linear movement of individuals in a crowd). CALM produces similar results to SPED, but is not based on MD code. In other words, CALM was designed to run fast.

Like SPED, CALM was designed to simulate movement in narrow, linear passageways. The results of their research show that CALM performs almost 60 times faster than the SPED model. Apart from the performance gain, the researchers also modeled additional pedestrian behaviors.

"The CALM model overcame the limitations of SPED where real time decisions are required," Srinivasan said.

Computational Work Using Frontera

The scientists designed the CALM model from scratch so it could run efficiently on computers, especially on GPUs (graphics processing units).

For their research, Srinivasan and colleagues used Frontera, the fifth most powerful supercomputer in the world and the fastest academic supercomputer, according to the November 2019 rankings of the Top500 organization. Frontera is located at the Texas Advanced Computing Center and supported by the National Science Foundation.

"Once Blue Waters started being phased out, Frontera was the natural choice, given that it was the new NSF-funded flagship machine," Srinivasan said. "One question you have is whether you have generated a sufficient number of scenarios to cover the range of possibilities. We check this by generating histograms of quantities of interest and seeing if the histogram converges. Using Frontera, we were able to perform sufficiently large simulations that we now know what a precise answer looks like."

In practice, it isn't feasible to make precise predictions due to inherent uncertainties, especially at the early stages of an epidemic -- this is what makes the computational aspect of this research challenging.

"We needed to generate a large number of possible scenarios to cover the range of possibilities. This makes it computationally intensive," Srinivasan said.

The team validated their results by examining disembarkation times on three different types of airplanes. Since a single simulation doesn't capture the variety of human movement patterns, they performed simulations with 1,000 different combinations of values and compared them to the empirical data.

Using Frontera's GPU subsystem, the researchers were able to get the computation time down to 1.5 minutes. "Using the GPUs turned out to be a fortunate choice because we were able to deploy these simulations in the COVID-19 emergency. The GPUs on Frontera are a means of generating answers fast."

But Wait -- Models Don't Capture Extreme Events?

In terms of general preparation, Srinivasan wants people to understand that scientific models often don't capture extreme events accurately.

Though there have been thorough empirical studies on several flights to understand human behavior and cleanliness of the surfaces and air, a major infection outbreak is an extreme event -- data from typical situations may not capture it.

There are about 100,000 flights on an average day. A very low probability event could lead to frequent infection outbreaks just because the number of flights is so large. Although models have predicted infection transmission in planes as unlikely, there have been several known outbreaks.

Srinivasan offers an example.

"It's generally believed that infection spread in planes happens two rows in front and back of the index patient," he said. "During the SARS outbreak in 2002, on the few flights with infection spread, this was mostly true. However, a single outbreak accounted for more than half the cases, and half of the infected were seated farther than two rows away on that flight. One might be tempted to look at this outbreak as an outlier. But the 'outlier' had the most impact, and so people farther than two rows away accounted for a significant number of people infected with SARS on flights."

Currently, with regard to COVID-19, the typical infected person is believed to sicken 2.5 others. However, there have been communities where a single 'super-spreader' infected a large number of people and played the driving role in an outbreak. The impact of such extreme events, and the difficulty in modeling them accurately, makes prediction difficult, according to Srinivasan.

"In our approach, we don't aim to accurately predict the actual number of cases," Srinivasan said. "Rather, we try to identify vulnerabilities in different policy or procedural options, such as different boarding procedures on a plane. We generate a large number of possible scenarios that could occur and examine whether one option is consistently better than the other. If it is, then it can be considered more robust. In a decision-making setting, one may wish to choose the more robust option, rather than rely on expected values from predictions."

Some Practical Advice

Srinivasan has some practical advice for readers as well.

"You may be still be at risk [for a virus] even if you are farther away than six feet," he said. "In discussion with modelers who advocate it, it appears that those models don't take air flow into account. Just as a ball goes farther if you throw it with the wind, the droplets carrying the viruses will go farther in the direction of the air flow."

These are not just theoretical considerations. In Singapore, investigators observed that an exhaust air vent in a toilet used by a patient tested positive for the new coronavirus and attributed this to air flow.

"Models don't account for all factors impacting reality. When the stakes are high, one may wish to err on the side of caution," Srinivasan concludes.

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

Unearthing gut secret paves way for targeted treatments

image: This is Professor Nick Spencer.

Image: 
Flinders University

Scientists at Flinders University have, for the first time, identified a specific type of sensory nerve ending in the gut and how these may 'talk' to the spinal cord, communicating pain or discomfort to the brain.

This discovery is set to inform the development of new medications to treat problems associated with gut-to-brain communication, paving the way for targeted treatments to mitigate related dysfunction.

While our understanding of the gut's neurosensory abilities has grown rapidly in recent years, two of the great mysteries have been where the different types of sensory nerve endings in the gut lie and how they are activated.

An important step in answering these questions has been made possible through the development of new techniques by Professor Nick Spencer's Visceral Neurophysiology laboratory at Flinders University in South Australia.

"We know that many disorders of the brain and gut are associated with each other, so unravelling their connection is critical to developing targeted, efficient treatments for what can be debilitating conditions like irritable bowel syndrome, chronic constipation or ulcerative colitis," Professor Spencer says.

Professor Spencer's research revealed an extraordinarily complex array of nerve endings that are located over multiple layers of tissues in the lower colon.

"Our study identified the two classes of neurons involved and their location in a range layers in the colon including muscle and mucus membranes, which are potentially capable of detecting sensory stimuli."

His research forms one of many studies underway at Flinders University's five 'Neurogastroenterology' laboratories, which are contributing to the growing bank of global knowledge on the gut's interaction with the brain, including its impact on higher cognitive function.

Credit: 
Flinders University

Machine learning puts a new spin on spin models

image: The low and high temperature phases are found in the right proportions at different temperatures relative to the transition point for different sizes of lattice. (inset) The size of the lattice may be accounted for to give a single master curve.

Image: 
Tokyo Metropolitan University

Tokyo, Japan - Researchers from Tokyo Metropolitan University have used machine learning to study spin models, used in physics to study phase transitions. Previous work showed that image/handwriting classifying AI could be applied to distinguish states in the simplest models. The team showed the approach is applicable to more complex models and found that an AI trained on one model and applied to another could reveal key similarities between distinct phases in different systems.

Machine learning and artificial intelligence (AI) are revolutionizing how we live, work, play, and drive. The self-driving car, the algorithm that beat a Go grandmaster, and advances in finance are just the tip of the iceberg of a wide range of applications that are having a significant impact on society. AI is also making waves in scientific research. A key attraction of these algorithms is how they can be trained with pre-classified data (e.g. images of handwritten letters) and be applied to classify a much wider range of data.

In the field of condensed matter physics, recent work by Carrasquilla and Melko (Nature Physics (2017) 13, 431-434) has shown that neural networks, the same kind of AI used to interpret handwriting, can distinguish different phases of matter (e.g. gas, liquid and solid) in simple physical models. They studied the Ising model, the simplest model for the emergence of magnetism in materials: a lattice of atoms with spins (up or down) whose energy depends on the relative alignment of adjacent spins. Depending on the conditions, these spins can line up into a ferromagnetic phase (like iron) or assume random directions in a paramagnetic phase. Usually, studies of this kind of system involve analyzing some averaged quantity (e.g. the sum of all the spins). The fact that an entire microscopic configuration can be used to classify a phase presented a genuine paradigm shift.
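
The idea can be illustrated with a short, self-contained sketch (not the authors' code; it assumes Python with numpy and scikit-learn, and the lattice size, temperatures and sweep counts are arbitrary illustrative choices). It generates 2D Ising configurations by Metropolis Monte Carlo at a low and a high temperature and trains a small neural network to tell the ferromagnetic phase from the paramagnetic one:

```python
# Minimal sketch (not the authors' code): generate 2D Ising configurations with
# Metropolis Monte Carlo and train a small neural network to tell the
# ferromagnetic (low-T) phase from the paramagnetic (high-T) phase.
# Lattice size, temperatures and sweep counts are illustrative choices only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
L = 16  # linear lattice size

def metropolis_sweep(spins, T):
    """One Metropolis sweep over the L x L lattice at temperature T (J = 1)."""
    for _ in range(spins.size):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours (periodic boundaries).
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

def sample_configs(T, n_samples=50, n_equil=200, n_skip=10):
    """Return flattened spin configurations equilibrated at temperature T."""
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_equil):
        metropolis_sweep(spins, T)
    configs = []
    for _ in range(n_samples):
        for _ in range(n_skip):
            metropolis_sweep(spins, T)
        configs.append(spins.flatten().copy())
    return np.array(configs, dtype=float)

# Label configurations by phase: below/above the critical temperature Tc ~ 2.269.
X_low = sample_configs(T=1.5)    # ferromagnetic phase -> label 0
X_high = sample_configs(T=3.5)   # paramagnetic phase  -> label 1
X = np.vstack([X_low, X_high])
y = np.array([0] * len(X_low) + [1] * len(X_high))

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Configurations near Tc should come out with less certain predictions.
X_mid = sample_configs(T=2.3, n_samples=20)
print("mean P(paramagnetic) near Tc:", clf.predict_proba(X_mid)[:, 1].mean())
```

Scanning such predictions across temperature is what lets the transition point be read off: deep in either phase the classifier is confident, while near the transition its output crosses over.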

Now, a team led by Professors Hiroyuki Mori and Yutaka Okabe of Tokyo Metropolitan University is collaborating with the Bioinformatics Institute in Singapore to take this approach to the next level. In its existing form, the method of Carrasquilla and Melko cannot be applied to models more complex than the Ising model. Take the q-state Potts model, where each atom can take one of q states instead of just "up" or "down". Though it also has a phase transition, telling the phases apart is not trivial; in a 5-state model, for example, there are 120 ways of relabelling the states that are all physically equivalent. To help an AI tell the phases apart, the team gave it more microscopic information, specifically how the state of a particular atom relates to the state of another atom some distance away, i.e. how the spins correlate over separation. Having trained the AI on many of these correlation configurations for 3- and 5-state Potts models, they found that it was able to correctly classify phases and identify the temperature at which the transition took place. They were also able to correctly account for the finite-size effect, the dependence of the results on the number of points in the lattice.
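
As a rough illustration of what such a "correlation configuration" might look like (an assumption on our part, not the authors' exact construction), the sketch below turns a q-state Potts configuration into per-site indicators of whether each site matches the site a distance r away. Because the features only record whether two states match, they are unchanged by any of the 120 relabellings of a 5-state model:

```python
# Minimal sketch (an assumed form of the correlation input, not the authors'
# exact construction): build per-site match indicators between each site and
# the site r steps away along each axis of a q-state Potts configuration.
import numpy as np

def correlation_configuration(spins, max_r):
    """spins: (L, L) array of integer Potts states in {0, ..., q-1}.
    Returns an (L, L, 2*max_r) array of 0/1 match indicators, which is
    invariant under any global relabelling of the q states."""
    layers = []
    for r in range(1, max_r + 1):
        # Match indicator with the site r steps away in x and in y (periodic).
        layers.append((spins == np.roll(spins, r, axis=0)).astype(float))
        layers.append((spins == np.roll(spins, r, axis=1)).astype(float))
    return np.stack(layers, axis=-1)

# Example: a random 5-state Potts configuration on a 16 x 16 lattice.
rng = np.random.default_rng(1)
config = rng.integers(0, 5, size=(16, 16))
features = correlation_configuration(config, max_r=4)
print(features.shape)  # (16, 16, 8) -- maps like these would be fed to the classifier
```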

Having demonstrated that their method works, they tried the same approach on a q-state clock model, where spins adopt one of q orientations on a circle. When q is greater than or equal to 5, there are three phases which the system can take: an ordered low temperature phase, a high temperature phase, and a phase in between known as the Berezinskii-Kosterlitz-Thouless (BKT) phase, the investigation of which won John M. Kosterlitz, David J. Thouless and Duncan Haldane the 2016 Nobel Prize for Physics. They proceeded to successfully train an AI to tell the three phases apart with a 6-state clock model. When they applied it to configurations from a 4-state clock model, where there are only two phases expected, they discovered that the algorithm could classify the system as being in a BKT phase near the phase transition. This goes to show there is a deep connection between the BKT phase and the critical phase arising at the smooth 'second-order' phase transition point in the 4-state system.

The method presented by the team is generally applicable to a wide range of scientific problems. A key part of physics is universality, identifying traits in seemingly unrelated systems or phenomena which give rise to unified behavior. Machine learning is uniquely placed to tease these features out of the most complex models and systems, letting scientists take a peek at the deep connections that govern nature and our universe.

Credit: 
Tokyo Metropolitan University

Bubbles go with the flow

image: University of Tokyo researchers develop a new physical model that incorporates the density dependence of viscosity to understand how flowing viscous fluids interact with pipe walls, promising to improve the efficiency of industrial processes such as oil transportation.

Image: 
Institute of Industrial Science, The University of Tokyo

Tokyo, Japan - Researchers at the Institute of Industrial Science, The University of Tokyo, used a sophisticated physical model to simulate the behavior of fluids moving through pipes. By including the possibility of shear-induced bubble formation, they found that, contrary to the assumptions of many previous works, fluids can experience significant slippage when in contact with fixed boundaries. This research may help reduce energy losses when pumping fluids, a significant concern in many industrial applications such as gas and oil transport.

Fluid dynamics is one of the most challenging areas of physics. Even with powerful computers and the use of simplifying assumptions, accurate simulations of fluid flow can be notoriously difficult to obtain. Researchers often need to predict the behavior of fluids in real-world applications, such as oil flowing through a pipeline. To make the problem easier, it has been common practice to assume that at the interface between the fluid and the solid boundary--in this case, the pipe wall--the fluid flows without slipping. However, the evidence supporting this shortcut has been lacking. More recent research has shown that slippage can occur under certain circumstances, but the physical mechanism has remained mysterious.
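
To see why slippage matters in practice, a minimal sketch (illustrative only, not the model developed in this work) compares the classic Hagen-Poiseuille flow rate in a circular pipe under the no-slip condition with the rate obtained when the wall is given a finite Navier slip length b, which multiplies the flow rate by (1 + 4b/R). All parameter values below are arbitrary examples:

```python
# Minimal sketch (illustrative, not the authors' model): how a finite slip
# length at the pipe wall changes the Hagen-Poiseuille flow rate. A slip
# length b (e.g. an effective value set by a microbubble layer) multiplies
# the no-slip flow rate by (1 + 4*b/R), so even modest slip reduces the
# pressure needed to pump a given flow. Parameter values are arbitrary.
import numpy as np

def poiseuille_flow_rate(R, mu, dpdx, slip_length=0.0):
    """Volumetric flow rate in a circular pipe of radius R [m] for a fluid of
    viscosity mu [Pa s] under a pressure gradient dpdx [Pa/m], with a Navier
    slip length slip_length [m] at the wall (0 recovers the no-slip result)."""
    q_noslip = np.pi * dpdx * R**4 / (8.0 * mu)
    return q_noslip * (1.0 + 4.0 * slip_length / R)

R = 0.05        # pipe radius: 5 cm
mu = 1.0        # a viscous oil, 1 Pa s
dpdx = 1.0e3    # applied pressure gradient, Pa/m

q0 = poiseuille_flow_rate(R, mu, dpdx)
q1 = poiseuille_flow_rate(R, mu, dpdx, slip_length=1e-3)  # 1 mm effective slip
print(f"no-slip flow rate: {q0:.4e} m^3/s")
print(f"with slip:         {q1:.4e} m^3/s ({q1 / q0:.2f}x)")
```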

Now, to more rigorously understand the origin of flow slippage, researchers at The University of Tokyo created an advanced mathematical model that includes the possibility of dissolved gas turning into bubbles on the pipe's inner surface.

"The no-slip boundary condition of liquid flow is one of the most fundamental assumptions in fluid dynamics," explains first author Yuji Kurotani. "However, there is no rigorous physical foundation for this condition, which ignores the effects of gas bubbles."

To build this model, the researchers combined the Navier-Stokes equations, the basic laws that govern fluid flow, with Ginzburg-Landau theory, which describes phase transitions such as the change from a liquid to a gas. The simulations revealed that flow slippage can be caused by tiny microbubbles that form on the pipe wall. The bubbles, which are created by shear forces in the fluid, often escape detection in real life because they remain very small.

"We found that the density changes that accompany viscosity variation can destabilize the system toward bubble formation. Shear-induced gas-phase formation provides a natural physical explanation for flow slipping," says senior author Hajime Tanaka.

Says Kurotani, "The results of our project can help design new pipes that transport viscous fluids, like fuel and lubricants, with much smaller energy losses."

Credit: 
Institute of Industrial Science, The University of Tokyo

Multi-stage deformation process in high-entropy alloys at ultra-low temperatures revealed

image: Muhammad Naeem prepares the experiment at TAKUMI, an engineering materials diffractometer at the Japan Proton Accelerator Research Complex (J-PARC) used to perform in-situ neutron diffraction measurements on multiple HEA samples, all of which showed a multi-stage deformation process.

Image: 
© Professor Wang Xunli / City University of Hong Kong

An international research team led by scientists from City University of Hong Kong (CityU) has recently discovered that high-entropy alloys (HEAs) exhibit exceptional mechanical properties at ultra-low temperatures due to the coexistence of multiple deformation mechanisms. Their discovery may hold the key to designing new structural materials for low-temperature applications.

Professor Wang Xunli, a newly elected Fellow of the Neutron Scattering Society of America, Chair Professor and Head of the Department of Physics at CityU, joined hands with scientists from Japan and mainland China to conduct this challenging study of HEAs' deformation behaviours at ultra-low temperatures. Their findings were published in the scientific journal Science Advances under the title "Cooperative deformation in high-entropy alloys at ultralow temperatures".

Neutron scattering: a powerful measurement tool

HEAs are a new class of structural materials with a range of favorable mechanical properties, such as an excellent strength-ductility combination, high fracture toughness, and resistance to corrosion. They consist of multiple principal elements, which contributes to their complex deformation behaviours.

Usually, materials become brittle at low temperatures because their atoms are "frozen" and lose their mobility. But HEAs demonstrate high ductility and can be stretched to large deformations at low temperatures. "This phenomenon was first discovered in 2014, but the mechanism behind it is still unknown. It's intriguing," said Professor Wang, who has been studying the mechanism since then and is the corresponding author of the paper.

To solve this puzzle, the research team led by Professor Wang used the in-situ neutron diffraction technique to study the deformation process of HEAs. "Neutron diffraction measurement is one of the few means to observe what's going on during a material's deformation. We can see every step: which mechanism kicks in first and how each of them interacts with the others, which is not feasible with conventional experimental methods like transmission electron microscopy," explained Professor Wang, who is also the director of the CityU Center on Neutron Scattering.

"More importantly, it can conduct measurements at ultra-low temperatures, i.e. near absolute zero. And the measurements are representative of the bulk of the sample rather than from the surface or localized area, providing microscopic information like how different grains of the materials interacted with each other," he added.

Sequence of deformation mechanisms revealed

Using this technique, the team revealed the sequence of deformation mechanisms in HEAs at ultra-low temperatures for the first time. They found that at 15 Kelvin (K), the HEA deforms in four stages.

Deformation begins with dislocation slip, a common mechanism in face-centered-cubic materials in which planes of the crystal lattice slide over each other. As the dislocations continue to move, stacking faults gradually become active and dominant, changing the stacking sequence of the crystal lattice planes. This is followed by twinning, in which lattice planes reorient to form a mirror image of the parent crystal. Finally, the deformation transitions to serrations, where the HEA shows large oscillations in the deforming stress.

"It is interesting to see how these mechanisms become active and cooperate with each other when the material deforms," said Mr Muhammad Naeem, a graduating PhD student and Senior Research Assistant from CityU's Department of Physics who is the first author of the paper.

In their experiments, they found that the HEAs showed higher and more stable strain hardening (strain hardening means a material becomes stronger and harder as it deforms) and exceedingly large ductility as the temperature decreased. Based on a quantitative analysis of their in-situ experimental data, they concluded that the three additional deformation mechanisms observed - stacking faults, twinning, and serrations - as well as the interactions among these mechanisms, are the source of these extraordinary mechanical properties.
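
For readers unfamiliar with the term, strain hardening is usually quantified as the work-hardening rate, the slope dσ/dε of the true stress-strain curve; a higher, more stable slope sustained to large strain is what delays failure and supports large ductility. The short sketch below (using made-up numbers, not the team's data) shows the calculation:

```python
# Minimal sketch (synthetic data, not the team's measurements): compute the
# work-hardening rate d(sigma)/d(epsilon), the slope of the true stress-strain
# curve, as a simple numerical measure of strain hardening.
import numpy as np

# Hypothetical true-strain / true-stress (MPa) points, for illustration only.
strain = np.linspace(0.0, 0.5, 11)
stress = 400.0 + 1500.0 * strain - 600.0 * strain**2

hardening_rate = np.gradient(stress, strain)  # d(sigma)/d(epsilon), in MPa
for e, s, h in zip(strain, stress, hardening_rate):
    print(f"strain {e:4.2f}  stress {s:7.1f} MPa  hardening rate {h:7.1f} MPa")
```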

A new terrain: deformations at ultra-low temperatures

The whole study took the team almost three years, but there is still much to explore. "Complicated deformation mechanisms in HEAs at ultra-low temperatures are a new terrain that very few people have ventured into before. The findings of this study show only the tip of the iceberg," said Professor Wang.

For their next step, the team will further investigate when stacking faults appear in other alloys, and their deformation mechanisms at different temperatures. "Understanding deformation mechanisms will facilitate the design of new alloys. By deploying different mechanisms in synergy, we can tune them to achieve better mechanical properties for low-temperature applications," said Mr Naeem.

Credit: 
City University of Hong Kong