Tech

Neural nets used to rethink material design

image: Engineers at Rice University and Lawrence Livermore National Laboratory are using neural networks to accelerate the prediction of how microstructures of materials evolve. This example predicts snowflake-like dendritic crystal growth.

Image: 
Mesoscale Materials Science Group/Rice University

HOUSTON - (April 30, 2021) - The microscopic structures and properties of materials are intimately linked, and customizing them is a challenge. Rice University engineers are determined to simplify the process through machine learning.

To that end, the Rice lab of materials scientist Ming Tang, in collaboration with physicist Fei Zhou at Lawrence Livermore National Laboratory, introduced a technique to predict the evolution of microstructures -- structural features between 10 nanometers and 100 microns -- in materials.

Their open-access paper in the Cell Press journal Patterns shows how neural networks (computer models that mimic the brain's neurons) can train themselves to predict how a structure will grow under a certain environment, much like a snowflake forms from moisture in nature.

In fact, snowflake-like, dendritic crystal structures were one of the examples the lab used in its proof-of-concept study.

"In modern material science, it's widely accepted that the microstructure often plays a critical role in controlling a material's properties," Tang said. "You not only want to control how the atoms are arranged on lattices, but also what the microstructure looks like, to give you good performance and even new functionality.

"The holy grail of designing materials is to be able to predict how a microstructure will change under given conditions, whether we heat it up or apply stress or some other type of stimulation," he said.

Tang has worked to refine microstructure prediction for his entire career, but said the traditional equation-based approach faces significant challenges in allowing scientists to keep up with the demand for new materials.

"The tremendous progress in machine learning encouraged Fei at Lawrence Livermore and us to see if we could apply it to materials," he said.

Fortunately, there was plenty of data from the traditional method to help train the team's neural networks, which view the early evolution of microstructures to predict the next step, and the next one, and so on.

"This is what machinery is good at, seeing the correlation in a very complex way that the human mind is not able to," Tang said. "We take advantage of that."

The researchers tested their neural networks on four distinct types of microstructure: plane-wave propagation, grain growth, spinodal decomposition and dendritic crystal growth.

In each test, the networks were fed between 1,000 and 2,000 sets of 20 successive images illustrating a material's microstructure evolution as predicted by the equations. After learning the evolution rules from these data, the network was then given from 1 to 10 images to predict the next 50 to 200 frames, and usually did so in seconds.
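
The release does not describe the network architecture in detail; the sketch below (in PyTorch, with an illustrative convolutional model rather than the authors' actual code) shows the general idea of such an autoregressive rollout: a network trained to map a few consecutive frames to the next frame is applied repeatedly to its own output to extend the sequence.

```python
# Illustrative sketch only (not the authors' code): a small convolutional
# next-frame predictor rolled out autoregressively on microstructure images.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, context_frames: int = 4):
        super().__init__()
        self.context_frames = context_frames
        self.net = nn.Sequential(
            nn.Conv2d(context_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # next frame
        )

    def forward(self, frames):           # frames: (batch, context, H, W)
        return self.net(frames)          # -> (batch, 1, H, W)

def rollout(model, seed_frames, n_steps):
    """Repeatedly feed the model its own predictions to extend the sequence."""
    frames = list(seed_frames.unbind(dim=1))              # list of (batch, H, W)
    for _ in range(n_steps):
        context = torch.stack(frames[-model.context_frames:], dim=1)
        with torch.no_grad():
            frames.append(model(context).squeeze(1))
    return torch.stack(frames, dim=1)                      # (batch, T, H, W)

# Example: extend 4 seed frames of a 64x64 field by 50 predicted frames.
model = NextFramePredictor(context_frames=4)
seed = torch.rand(1, 4, 64, 64)
trajectory = rollout(model, seed, n_steps=50)
print(trajectory.shape)  # torch.Size([1, 54, 64, 64])
```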

The new technique's advantages quickly became clear: The neural networks, powered by graphics processors, sped the computations up to 718 times for grain growth, compared to the previous algorithm. When run on a standard central processor, they were still up to 87 times faster than the old method. The prediction of other types of microstructure evolution showed similar, though not as dramatic, speed increases.

Comparisons with images from the traditional simulation method proved the predictions were largely on the mark, Tang said. "Based on that, we see how we can update the parameters to make the prediction more and more accurate," he said. "Then we can use these predictions to help design materials we have not seen before.

"Another benefit is that it's able to make predictions even when we do not know everything about the material properties in a system," Tang said. "We couldn't do that with the equation-based method, which needs to know all the parameter values in the equations to perform simulations."

Tang said the computation efficiency of neural networks could accelerate the development of novel materials. He expects that will be helpful in his lab's ongoing design of more efficient batteries. "We're thinking about novel three-dimensional structures that will help charge and discharge batteries much faster than what we have now," Tang said. "This is an optimization problem that is perfect for our new approach."

Credit: 
Rice University

UVA engineering computer scientists discover new vulnerability affecting computers globally

CHARLOTTESVILLE, Va. - In 2018, industry and academic researchers revealed a potentially devastating hardware flaw that made computers and other devices worldwide vulnerable to attack.

Researchers named the vulnerability Spectre because the flaw was built into modern computer processors that get their speed from a technique called "speculative execution," in which the processor predicts instructions it might end up executing and preps by following the predicted path to pull the instructions from memory. A Spectre attack tricks the processor into executing instructions along the wrong path. Even though the processor recovers and correctly completes its task, hackers can access confidential data while the processor is heading the wrong way.

Since Spectre was discovered, the world's most talented computer scientists from industry and academia have worked on software patches and hardware defenses, confident they've been able to protect the most vulnerable points in the speculative execution process without slowing down computing speeds too much.

They will have to go back to the drawing board.

A team of University of Virginia School of Engineering computer science researchers has uncovered a line of attack that breaks all Spectre defenses, meaning that billions of computers and other devices across the globe are just as vulnerable today as they were when Spectre was first announced. The team reported its discovery to international chip makers in April and will present the new challenge at a worldwide computing architecture conference in June.

The researchers, led by Ashish Venkat, William Wulf Career Enhancement Assistant Professor of Computer Science at UVA Engineering, found a whole new way for hackers to exploit something called a "micro-op cache," which speeds up computing by storing simple commands and allowing the processor to fetch them quickly and early in the speculative execution process. Micro-op caches have been built into Intel computers manufactured since 2011.

Venkat's team discovered that hackers can steal data when a processor fetches commands from the micro-op cache.

"Think about a hypothetical airport security scenario where TSA lets you in without checking your boarding pass because (1) it is fast and efficient, and (2) you will be checked for your boarding pass at the gate anyway," Venkat said. "A computer processor does something similar. It predicts that the check will pass and could let instructions into the pipeline. Ultimately, if the prediction is incorrect, it will throw those instructions out of the pipeline, but this might be too late because those instructions could leave side-effects while waiting in the pipeline that an attacker could later exploit to infer secrets such as a password."

Because all current Spectre defenses protect the processor in a later stage of speculative execution, they are useless in the face of Venkat's team's new attacks. Two variants of the attacks the team discovered can steal speculatively accessed information from Intel and AMD processors.

"Intel's suggested defense against Spectre, which is called LFENCE, places sensitive code in a waiting area until the security checks are executed, and only then is the sensitive code allowed to execute," Venkat said. "But it turns out the walls of this waiting area have ears, which our attack exploits. We show how an attacker can smuggle secrets through the micro-op cache by using it as a covert channel."

Venkat's team includes three of his computer science graduate students, Ph.D. student Xida Ren, Ph.D. student Logan Moody and master's degree recipient Matthew Jordan. The UVA team collaborated with Dean Tullsen, professor in the Department of Computer Science and Engineering at the University of California, San Diego, and his Ph.D. student Mohammadkazem Taram to reverse-engineer certain undocumented features in Intel and AMD processors.

They have detailed the findings in their paper: "I See Dead μops: Leaking Secrets via Intel/AMD Micro-Op Caches."

This newly discovered vulnerability will be much harder to fix.

"In the case of the previous Spectre attacks, developers have come up with a relatively easy way to prevent any sort of attack without a major performance penalty" for computing, Moody said. "The difference with this attack is you take a much greater performance penalty than those previous attacks."

"Patches that disable the micro-op cache or halt speculative execution on legacy hardware would effectively roll back critical performance innovations in most modern Intel and AMD processors, and this just isn't feasible," Ren, the lead student author, said.

"It is really unclear how to solve this problem in a way that offers high performance to legacy hardware, but we have to make it work," Venkat said. "Securing the micro-op cache is an interesting line of research and one that we are considering."

Venkat's team has disclosed the vulnerability to the product security teams at Intel and AMD. Ren and Moody gave a tech talk at Intel Labs worldwide April 27 to discuss the impact and potential fixes. Venkat expects computer scientists in academia and industry to work quickly together, as they did with Spectre, to find solutions.

The team's paper has been accepted by the highly competitive International Symposium on Computer Architecture, or ISCA. The annual ISCA conference is the leading forum for new ideas and research results in computer architecture and will be held virtually in June.

Venkat is also working in close collaboration with the Processor Architecture Team at Intel Labs on other microarchitectural innovations, through the National Science Foundation/Intel Partnership on Foundational Microarchitecture Research Program.

Venkat was well prepared to lead the UVA research team into this discovery. He has forged a long-running partnership with Intel that started in 2012 when he interned with the company while he was a computer science graduate student at the University of California, San Diego.

This research, like other projects Venkat leads, is funded by the National Science Foundation and Defense Advanced Research Projects Agency.

Venkat is also one of the university researchers who co-authored a paper with collaborators Mohammadkazem Taram and Tullsen from UC San Diego that introduces a more targeted microcode-based defense against Spectre. Context-sensitive fencing, as it is called, allows the processor to patch running code with speculation fences on the fly.

Introducing one of just a handful of more targeted microcode-based defenses developed to stop Spectre in its tracks, "Context-Sensitive Fencing: Securing Speculative Execution via Microcode Customization" was published at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems in April 2019. The paper was also selected as a top pick among all computer architecture, computer security, and VLSI design conference papers published in the six-year period between 2014 and 2019.

The new Spectre variants Venkat's team discovered even break the context-sensitive fencing mechanism outlined in Venkat's award-winning paper. But in this type of research, breaking your own defense is just another big win. Each security improvement allows researchers to dig even deeper into the hardware and uncover more flaws, which is exactly what Venkat's research group did.

Credit: 
University of Virginia School of Engineering and Applied Science

Branching worm with dividing internal organs growing in sea sponge

image: Fragment of the anterior end of an individual living worm (Ramisyllis multicaudata) dissected out of its host sponge. Bifurcation of the gut can be seen where the worm branches. The yellow structure is a differentiation of the digestive tube typical of the Family Syllidae.

Image: 
Ponz-Segrelles & Glasby

The marine worm Ramisyllis multicaudata, which lives within the internal canals of a sponge, is one of only two such species possessing a branching body, with one head and multiple posterior ends. An international research team led by the Universities of Göttingen and Madrid is the first to describe the internal anatomy of this intriguing animal. The researchers discovered that the complex body of this worm spreads extensively in the canals of their host sponges. In addition, they describe the anatomical details and nervous system of its unusual reproductive units, the stolons, which form their own brain when detached for fertilization, allowing them to navigate their environment. The results were published in the Journal of Morphology.

The research team found the host sponges and their guest worms in a remote area in Darwin, Australia, where these animals live. They collected samples, some of which are now located in the collections of the Biodiversity Museum at the University of Göttingen. For their analysis, they combined techniques such as histology, electron microscopy, immunohistochemistry, confocal laser microscopy, and X-ray computed microtomography. This made it possible to obtain three-dimensional images both of the worms' different internal organs and of the interior of the sponges that they inhabit. The scientists show that when the body of these animals divides, so do all their internal organs, something that has never been observed before.

Furthermore, the three-dimensional models developed during this research have made it possible to find a new anatomical structure exclusive to these animals, which is formed by muscular bridges that cross between the different organs whenever their body has to form a new branch. These muscular bridges are essential because they confirm that the bifurcation process does not occur in the early stages of life, but once the worms are adults and then throughout their lives. In addition, researchers propose that this unique "fingerprint" of muscle bridges makes it theoretically possible to distinguish the original branch from the new one in each bifurcation of the complex body network.

In addition, this new study investigates the anatomy of the reproductive units (stolons) that develop in the posterior ends of the body when these animals are about to reproduce, and that are characteristic of the family to which they belong (Syllidae). The results show that these stolons form a new brain and have their own eyes. This allows them to navigate their environment when they are detached from the body for fertilization. This brain is connected to the rest of the nervous system by a ring of nerves that surrounds the intestine.

"Our research solves some of the puzzles that these curious animals have posed ever since the first branched annelid was discovered at the end of the 19th century," explains senior author Dr Maite Aguado, University of Göttingen. "However, there is still a long way to go to fully understand how these fascinating animals live in the wild. For example, this study has concluded that the intestine of these animals could be functional, yet no trace of food has ever been seen inside them and so it is still a mystery how they can feed their huge branched bodies. Other questions raised in this study are how blood circulation and nerve impulses are affected by the branches of the body." This research lays the foundations for understanding how these creatures live and how their incredible branched body came to evolve.

Credit: 
University of Göttingen

New view of species interactions offers clues to preserve threatened ecosystems

image: A blue magpie (Urocissa ornata), native to the rainforests of Sri Lanka, was photographed near the Sinharaja Forest Reserve, a World Heritage site that was part of a new study of ecosystem diversity by UC San Diego evolutionary biologist Professor Wills and his colleagues.

Image: 
Christopher Wills, UC San Diego

As the health of ecosystems in regions around the globe declines due to a variety of rising threats, scientists continue to seek clues to help prevent future collapses.

A new analysis by scientists from around the world, led by a researcher at the University of California San Diego, is furthering science's understanding of species interactions and how diversity contributes to the preservation of ecosystem health.

A coalition of 49 researchers examined a deep well of data describing tree species in forests located across a broad range of countries, ecosystems and latitudes. Information about the 16 forest diversity plots in Panama, China, Sri Lanka, Puerto Rico and other locations--many in remote, inaccessible areas--had been collected by hundreds of scientists and students over decades.

Lead researcher Christopher Wills, an evolutionary biologist and professor emeritus in the UC San Diego Division of Biological Sciences, says the new study addresses large questions about these complex ecosystems--made up of trees, animals, insects and even bacteria and viruses--and how such stunning diversity is maintained to support the health of the forest.

The new analysis, believed to be the most detailed study of such an enormous set of ecological data, is published in the journal PLOS Computational Biology.

"Observational and experimental evidence shows that all ecosystems are characterized by strong interactions between and among their many species. These webs of interactions can be important contributors to the preservation of ecosystem diversity," said Wills.

The authors note, however, that many of these interactions--including those involving microscopic pathogens and the chemical defenses mounted by their prey--are not easy to identify and analyze in ecosystems that feature tens to hundreds of millions of inhabitants.

The researchers employed a detailed computational tool to extract hidden details from the forest census data. Their new "equal-area-annulus" method identifies pairs and groups of tree species that show unusually high or low levels of between-species interactions affecting their recruitment, mortality and growth. The authors found, unexpectedly, that closely-related pairs of tree species in a forest often interact weakly with each other, while distantly-related pairs can often interact with surprising strength. Such new information enables the design of further fieldwork and experiments to identify the many other species of organisms that have the potential to influence these interactions. These studies will in turn pave a path to understanding the roles of these webs of interactions in ecosystem stability.
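
The release does not spell out the method's mechanics; the following is a hypothetical sketch of the equal-area-annulus idea: around each tree of one species, neighbors of a second species are counted in concentric rings of equal area, so that under a random spatial pattern every ring would receive roughly the same count, and systematic excesses or deficits point to between-species interactions.

```python
# Hypothetical sketch of an equal-area-annulus neighbor count (not the
# authors' implementation). Annulus k spans radii sqrt(k)*r0..sqrt(k+1)*r0,
# so every annulus has the same area, pi * r0**2.
import numpy as np

def equal_area_annulus_counts(focal_xy, other_xy, r0=5.0, n_annuli=10):
    """Mean count of 'other' trees in equal-area annuli around focal trees."""
    edges = np.sqrt(np.arange(n_annuli + 1)) * r0          # annulus boundaries
    counts = np.zeros(n_annuli)
    for x, y in focal_xy:
        d = np.hypot(other_xy[:, 0] - x, other_xy[:, 1] - y)
        hist, _ = np.histogram(d, bins=edges)
        counts += hist
    return counts / len(focal_xy)

# Example with random coordinates on a 200 m x 200 m plot:
rng = np.random.default_rng(0)
focal = rng.uniform(0, 200, size=(50, 2))
other = rng.uniform(0, 200, size=(300, 2))
print(equal_area_annulus_counts(focal, other))
# Under complete spatial randomness the counts are roughly flat across annuli;
# consistent excesses or deficits suggest between-species interactions.
```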

Most of the thousands of significant interactions that the new analysis revealed were of types that give advantages to the tree species if they are rare. The advantages disappear, however, when those species become common. Some well-studied examples of such disappearing advantages involve diseases of certain species of tree. These specialized diseases are less likely to spread when their host trees are rare, and more likely to spread when the hosts are plentiful. Such interaction patterns can help to maintain many different host tree species simultaneously in an ecosystem.

"We explored how our method can be used to identify the between-species interactions that play the largest roles in the maintenance of ecosystems and their diversity," said Wills. "The interplay we have found between and among species helps to explain how the numerous species in these complex ecosystems can buffer the ecosystems against environmental changes, enabling the ecosystems themselves to survive."

Moving forward, the scientists plan to continue using the data to help tease out specific influences that are essential to ecosystem health.

"We want to show how we can maintain the diversity of the planet at the same time as we are preserving ecosystems that will aid our own survival," said Wills.

Credit: 
University of California - San Diego

Northern forest fires could accelerate climate change

image: The study focused on wooded peatland and forests like the landscape pictured here near the town of Wrigley in Canada's Northwest Territories. This drone imagery was taken by then-BU PhD candidate Jonathan Wang during June 2017.

Image: 
Courtesy of Jonathan Wang

New research indicates that the computer-based models currently used to simulate how Earth's climate will change in the future underestimate the impact that forest fires and drying climate are having on the world's northernmost forests, which make up the largest forest biome on the planet. It's an important understanding because these northern forests absorb a significant amount of Earth's carbon dioxide.

The finding, reached by studying 30 years of the world's forests using NASA satellite imaging data, suggests that forests won't be able to sequester as much carbon as previously expected, making efforts to reduce carbon emissions all the more urgent.

"Fires are intensifying, and when forests burn, carbon is released into the atmosphere," says Boston University environmental earth scientist Mark Friedl, senior author on the study published in Nature Climate Change. "But we're also seeing longer growing seasons, warmer temperatures, which draws carbon out of the atmosphere [and into plants]. More CO2 in the atmosphere acts as a fertilizer, increasing growth of trees and plants—so, scientifically, there's been this big question out there: What is happening on a global scale to Earth's forests? Will they continue to absorb as much carbon as they do now?"

Today's forests capture about 30 percent of all human-related CO2 emissions, which Friedl calls a "huge buffer on anthropogenic climate change." The new study, however, reveals that scientists have so far been underestimating the impact that fires and other disturbances—like timber harvests—are having on Earth's northern forests and, at the same time, have been overestimating the growth-enhancing effect of climate warming and rising atmospheric CO2 levels.

"Current Earth systems models appear to be misrepresenting a big chunk of the global biosphere. These models simulate the atmosphere, oceans, and biosphere, and our results suggest [the model-based simulation of northern forests] has been way off," says Friedl, a BU College of Arts & Sciences professor of earth and environment and interim director of BU's Center for Remote Sensing. He is an expert in utilizing satellite imaging data to monitor Earth's ecosystems on a global scale.

"It is not enough for a forest to absorb and store carbon in its wood and soils. For that to be a real benefit, the forest has to remain intact—an increasing challenge in a warming, more fire-prone climate," says Jonathan Wang, the paper's lead author. "The far north is home to vast, dense stores of carbon that are very sensitive to climate change, and it will take a lot of monitoring and effort to make sure these forests and their carbon stores remain intact."

Working on his PhD in Friedl's lab, Wang researched new ways to leverage the record of data collected from the long-standing Landsat program, a joint NASA/US Geological Survey mission that has been extensively imaging Earth's surface from satellites for decades, to understand how Earth's forests are changing. Wang says new computational and machine learning techniques for combining large remote sensing datasets have become much more advanced, "enabling the monitoring of even the most remote ecosystems with unprecedented detail."

He developed a method to gain richer information from 30 years of Landsat data by comparing it with more recent measurements from NASA's ICESat mission, a satellite carrying laser-based imaging technology, called LiDAR, that can detect the height of vegetation within a forest. Landsat, on the other hand, primarily detects forest cover but not how tall the trees are.

Comparing the newer LiDAR measurements with imaging data gathered from Landsat during the same time period, the team then worked backwards to calculate how tall and dense the vegetation was over the last three decades. They could then determine how the biomass in Earth's northern forests has changed over time—revealing that the forests have been losing more biomass than expected due to increasingly frequent and extensive forest fires.
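
The team's actual retrieval model is not described here; the sketch below uses synthetic data and placeholder feature names to illustrate the general approach: calibrate a regression from Landsat-derived predictors to LiDAR-derived canopy height in the years where the two records overlap, then apply it to earlier Landsat imagery to hindcast forest structure.

```python
# Minimal sketch (not the authors' pipeline): relate Landsat predictors to
# ICESat/LiDAR canopy height in overlapping years, then hindcast from the
# 30-year Landsat record. Features and data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic stand-ins: rows are pixels, columns are Landsat-derived
# predictors (e.g., spectral bands / indices for a given year).
n_pixels, n_features = 1000, 6
landsat_recent = rng.random((n_pixels, n_features))        # years with LiDAR
lidar_height = 3 + 20 * landsat_recent[:, 0] + rng.normal(0, 1, n_pixels)

# 1) Calibrate on the overlap period.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(landsat_recent, lidar_height)

# 2) Apply to an earlier Landsat year to estimate past canopy structure.
landsat_1990 = rng.random((n_pixels, n_features))
height_1990 = model.predict(landsat_1990)
print(height_1990[:5])
# Repeating step 2 for each year yields a height/biomass time series from
# which losses to fire and gains from regrowth can be estimated.
```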

Specifically, Friedl says, the forests are losing conifers, trees that are emblematic of Earth's northern forests, and for good reason. "Fires come in and burn, and then the most opportunistic types of species grow back first—like hardwoods—which then get replaced by conifers such as black spruce," he says. "But over the last 30 years, which isn't a long time frame in the context of climate change, we see fires taking out more forests, and we see hardwoods sticking around longer rather than being replaced by conifers."

Conifers are better adapted to cold climates than hardwoods, which could potentially be contributing to the dwindling overall biomass of the forests.

"An often-stated argument against climate action is the supposed benefits that far northern ecosystems and communities will enjoy from increased warmth," Wang says. He hopes the study's discovery will help people understand that the global climate crisis has serious issues for the far north, as well. "It may be greening, in some sense," he says, "but in reality the climate-driven increase in wildfires is undoing much of the potential benefits of a warming, greening north."

Wang and Friedl's findings shed light on a question that would have been difficult to answer without the help of NASA's "eyes in the sky."

"Fire regimens are changing because of climate, and many areas of the world's forests are in uninhabited areas where the effects of intense fires may not be easily noticed," Friedl says. "When big chunks of real estate in places like California go up in flames, that gets our attention. But northern forests, which hold some of the largest stocks of carbon in the world, are being impacted by fires more than we realized until now."

Credit: 
Boston University

Battery parts can be recycled without crushing or melting

image: Electrodes removed from the batteries. Electrodes consist of a film made from aluminum or copper, for example, which is covered with a thin layer of active material.

Image: 
Aalto University

The proliferation of electric cars, smartphones, and portable devices is leading to an estimated 25 percent increase globally in the manufacturing of rechargeable batteries each year. Many raw materials used in the batteries, such as cobalt, may soon be in short supply. The European Commission is preparing a new battery decree, which would require the recycling of 95 percent of the cobalt in batteries. Yet existing battery recycling methods are far from perfect.

Researchers at Aalto University have now discovered that electrodes in lithium batteries containing cobalt can be reused as is after being newly saturated with lithium. In comparison to traditional recycling, which typically extracts metals from crushed batteries by melting or dissolving them, the new process saves valuable raw materials, and likely also energy.

'In our earlier study of how lithium cobalt oxide batteries age, we noticed that one of the main causes of battery deterioration is the depletion of lithium in the electrode material. The structures can nevertheless remain relatively stable, so we wanted to see if they can be reused,' explains Professor Tanja Kallio at Aalto University.

Rechargeable lithium-ion batteries have two electrodes between which electrically charged particles move. Lithium cobalt oxide is used in one electrode and, in most of the batteries, the other is made of carbon and copper.

In traditional battery recycling methods, some of the batteries' raw materials are lost and lithium cobalt oxide turns into other cobalt compounds, which require a lengthy chemical refinement process to turn them back into electrode material. The new method sidesteps this painstaking process: by replenishing the spent lithium in the electrode through an electrolysis process, commonly used in industry, the cobalt compound can be directly reused.

The results show that the performance of electrodes newly saturated with lithium is almost as good as that of those made of new material. Kallio believes that with further development the method would also work on an industrial scale.

'By reusing the structures of batteries we can avoid a lot of the labour that is common in recycling and potentially save energy at the same time. We believe that the method could help companies that are developing industrial recycling,' Kallio says.

The researchers next aim to see if the same method could also be used with the nickel-based batteries of electric cars.

Credit: 
Aalto University

Partially sighted may be at higher risk of dementia

Older people with vision loss are significantly more likely to suffer mild cognitive impairment, which can be a precursor to dementia, according to a new study published in the journal Ageing Clinical and Experimental Research.

The research by Anglia Ruskin University (ARU) examined World Health Organisation data on more than 32,000 people and found that people with loss in both near and far vision were 1.7 times more likely to suffer from mild cognitive impairment.

People with impairment of their near vision were 1.3 times more likely to suffer from mild cognitive impairment than someone with no vision impairment.

However, people who reported only loss of their far vision did not appear to have an increased risk.

Dr Lee Smith, Reader in Physical Activity and Public Health at ARU, said: "Our research shows for the first time that vision impairment increases the chances of having mild cognitive impairment. Although not everyone with mild cognitive impairment will go on to develop dementia, there is a likelihood of progression, and dementia is one of the major causes of disability and dependency in the older population."

Co-author Shahina Pardhan, Director of the Vision and Eye Research Institute at ARU, said: "Research now needs to focus on whether intervention to improve quality of vision can reduce the risk of mild cognitive impairment, and ultimately dementia. More work needs to be done to examine any possible causation, and what the reasons might be behind it."

The researchers examined population data from China, India, Russia, South Africa, Ghana and Mexico from the WHO's Study on Global Ageing and Adult Health (SAGE). The overall prevalence of mild cognitive impairment was 15.3% in the study sample of 32,715 people, while around 44% of the total number of people surveyed had vision impairment.

Credit: 
Anglia Ruskin University

Two studies demonstrate new PCI approaches offer benefits to patients and physicians

Washington, D.C., April 29, 2021 - Two studies related to percutaneous coronary intervention (PCI) evaluating the use of risk-avoidance strategies and robotic-assisted technology, respectively, are being presented as late-breaking clinical science at the Society for Cardiovascular Angiography & Interventions (SCAI) 2021 Scientific Sessions. An analysis of strategically avoiding high-risk PCI cases indicates systematic risk-avoidance does not improve, and may worsen, the quality of hospital PCI programs. A study of a robotic-assisted PCI shows the technology is safe and effective for the treatment of both simple and complex lesions; this has the potential to address the occupational hazards associated with radiation exposure and procedure-related orthopedic injuries for physicians.

PCI is a nonsurgical procedure that uses a stent to open up narrowed blood vessels, improving blood flow to the heart. PCI is performed more than 500,000 times every year across the U.S. and is a critical procedure to treat the most common form of heart disease, coronary artery disease (CAD) - making it top of mind for researchers to analyze and improve how the interventional cardiology community performs PCI.

Prioritizing Patient Outcomes: Evidence for Eliminating Risk-Avoidance Strategies

Patients may be considered high-risk for complications related to PCI based on their age, history of disease or other risk factors specific to their condition. In these cases, operators may avoid performing PCI so as to limit negative effects on performance metrics placed on hospital cath labs. However, the actual impact of these strategies on individual hospital performance had not been previously determined. New data indicates PCI should be offered to all eligible candidates, regardless of risk - highlighting the need to prioritize patient outcomes over performance metrics.

The analysis conducted through a collaboration between the Hospital of the University of Pennsylvania and Duke University Medical Center included all adult patients who underwent PCI at a participating hospital in the National Cardiovascular Data Registry CathPCI registry between January 1, 2017 and December 21, 2017. Risk-adjusted mortality rates were calculated for each hospital. To simulate a systematic risk-avoidance strategy, the highest predicted risk cases (top 10%) were eliminated for each hospital and risk-adjusted mortality rates were recalculated.
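
The registry's risk-adjustment model is more sophisticated than anything shown here; the sketch below, using synthetic data and a simplified observed-to-expected adjustment, illustrates the shape of the simulation: drop each hospital's top decile of predicted-risk cases and recompute its risk-adjusted mortality.

```python
# Simplified sketch of the risk-avoidance simulation (not the CathPCI
# registry's actual risk model). Risk-adjusted mortality is approximated
# as observed/expected deaths times the overall mortality rate.
import numpy as np
import pandas as pd

def risk_adjusted_mortality(df):
    overall = df["died"].mean()
    return (
        df.groupby("hospital")
          .apply(lambda g: g["died"].mean() / g["predicted_risk"].mean() * overall)
    )

rng = np.random.default_rng(1)
n = 50_000
df = pd.DataFrame({
    "hospital": rng.integers(0, 100, n),
    "predicted_risk": rng.beta(1, 60, n),            # predicted mortality risk
})
df["died"] = rng.random(n) < df["predicted_risk"]    # simulate outcomes

baseline = risk_adjusted_mortality(df)

# Simulated risk avoidance: each hospital drops its top 10% highest-risk cases.
kept = df.groupby("hospital", group_keys=False).apply(
    lambda g: g[g["predicted_risk"] <= g["predicted_risk"].quantile(0.90)]
)
after = risk_adjusted_mortality(kept)

change = after - baseline
print(f"{(change < 0).mean():.0%} of hospitals 'improve', "
      f"{(change > 0).mean():.0%} get worse")
```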

The findings showed that after removing the riskiest PCI procedures, there is no guarantee that the measured quality of a PCI program will improve. Of the 1,565 hospitals included in the analysis, 883 (56.4%) reduced their risk-adjusted mortality rate, but 610 (39.0%) increased it. On average, hospitals changed their risk-adjusted mortality rate by -0.14% with this strategy. There were no significant differences in the patient or procedural characteristics among hospitals that improved compared with those that worsened.

"Through our simulation, we saw that avoiding risky PCI cases is not always beneficial, and may take away from patient care. This signals the need to move away from performance metrics, as to provide each unique patient with the individualized care they need," said Ashwin Nathan, M.D., Hospital of the University of Pennsylvania. "We hope this will relieve some pressure and empower clinicians to refocus their approach in the cath lab."

Elevating Physician Safety: Benefits of Robotic-Assisted Technology

For interventional cardiologists performing PCI, there are known hazards including radiation exposure that can cause skin injuries and cancer. A new study demonstrates the safety and efficacy of a second generation robotic-assisted system for PCI, allowing physicians to control the procedure from a distance, rather than tableside. The final results of the PRECISION GRX Study reinforce the advantages of a robotic approach to protect physicians from medical imaging-related radiation, with excellent clinical and technical success.

The prospective international multicenter registry study enrolled patients with obstructive CAD with clinical indications for PCI treated with robotic PCI. The co-primary endpoints were clinical success - complete perfusion (final TIMI 3 flow) and less than 30% residual stenosis without in-hospital major adverse cardiac event - and technical success defined by robotic clinical success without the need for unplanned manual conversion.

Findings show the second generation CorPath GRX System for robotic-assisted PCI is safe, effective and achieves high rates of clinical and technical success across the spectrum of lesion complexity. The study enrolled 980 subjects (65.4 ± 11.6 years; 73.5% male; 1,233 lesions) across 20 centers. Of the patients, 31.6% had acute coronary syndrome and 68.8% had ACC/AHA type B2/C lesions. Clinical success was achieved in 97.8% (955/976) of subjects and 98.2% of lesions treated robotically. Device technical success was achieved in 86.5% (848/980) of subjects and 89.8% of lesions treated robotically. Technical success was higher for type A/B1 lesions as compared to type B2/C lesions (95.5% vs 87.2%, p

“With the second generation of the robotic system, we are able to grow the field with a safe and effective PCI approach for not just simple lesions, but very complex lesions as well,” said Ehtisham Mahmud, M.D., FSCAI, of The University of California, San Diego. “These findings will help fill a significant gap for physicians, by providing an opportunity for radiation protection – potentially allowing more physicians to extend their career. Moving forward, we plan to study the impacts of robotic PCI in a remote setting, to better reach rural areas with advanced care.”

Credit: 
Society for Cardiovascular Angiography and Interventions

New study of emerging materials helping in the detection of COVID-19

image: Panagiotis Tsiakaras is a co-author of more than 300 scientific papers, books, and patents, with ~14000 citations and Hirsch index 62.

Image: 
UrFU

The SARS-CoV-2 virus is still causing a dramatic loss of human lives worldwide, constituting an unprecedented challenge for society, public health, and the economy. Currently, SARS-CoV-2 can be diagnosed in two different ways: i) antigen tests (point-of-care, POC) and ii) molecular tests (nucleic acid, RNA, or PCR - polymerase chain reaction). Antigen tests can detect parts of SARS-CoV-2 proteins, known as antigens, via a nasopharyngeal or nasal swab sampling method. The main advantages of POC tests include high specificity, quick response (less than an hour), and portability, with no need for fixed laboratory facilities. A molecular diagnostic test, on the other hand, involves a reverse transcriptase polymerase chain reaction (RT-PCR), also known as a nucleic acid amplification method, which requires expensive laboratory equipment, hours of analysis, and specialized staff.

Despite the great efforts of the scientific community towards the development of diagnostic tools and the achievement of high specificity and sensitivity of the molecular tests, the concern about the control and detection of the SARS-CoV-2 remains.

Prof. Panagiotis Tsiakaras of Ural Federal University, together with colleagues in international research groups, reviewed the materials used for the design and development of electrochemical biosensors for SARS-CoV-2 detection, highlighting the significant role electrochemistry could play in controlling the COVID-19 disease. Such biosensors could serve as a successful virus diagnostic tool with high sensitivity and specificity, low cost and quick response, requiring no specialized personnel and offering the advantage of portability. The paper was published in the Journal of Electroanalytical Chemistry.

To date, two main groups of materials have been thoroughly explored as transducer electrodes: i) Au (gold)-based electrodes and ii) carbon- or graphene-based ones. Both offer faster response times (within a few seconds) and higher accuracy than current detection methods, and most also have higher sensitivity. Moreover, many of them can be made portable and miniaturized.

Comparing the two groups of materials, the researchers concluded that the carbon- or graphene-based electrodes can compete with the Au-based ones, as they have similar or better operational characteristics while also offering the advantage of lower cost.

In the current review, the scientists note that in the Au-based electrodes, the gold was mainly used in the form of nanoparticles on an alternative support (polymer-based or other) or supported on reduced graphene oxide before being deposited onto the base platform. Adding reduced graphene oxide (rGO) to the Au nanoparticles significantly improves the SARS-CoV-2 sensor's characteristics, mainly by expanding the detection area to which the virus binds.

In the case of carbon- or graphene-based electrodes, surface functionalization was the main strategy followed. Graphene and its derivatives in particular, which are considered the most promising materials, do not contain chemically reactive functional groups that could help immobilize analyte biomolecules. Thus, alterations of their surface or structure were investigated by: i) doping graphene with another (bio)element, ii) creating structural defects, or iii) using them as they are to modify screen-printed carbon electrodes.

Among the applied detection techniques, electrochemical impedance spectroscopy, amperometry, and differential pulse voltammetry were the most widely used. In the case of the amperometric technique, however, there is concern about the 'real' current response of the sensor in environments with high virus concentrations, as diffusion phenomena may prevail.

Electrochemical impedance spectroscopy, square-wave voltammetry, and differential pulse voltammetry are more sensitive and reliable detection techniques, especially at very low concentrations of the target analyte. However, to acquire the 'real' charge, the optimum operating conditions (frequency, voltage step, scan rate, etc.) must be set each time according to the virus concentration.

Concluding their review, Prof. Panagiotis Tsiakaras and his colleagues report that, among the explored electrode materials, Au-based and carbon- or graphene-based electrodes are the two main material groups, and that nanomaterial-based electrochemical biosensors could enable fast, accurate, and inexpensive virus detection. However, they note, further research on various nanomaterials and novel synthesis strategies is needed before SARS-CoV-2 biosensors can be commercialized.

Credit: 
Ural Federal University

Methane release rapidly increases in the wake of the melting ice sheets

image: Massive lumps of carbonate litter the seafloor where large quantities of methane are leaking from the sediments and rocks below, marking the spot Dessandier and colleagues targeted to drill deep sediment cores.

Image: 
Giuliana Panieri

Ice ages are not that easy to define. It may sound intuitive that an ice age represents a frozen planet, but the truth is often more nuanced than that.

An ice age has constant glaciations and deglaciations, with ice sheets pulsating to the rhythm of a changing climate. These giants have been consistently waxing and waning, exerting and lifting pressure on the ocean floor.

Several studies also show that the most recent deglaciation of the Barents Sea, leading into the Holocene (approximately 21,000-15,000 years ago), had a huge impact on the release of methane into the water. A new study in Geology looks even further into the past, some 125,000 years ago, and contributes to the conclusion that melting of the Arctic ice sheets drives the release of the potent greenhouse gas methane from the ocean floor.

"In our study, we expand the geological history of past Arctic methane release to the next to last interglacial, the so-called Eemian period. We have found that the similarities between the events of both Holocene and Eemian deglaciation advocate for a common driver for the episodic release of geological methane - the retreat of ice sheets." says researcher Pierre-Antoine Dessandier, who conducted this study as a postdoctoral fellow at CAGE Centre for Arctic Gas Hydrate Environment and Climate at UiT The Arctic University of Norway.

Seeing thousands of years of methane release in tiny shells

The study is based on measurements of different isotopes found in sediment cores collected from the Arctic Ocean. Isotopes are variations of chemical elements, such as carbon and oxygen, in this case. Different isotopes of the same element have different weight and interact with other chemical elements in the environment in specific ways. This means that the composition of certain isotopes is correlated to the environmental changes - such as temperature or amount of methane in the water column or within the sediment. Isotopes are taken up and stored in the shells of tiny organisms called foraminifera and in that way get archived in the sediments for thousands of years as the tiny creatures die. Also, if methane was released for longer periods of time, the archived shells get an overgrowth of carbonate which in itself also can be tested for isotopes.

"The isotopic record showed that as the ice sheet melted and pressure on the seafloor lessened during the Eemian, methane was released in violent spurts, slow seeps, or a combination of both. By the time the ice disappeared completely, some thousands of years later, methane emissions had stabilized." says Dessandier.

Where did the methane come from?

Arctic methane reservoirs consist of gas hydrates and free gas. Gas hydrates are solids, usually methane gas, frozen in a cage with water, and extremely susceptible to pressure and temperature changes in the ocean. These reservoirs are potentially large enough to raise atmospheric methane concentrations if released during the melting of glacial ice and permafrost. The Geology study reinforces the hypothesis that the release of this greenhouse gas strongly correlates with the melting of the ice sheets. It is also an example of the past showing what the future may hold.

"The present-day acceleration of Greenlands ice melt is an analogue to our model. We believe that the future release of methane from below and nearby these ice sheets is likely." Says Dessandier

Increasing methane emissions are a major contributor to the rising concentration of greenhouse gases in Earth's atmosphere, and are responsible for up to one-third of near-term global heating. During 2019, about 60% (360 million tons) of methane released globally was from human activities, while natural sources contributed about 40% (230 million tons).

How much methane eventually made it to the atmosphere during the Eemian and Holocene deglaciations remains uncertain. Part of the problem in quantifying this is the microbial communities that live on the seafloor and in the water and use methane to survive.

But both those past deglaciations happened over thousands of years, while the current retreat of the ice sheets is unprecedentedly rapid according to the geological record.

"The projections of future climate change should definitely include the release of methane following in the wake of diminishing ice sheets. Past can be used to better inform the future."

Credit: 
UiT The Arctic University of Norway

Obesity, high-salt diet pose different cardiovascular risks in females, males

image: Obesity and a high-salt diet are both bad for our hearts but they are bigger, seemingly synergistic risks for females, scientists report.

Image: 
Kim Ratliff, Augusta University

Obesity and a high-salt diet are both bad for our hearts but they are bigger, seemingly synergistic risks for females, scientists report.

"We see younger and younger women having cardiovascular disease and the question is: What is the cause?" says Dr. Eric Belin de Chantemele, physiologist in the Vascular Biology Center and Department of Medicine at the Medical College of Georgia at Augusta University. "We think the fact that females are more salt sensitive and more sensitive to obesity are among the reasons they have lost the natural protection youth and estrogen are thought to provide."

His message to women based on the sex differences they are finding: "First reduce your consumption of salt, a message the American Heart Association has been pushing for years, which should also result in a reduction in your intake of highly processed, high-calorie food and drink."

Belin de Chantemele, whose research team has been exploring why so many young women are now getting cardiovascular disease, is presenting their findings during the Henry Pickering Bowditch Award Lectureship at the American Physiological Society Annual Meeting at Experimental Biology 2021 this week. The award, which honors the scientist who created the first physiology lab in the country and was the American Physiological Society's first president, recognizes original and outstanding accomplishments in the field of physiology by a young investigator.

The sex hormone estrogen, which has some protective powers like keeping blood vessels more flexible, is considered a natural protection for premenopausal women. Yet, along with soaring rates of severe obesity in young women, heart disease is now the third leading cause of death in females between the ages of 20 and 44 -- fourth for males in that age group. It moves up to second place for the next 20 years of life in both sexes, and is the number one killer for both men and women across all ages, according to the National Vital Statistics Reports.

While he refers to bad nutrition as the "world's biggest killer" and obesity as a major risk factor for hypertension in both sexes, his lab has mounting evidence that obesity and high salt intake are even bigger risks for females, who have naturally higher levels of two additional hormones, leptin and aldosterone, setting the stage for the potentially deadly cardiovascular disparities.

Many of us likely think of leptin as the "satiety hormone" that sends our brain cues to stop eating when our stomach is full, but in obesity, the brain typically stops listening to the full message but the cardiovascular system of women starts getting unhealthy cues.

Belin de Chantemele has shown that in females leptin prompts the adrenal glands, which make aldosterone, to make even more of this powerful blood vessel constrictor. As with leptin, females, regardless of their weight, already have naturally higher levels of aldosterone and actually bigger adrenal glands as well.

A result: Obesity actually produces larger blood pressure increases in females, and studies indicate that females also are more prone to obesity-associated vascular dysfunction -- things like more rigid blood vessels that are not as adept at dilating. On the other hand, leptin actually increases production of the vasodilator nitric oxide -- which reduces blood pressure -- in male mice, one of many cardiovascular differences they are finding between males and females.

Here's another. "The major role of aldosterone is to regulate your blood volume," Belin de Chantemele says. Increased salt intake should suppress aldosterone, and it does work that way in males, Belin de Chantemele says. But in females it appears to set them up for more trouble.

Aldosterone is the main mineralocorticoid, a class of hormones that helps maintain salt balance, and Belin de Chantemele and his team reported in 2019 in the journal Hypertension that the hormone progesterone, which enables pregnancy, also enables high levels of these mineralocorticoid receptors for aldosterone in the endothelial cells that line blood vessels in both female lab animals and human blood vessels.

When they removed the ovaries, which make estrogen and progesterone, from the female lab animals it equalized the mineralocorticoid receptor number, helping confirm that progesterone regulates the expression of the receptor in the females' blood vessels. When they deleted either the mineralocorticoid or progesterone receptor in the females, it prevented the blood vessel dysfunction that typically follows, and just knocking out the progesterone receptor also suppressed the aldosterone receptor.

The bottom line is that progesterone is key to the sex difference in aldosterone receptor expression on endothelial cells, which predisposes females to obesity associated, high-leptin driven endothelial dysfunction and likely high blood pressure, Belin de Chantemele says.

They reported a few years before in the same journal that higher leptin levels produced by more fat prompts the adrenal glands to make more aldosterone in females. "If you have higher aldosterone levels you will retain sodium and your blood volume will be higher," he says.

They've also reported, as have others, that females are more salt sensitive than males. High sodium intake is known to raise blood pressure, by increasing fluid retention, and both pre- and postmenopausal females are more salt sensitive than males, Black females even more so, he says.

They've shown, for example, that in just seven days on a high-salt diet, the ability of female mice to relax blood vessels decreased as blood pressure increased. Treatment with the aldosterone antagonist eplerenone helped correct both.

Because females already make more aldosterone, and the normal response of the body when you eat a lot of salt is to make even more aldosterone to help eliminate some of it, his team now proposes that females appear to have an impaired ability to reduce both the levels of the enzyme that makes aldosterone and the hormone itself, which makes them more salt sensitive.

One thing that means is that salt raises females' blood pressure without them actually retaining more salt than the males. It also means that they think that blood vessels are more important in blood pressure regulation in females than males, which means they may need different treatment than males. To further compound the scenario, high salt increases the adrenal leptin receptor in the females, providing more points of action for leptin, which probably helps explain why aldosterone levels don't decrease in females like they do in males.

A new $2.6 million grant (1R01HL155265-01) from the National Heart, Lung and Blood Institute is enabling them to further investigate, in both lab animals and human tissue, the female's unique responses to a high-salt diet, including the specific contributions of the failure of aldosterone levels to drop, along with the increased expression of aldosterone and leptin receptors.

While trends in being overweight in about the last 50 years have held pretty steady for men and women, with decreases for men in the last handful of years, rates of severe obesity have been climbing, with women far outpacing men.

"We want to continue to put the puzzle together with the goal of helping restore protection from cardiovascular disease to young women, when a healthy diet and increased physical activity do not," Belin de Chantemele says.

His research team includes Galina Antonova, research assistant; Dr. Reem Atawia, postdoctoral fellow; Simone Kennard, research associate; Taylor Kress and Candee Barris, graduate students; Vinay Mehta, undergraduate student at AU; Laszlo Kovacs, assistant research scientist; and Dr. Jessica Faulkner, postdoctoral fellow.

Credit: 
Medical College of Georgia at Augusta University

Rock humidity in Spain's dehesas: An additional source of water for vegetation

A study by the Hydrology and Agricultural Hydraulics group at the University of Cordoba analyses the potential of rock in dehesas as a source of water for vegetation

Soil is an essential reservoir of the water cycle, not so much because of the volume it represents, but rather due to its continuous renewal and because of humanity's ability to take advantage of it. Although the evolution of the climate in the medium and long term may modify current conditions, the variability of precipitation can cause notable changes in the natural systems of arid and semi-arid areas, accelerating their degradation, especially through erosion, and, with it, the loss of soil. Thus, the challenge is not just to conserve the soil, but rather to accelerate its formation.

Soil formation processes depend on hydrological ones, so it is important to study the evolution of soil moisture as an indicator. Researcher Vanesa García Gamero, together with Adolfo Peña, Ana Laguna, Juan Vicente Giráldez and Tom Vanwalleghem, professors at the University of Córdoba's Agronomic and Mountain Engineering School (ETSIAM) and researchers associated with the institution's AGR-127 Hydrology and Agricultural Hydraulics group, measured soil moisture for three years in a dehesa located on granite materials in Córdoba's Sierra Morena mountain range to explore asymmetries in the basin and the factors that control this variable.

Between 2016 and 2019 the team analysed the evolution of soil moisture through a network of 32 sensors on two opposite slopes, one facing north and the other facing south. They examined the vegetation dynamics by estimating biomass and the Normalized Difference Vegetation Index (NDVI) obtained from the Sentinel-2 satellite of Europe's Copernicus mission, which indicates the greenness of the plant, together with the dynamics of the water table, which was only detected on the north-facing slope.
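
NDVI itself is a standard quantity; the snippet below shows the usual computation from Sentinel-2's red (band 4) and near-infrared (band 8) reflectance, using synthetic example values.

```python
# NDVI from Sentinel-2 surface reflectance: NDVI = (NIR - Red) / (NIR + Red),
# using band 8 (near-infrared) and band 4 (red). Arrays here are synthetic.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero

# Example with small synthetic reflectance tiles (0-1 scale):
b8 = np.array([[0.45, 0.50], [0.30, 0.60]])   # NIR
b4 = np.array([[0.10, 0.12], [0.20, 0.08]])   # Red
print(ndvi(b8, b4))
# Values near +1 indicate dense green vegetation; values near 0 or below
# indicate bare soil, rock, or water.
```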

While the soil moisture dynamics are similar on both slopes, the results reveal a difference in terms of vegetation. The estimated biomass on the north-facing slope is 29% higher than on the south-facing slope. The NDVI pattern reflects opposite trends on the two slopes, with the minimum values being reached at different times of year: on the south-facing slope it occurs in the summer, while on the north-facing slope it occurs at the end of winter.

The orientation of the slope has a hydrological effect through the vegetation and the subsurface structure of the "critical zone" (the layer that extends from the canopy of the trees to the bottom of the groundwater), thus termed for its importance to the natural systems on which humanity depends. The soil on the south-facing slope is shallow, while on the north-facing slope a 9.5 m layer of fragmented and weathered material lies beneath the soil profile. This layer augments the water storage capacity of the north-facing slope, not only as soil moisture but also as "rock moisture" (water stored in the highly weathered zone above the saturated area) and as groundwater, which was detected only on this slope. The vegetation on the north-facing slope could therefore be sustained not only by soil moisture but also by moisture in the rock and the saturated zone, while the only source of water available to vegetation on the south-facing slope is the soil moisture.

These conclusions are of interest, especially for semi-arid climates such as the Mediterranean's, as rock humidity could provide an additional source of water to sustain vegetation during periods of drought.

Credit: 
University of Córdoba

Data from China's Fengyun meteorological satellites available to global Earth system science applications

image: Fengyun meteorological satellites. Green: retired satellites; White: satellites in orbit; Yellow: satellites to be launched

Image: 
National Satellite Meteorological Center of China Meteorological Administration

Many meteorological satellite networks are constantly scanning Earth, providing vital research data and real-time, life-saving weather information. Since China began developing the program in 1970, the Fengyun (FY) series of meteorological satellites has advanced considerably over more than 50 years. While FY satellites primarily focus on the atmosphere, they are capable of observing complex variables within the Earth-atmosphere system. To date, China has successfully launched 17 FY satellites, seven of which are currently operating in orbit.

The Fengyun Meteorological Satellite Ground Application System generates more than 90 Earth observation products every day, amounting to more than 10 TB of data daily. The FY Satellite Data Center has continuously archived Earth observation data since FY-1A launched in 1988; more than 12 PB of satellite data have accumulated in the database over more than 30 years.

"Several approaches for FY satellite data access have been developed for real-time users, scientific researchers, and public users." said Dr. Peng Zhang, the deputy director of National Satellite Meteorological Center of China Meteorological Administration. "All FY satellite data products are open to the world users and free to download."

Dr. Zhang is also the corresponding author of a data description article recently published in Advances in Atmospheric Sciences. The article showcases FY data products used to observe wildfires, lightning, vegetation indices, aerosols and soil moisture, and to estimate precipitation. All of these products have been validated against in-situ observations and cross-referenced with other well-known satellite products.

The European Centre for Medium-Range Weather Forecasts (ECMWF) and weather forecasting agencies in China have assimilated the wide array of FY data into many numerical weather prediction (NWP) models. Since the 1990s, coupled meteorological satellites and numerical models have changed the way scientists understand the Earth. As numerical weather prediction and Earth system models continue to evolve, meteorological satellites will play a more important role in the future of Earth sciences.

"FY is included in the World Meteorological Organization's global operational meteorological satellite sequence. It provides data and products to more than 110 countries and regions as well as 2,700 broadcasting users." Dr. Zhang added. "We have been working with scientists from ECMWF, UK MetOffice, and other countries to improve data verification and application. We welcome scientists and forecasters all over the world to use FY data."

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Blueprint for a robust quantum future

image: Staff scientist Joseph Heremans working in an Argonne lab used to synthesize ultrapure diamond crystals and engineer electron spins that carry quantum information.

Image: 
(Image by Argonne National Laboratory.)

Claiming that something has a defect normally suggests an undesirable feature. That’s not the case in solid-state systems, such as the semiconductors at the heart of modern classical electronic devices. They work because of defects introduced into the rigidly ordered arrangement of atoms in crystalline materials like silicon. Surprisingly, in the quantum world, defects also play an important role.

Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, the University of Chicago and scientific institutes and universities in Japan, Korea and Hungary have established guidelines that will be an invaluable resource for the discovery of new defect-based quantum systems. The international team published these guidelines in Nature Reviews Materials.

Such systems have possible applications in quantum communications, sensing and computing, and thereby could have a transformative effect on society. Quantum communications could distribute quantum information robustly and securely over long distances, making a quantum internet possible. Quantum sensing could achieve unprecedented sensitivities for measurements of biological, astronomical, technological and military interest. Quantum computing could reliably simulate the behavior of matter down to the atomic level and possibly help discover new drugs.

The team derived their design guidelines based on an extensive review of the vast body of knowledge acquired over the last several decades on spin defects in solid-state materials.

“The defects that interest us here are isolated distortions in the orderly arrangement of atoms in a crystal,” explained Joseph Heremans, a scientist in Argonne’s Center for Molecular Engineering and Materials Science division, as well as the University of Chicago Pritzker School of Molecular Engineering.

Such distortions might include holes or vacancies created by the removal of atoms or impurities added as dopants. These distortions, in turn, can trap electrons within the crystal. These electrons have a property called spin, which acts as an isolated quantum system.

“Spin being a key quantum property, spin defects can hold quantum information in a form that physicists call quantum bits, or qubits, in analogy with the bit of information in classical computing,” added Gary Wolfowicz, assistant scientist in Argonne’s Center for Molecular Engineering and Materials Science division, along with the University of Chicago Pritzker School of Molecular Engineering.

For several decades, scientists have been studying these spin defects to create a broad array of proof-of-concept devices. However, previous research has focused on only one or two leading qubit candidates.

“Our field has had a somewhat narrow focus for many years,” said Christopher Anderson, a postdoctoral scholar in the University of Chicago Pritzker School of Molecular Engineering. “It was like we only had a few horses in the quantum race. But now we understand that there are many other quantum horses to back, and exactly what to look for in those horses.”

The team’s guidelines encompass the properties of both the defects and the material selected to host them. The key defect properties are spin, optical (for example, how light interacts with the spin of the trapped electrons), and charge state of the defect.

Possible solid-state materials include not only the already well-studied few, such as silicon, diamond and silicon carbide, but also more recent entries like various oxides. All of these materials have different advantages and disadvantages, which are laid out in the guidelines. For example, diamond is clear and hard but expensive. Silicon, on the other hand, is easy and inexpensive to make devices with, but is more affected by free charges and temperature.

“Our guidelines are there for quantum scientists and engineers to assess the interplay between the defect properties and the selected host material in designing new qubits tailored to some specific application,” Heremans noted.

“Spin defects have a central role to play in creating new quantum devices, whether they be small quantum computers, the quantum internet, or nanoscale quantum sensors,” continued Anderson. “By drawing upon the extensive knowledge on spin defects to derive these guidelines, we have laid the groundwork so that the quantum workforce — now and in the future — can design from the ground up the perfect qubit for a specific use.”

“We are especially proud of our guidelines because intended users extend from veteran quantum scientists to researchers in other fields and graduate students hoping to join the quantum workforce,” said Wolfowicz.

The work also establishes the groundwork for designing scalable semiconductor quantum devices and dovetails well with Q-NEXT, a DOE-funded quantum information science research center led by Argonne. Q-NEXT’s goal includes establishing a semiconductor quantum “foundry” for developing quantum interconnects and sensors.

“Our team’s guidelines will act as a blueprint to help direct the Q-NEXT mission in designing the next generation of quantum materials and devices,” said David Awschalom, senior scientist in Argonne’s Materials Science division, Liew Family Professor of Molecular Engineering at the University of Chicago Pritzker School of Molecular Engineering, and director of both the Chicago Quantum Exchange and Q-NEXT. “When it comes to quantum technologies with spins, this work sets the stage and informs the field how to move forward.”

The team’s guidelines appear in a paper entitled “Qubit guidelines for solid-state spin defects” and published in Nature Reviews Materials. In addition to Heremans, Wolfowicz, Anderson and Awschalom, authors include Shun Kanai (Tohoku University, Japan), Hosung Seo (Ajou University, Korea), Adam Gali (Budapest University of Technology and Economics, Hungary) and Giulia Galli (Argonne and University of Chicago).

This research was primarily supported by the DOE Office of Science.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Credit: 
DOE/Argonne National Laboratory

Avocado discovery may point to leukemia treatment

image: A compound in avocados may ultimately offer a route to better leukemia treatment, says a new University of Guelph study.

Image: 
University of Guelph

A compound in avocados may ultimately offer a route to better leukemia treatment, says a new University of Guelph study.

The compound targets an enzyme that scientists have identified for the first time as being critical to cancer cell growth, said Dr. Paul Spagnuolo, Department of Food Science.

Published recently in the journal Blood, the study focused on acute myeloid leukemia (AML), which is the most devastating form of leukemia. Most cases occur in people over age 65, and fewer than 10 per cent of patients survive five years after diagnosis.

Leukemia cells have higher amounts of an enzyme called VLCAD involved in their metabolism, said Spagnuolo.

"The cell relies on that pathway to survive," he said, explaining that the compound is a likely candidate for drug therapy. "This is the first time VLCAD has been identified as a target in any cancer."

His team screened numerous nutraceutical compounds, looking for any substance that might inhibit the enzyme. "Lo and behold, the best one was derived from avocado," said Spagnuolo.

Earlier, his lab looked at avocatin B, a fat molecule found only in avocados, for potential use in preventing diabetes and managing obesity. Now he's eager to see it used in leukemia patients.

"VLCAD can be a good marker to identify patients suitable for this type of therapy. It can also be a marker to measure the activity of the drug," said Spagnuolo. "That sets the stage for eventual use of this molecule in human clinical trials."

Currently, about half of patients over 65 diagnosed with AML enter palliative care. Others undergo chemotherapy, but drug treatments are toxic and can end up killing patients.

"There's been a drive to find less toxic drugs that can be used."

Referring to earlier work using avocatin B for diabetes, Spagnuolo said, "We completed a human study with this as an oral supplement and have been able to show that appreciable amounts are fairly well tolerated."

Credit: 
University of Guelph