Tech

New bioprosthetic valve for TAVR fails to demonstrate non-inferiority

NEW YORK - October 15, 2020 - In a randomized clinical trial, SCOPE II, a new self-expanding bioprosthetic valve used in transcatheter aortic valve replacement (TAVR) failed to demonstrate non-inferiority compared to an existing self-expanding valve.

Findings were reported today at TCT Connect, the 32nd annual scientific symposium of the Cardiovascular Research Foundation (CRF). TCT is the world's premier educational meeting specializing in interventional cardiovascular medicine. The study was also published simultaneously in Circulation.

The SCOPE II trial was designed to compare the clinical outcomes of the ACURATE neo and CoreValve Evolut valves. A total of 796 patients aged 75 years or older with symptomatic severe aortic stenosis and an indication for transfemoral TAVR as agreed by the Heart Team were recruited at 23 tertiary heart valve centers in Denmark, France, Germany, Italy, Spain and the United Kingdom. Participants were randomly assigned (1:1) to receive treatment with the ACURATE neo (n=398) or the CoreValve Evolut devices (n=398).

The primary safety endpoint, powered for non-inferiority of the ACURATE neo valve using a noninferiority margin of 6%, was the composite of all-cause mortality or stroke at 12 months. The primary efficacy endpoint, powered for superiority, was new permanent pacemaker implantation at 30 days. Secondary endpoints included clinical efficacy and safety endpoints at 30 days and 12 months.

In the intention-to-treat analysis, death or stroke at one year occurred in 15.8% of the ACURATE neo group compared to 13.9% of the CoreValve Evolut group, while in the per-protocol analysis the rates were 15.3% vs. 14.3%. Non-inferiority of the ACURATE neo was not met for the primary endpoint in the intention-to-treat analysis, while it was met in the per-protocol analysis. Because of these inconsistent results, and based on the prespecified statistical plan, non-inferiority was not established for the primary endpoint.
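For readers unfamiliar with the framework, a non-inferiority comparison asks whether the new device is at most a pre-specified margin worse than the comparator. The sketch below is a simplified normal-approximation illustration using the published rates and arm sizes; it is not the trial's prespecified statistical analysis.

    # Illustrative non-inferiority check on a risk difference, using a normal
    # approximation. Rates and arm sizes come from the press release; the 6%
    # margin is the trial's stated non-inferiority margin. This is a sketch,
    # not the trial's actual analysis plan.
    from math import sqrt

    Z_95 = 1.645  # one-sided 95% normal quantile

    def noninferiority_check(p_new, p_ref, n_new, n_ref, margin=0.06):
        """Return the risk difference, its upper one-sided 95% confidence
        bound, and whether that bound stays below the margin."""
        diff = p_new - p_ref
        se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
        upper = diff + Z_95 * se
        return diff, upper, upper < margin

    # Intention-to-treat rates: 15.8% vs 13.9%, 398 patients per arm
    print(noninferiority_check(0.158, 0.139, 398, 398))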

New pacemaker implantation at 30 days was 10.5% with ACURATE neo compared to 18.0% with CoreValve Evolut (risk difference -7.5%, 95% CI -12.4 to -2.6, P=0.0027). Cardiac death at 30 days (2.8% vs 0.8%, p=0.03) and at one year (8.4% vs 3.9%, p=0.01) was greater in the ACURATE neo group. The rate of moderate or severe aortic regurgitation was 9.6% vs 2.9%.

"TAVR with the ACURATE neo valve did not meet noninferiority compared with the CoreValve Evolut bioprosthesis with respect to a composite of death or stroke at one year," said Corrado Tamburino, MD, PhD, Division of Cardiology at C.A.S.T. (Centro Alte Specialità e Trapianti) Policlinico - University of Catania, Catania, Italy. "In a secondary analysis with limited statistical power, cardiac death was increased at one year in patients who received the ACURATE neo valve. The two valves also differed with respect to technical characteristics such as degree of aortic regurgitation and need for new permanent pacemaker implantation."

Credit: 
Cardiovascular Research Foundation

Results from the REFLECT II Trial reported at TCT Connect

NEW YORK - October 15, 2020 - The REFLECT II randomized clinical trial evaluated the safety and efficacy of a device designed to reduce cerebral embolization and ischemic stroke, which are complications of transcatheter aortic valve replacement (TAVR). The trial found that the device met the primary safety endpoint compared with historical controls but did not demonstrate superiority for the primary hierarchical efficacy endpoint.

Findings were reported today at TCT Connect, the 32nd annual scientific symposium of the Cardiovascular Research Foundation (CRF). TCT is the world's premier educational meeting specializing in interventional cardiovascular medicine.

The REFLECT II trial evaluated the safety and effectiveness of the TriGuard 3 (TG3), a self-stabilizing cerebral embolic deflection filter, in patients undergoing TAVR. REFLECT II intended to randomize 295 patients 2:1 to TAVR with TG3 vs. control. The primary safety endpoint was a composite of all-cause mortality, stroke, life-threatening or disabling bleeding, stage 2/3 acute kidney injury, coronary artery obstruction requiring intervention, major vascular complication, and valve-related dysfunction requiring intervention (VARC 2 defined) at 30 days. The endpoint was compared with a Performance Goal (PG) of 34.4%. The primary efficacy endpoint was a hierarchical composite of all-cause mortality or stroke at 30 days, NIHSS worsening, absence of diffusion-weighted magnetic resonance imaging (DWI) lesions post-procedure, and total volume of cerebral lesions (TLV) by DWI. Cumulative scores derived by the Finkelstein-Schoenfeld method were summed for each patient and compared between groups.
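To give a sense of how a hierarchical composite of this kind is scored, the sketch below implements a generic pairwise, win-ratio-style comparison on invented data; the trial's actual Finkelstein-Schoenfeld analysis is more involved.

    # Hypothetical sketch of a hierarchical pairwise (win-ratio style) comparison.
    # Each patient is a tuple of outcomes ordered from most to least important:
    # (death_or_stroke, nihss_worsened, has_dwi_lesions, lesion_volume_mm3).
    # Smaller is better on every tier; the first tier that differs in a pair
    # decides the winner. Data below are invented for illustration only.

    def compare(a, b):
        """Return +1 if patient a 'wins' the pair, -1 if b wins, 0 if tied."""
        for x, y in zip(a, b):
            if x < y:
                return +1
            if x > y:
                return -1
        return 0

    device = [(0, 0, 1, 120.0), (0, 1, 1, 300.0), (0, 0, 0, 0.0)]
    control = [(0, 0, 1, 200.0), (1, 1, 1, 450.0)]

    wins = losses = ties = 0
    for a in device:
        for b in control:
            r = compare(a, b)
            wins += r == +1
            losses += r == -1
            ties += r == 0

    win_ratio = wins / losses if losses else float("inf")
    print(wins, losses, ties, win_ratio)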

The REFLECT II analysis population included 283 patients [41 roll-in, 121 randomized to TG3 and 121 controls (58 randomized in phase II and 63 pooled from REFLECT phase I)]. TG3 was delivered and positioned in the aortic arch prior to TAVR in 100% of cases and retrieved intact in all cases.

After enrollment of 179 of the 225 planned randomized patients, the sponsor suspended trial enrollment with the concurrence of the FDA and DMC. After limited unblinding and review of the data, Keystone Heart decided to formally close the study and proceed with the marketing application (510(k)).

TG3 met the primary safety endpoint (22.5% vs the 34.4% performance goal; P for non-inferiority = 0.0001). However, superiority for the primary efficacy endpoint was not met, with similar win ratios and win percentages between groups (TG3 0.84 (45.7%) vs 1.19 (54.3%), p=0.857). Median TLV was not different with TG3 protection (215.39 mm³ vs 188.09 mm³, p=0.405).

"Compared to controls, the primary 30-day safety endpoint was higher with TriGuard 3 due primarily to TAVR related vascular and bleeding complications," said Jeffrey W. Moses, MD. Dr. Moses is a Professor of Cardiology at Columbia University Vagelos College of Physicians and Surgeons, Director of Interventional Cardiovascular Therapeutics, NewYork-Presbyterian/Columbia University Irving Medical Center and Director of Advanced Cardiac Interventions, St. Francis Hospital and Heart Center. "While the study did not demonstrate superiority of TriGUARD 3 compared to pooled controls for the primary hierarchical efficacy endpoint, a post hoc DW-MRI analysis suggests that TG3 may reduce larger ischemic lesions. Improved device stability to achieve reliable, complete cerebral coverage may improve outcomes."

Credit: 
Cardiovascular Research Foundation

Laser technology measures biomass in world's largest trees

image: Laser technology has been used to measure the volume and biomass of giant Californian redwood trees for the first time, records a new study by UCL researchers.

Image: 
Mat Disney

Laser technology has been used to measure the volume and biomass of giant Californian redwood trees for the first time, records a new study by UCL researchers.

The technique, published in Scientific Reports, offers unprecedented insights into the 3D structure of trees, helping scientists to estimate how much carbon they absorb and how they might respond to climate change.

Professor Mat Disney (UCL Geography), lead author on the study, said: "Large trees are disproportionately important in terms of their above ground biomass (AGB) and carbon storage, as well as their wider impact on ecosystem structure. They are also very hard to measure and so tend to be underrepresented in measurements and models of AGB.

"We show the first detailed 3D terrestrial laser scanning (TLS) estimates of the volume and AGB of large coastal redwood trees (Sequoia sempervirens) from three sites in Northern California, representing some of the highest biomass ecosystems on Earth."

The research contributes to an area of climate change science that is receiving increasing attention.

Professor Disney added: "Big questions within climate science in response to rising CO2 levels are whether and where more trees should be planted and how best to conserve existing forests. In order to answer these questions, scientists first need to understand how much carbon is stored in different tree species."

Estimating the size and mass of very large trees is an extremely difficult task. Previously, trees could only be weighed by cutting them down or by using other indirect methods such as remote sensing or scaling up from manual measurements of trunk diameter, both of which have potentially large margins of error.

Working with colleagues at NASA, and with support from the NASA Carbon Monitoring System programme, the researchers used ground-based laser measurements to create detailed 3D maps of the redwoods. NASA's new space laser mission, GEDI, is mapping forest carbon from space, and the GEDI team are using Professor Disney's work to test and improve the models they use to do this.

The trees scanned include the 88-metre tall Colonel Armstrong tree, with a diameter-at-breast height of 3.39 m, which they estimate weighs around 110 tons, the equivalent of almost 10 double-decker buses.
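As a back-of-envelope illustration of how a laser-derived volume becomes a mass estimate, above-ground biomass is essentially woody volume multiplied by wood density. The numbers below are assumed round values chosen for the arithmetic, not figures reported in the study.

    # Rough illustration: above-ground biomass ~= woody volume x wood density.
    # Both values below are illustrative assumptions, not data from the
    # UCL/NASA team.
    volume_m3 = 300.0          # hypothetical TLS-derived woody volume, m^3
    density_kg_m3 = 370.0      # assumed basic wood density for coast redwood
    biomass_tonnes = volume_m3 * density_kg_m3 / 1000.0
    print(f"Estimated above-ground biomass: {biomass_tonnes:.0f} tonnes")
    # ~111 tonnes, the same order as the roughly 110-ton figure quoted above
    # for the Colonel Armstrong tree.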

The researchers compared the TLS estimates with other methods and found that their estimates correlated with those of 3D crown mapping, a technique pioneered by American botanist Stephen C. Sillett that involves expert climbers scaling giant redwoods to manually record fine details about their height and mass.

Professor Disney's team found that their AGB estimates agreed to within 2% of the records from crown mapping. Crucially, they also found that both these 3D methods show that these large trees are more than 30% heavier than current estimates from more indirect methods.

The researchers recommend that laser technology and 3D crown mapping could be used to provide complementary, independent 3D measures.

Assistant Professor Laura Duncanson (NASA Earth Sciences & University of Maryland), last author on the study and member of the NASA GEDI science team, said: "Estimating the biomass of large trees is critical to quantifying their importance to the carbon cycle, particularly in Earth's most carbon rich forests. This exciting proof of concept study demonstrates the potential for using this new technology on giant trees - our next step will be to extend this application to a global scale in the hopes of improving GEDI's biomass estimates in carbon dense forests around the world."

Credit: 
University College London

Nudges with machine learning triple advance care conversations among cancer patients

PHILADELPHIA--An electronic nudge to clinicians--triggered by an algorithm that used machine learning methods to flag patients with cancer who would most benefit from a conversation around end-of-life goals--tripled the rate of those discussions, according to a new prospective, randomized study of nearly 15,000 patients from Penn Medicine and published today in JAMA Oncology.

Early and frequent conversations with patients suffering from serious illness, particularly cancer, have been shown to increase satisfaction, quality of life, and care that's consistent with their values and goals. However, today many do not get the opportunity to have those discussions with a physician or loved ones because their disease has progressed too far and they're too ill.

"Within and outside of cancer, this is one of the first real-time applications of a machine learning algorithm paired with a prompt to actually help influence clinicians to initiate these discussions in a timely manner, before something unfortunate may happen," said co-lead author Ravi B. Parikh, MD, an assistant professor of Medical Ethics and Health Policy and Medicine in the Perelman School of Medicine at the University of Pennsylvania and a staff physician at the Corporal Michael J. Crescenz VA Medical Center. "And it's not just high-risk patients. It nearly doubled the number of conversations for patients who weren't flagged--which tells us it's eliciting a positive cultural change across the clinics to have more of these talks."

Christopher Manz, MD, of the Dana Farber Cancer Institute, who was a fellow in the Penn Center for Cancer Care Innovation at the time of the study, serves as co-lead author.

In a separate JAMA Oncology study published in September, the research team validated the Penn Medicine-developed machine learning tool's effectiveness at predicting short-term mortality in patients in real time, using clinical data from the electronic health record (EHR). The algorithm considers more than 500 variables--age, hospitalizations, and co-morbidities, for example--from patient records, all the way up until their appointment. That's one of the advantages of using the EHR to identify patients who may benefit from a timely conversation: it works in real time, as opposed to using claims or other types of historical data to make predictions.
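As a schematic of how such a risk-flagging step can be wired into a workflow, the sketch below trains a generic classifier on synthetic stand-in features and flags encounters above a risk threshold. The model, features and threshold are illustrative assumptions, not Penn Medicine's actual algorithm.

    # Hypothetical sketch of flagging high-risk patients from EHR-style features.
    # The model, synthetic features and threshold are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))          # stand-in for ~500 EHR variables
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.5).astype(int)

    model = GradientBoostingClassifier().fit(X, y)

    def flag_for_conversation(features, threshold=0.10):
        """Return True if predicted short-term mortality risk exceeds the
        threshold, which would trigger a nudge (text/email) to the clinician."""
        risk = model.predict_proba(features.reshape(1, -1))[0, 1]
        return risk >= threshold

    print(flag_for_conversation(X[0]))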

This latest trial combined that algorithm with a behavioral nudge, including texts, emails, or notifications to the clinical team, to determine its ability to both identify patients and prompt conversations around end-of-life planning. The study--which included 14,607 patients and 78 physicians across nine oncology clinics in the University of Pennsylvania Health System--was conducted between June 2019 and November 2019.

Among patients with a high-predicted mortality risk, conversations in the intervention group occurred in 304 out of 1,999 patient encounters (15.2 percent) compared to 77 out of 2,125 in the control group (3.6 percent). Even when patients were not flagged as high-risk, clinicians in the trial engaged more in these conversations. Among all patient encounters, serious illness conversations occurred in 155 out of 12,170 encounters (1.3 percent) in the control group, while conversations in the intervention group occurred in 632 out of 13,889 encounters (4.6 percent).

"We've taken an algorithm from retrospective validation to real-time validation to actually testing it in the clinic to see if it can shape patient care," said Parikh, who is also part of the Penn Center for Cancer Care Innovation. "Because of its success, I think we've provided of a road map for other institutions that may be thinking of using analytics to drive important behaviors."

The machine learning tool continues to be used in Penn Medicine oncology clinics, and it further proved its value during the COVID-19 pandemic. The rates of serious illness conversations remained high after the trial ended, even though many of those conversations took place online through much of 2020, when many clinical visits had to occur through telemedicine to ensure patient safety.

"This is one of the first applications of combing behavioral nudges with machine learning methods in clinical care," said senior author Mitesh S. Patel, MD, director of the Penn Medicine Nudge Unit and an associate professor of Medicine in the Perelman School of Medicine at the University of Pennsylvania and a staff physician at the Corporal Michael J. Crescenz VA Medical Center. "There are many opportunities build upon this work and apply it to other aspects of cancer care and to other areas of medicine."

Credit: 
University of Pennsylvania School of Medicine

Monash engineers improve fatigue life of high strength aluminium alloys by 25 times

Monash University engineers have demonstrated improvements in the fatigue life of high strength aluminium alloys by 25 times - a significant outcome for the global transport industry.

Aluminium alloys are light, non-magnetic and have great corrosion resistance. But, their fatigue properties are poor.

Researchers were able to make aluminium alloy microstructures that can heal while in operation.

A world-first study by Monash University engineers has demonstrated improvements in the fatigue life of high strength aluminium alloys by 25 times - a significant outcome for the transport manufacturing industry.

Published today (Thursday 15 October 2020) in the prestigious journal Nature Communications, researchers demonstrated that the poor fatigue performance of high strength aluminium alloys was because of weak links called 'precipitate free zones' (PFZs).

The team led by Professor Christopher Hutchinson, a Professor of Materials Science and Engineering at Monash University in Australia, was able to make aluminium alloy microstructures that can heal the weak links while in operation (i.e. a form of self-healing).

The improvement in fatigue lifetime could be as much as 25 times that of current state-of-the-art high strength aluminium alloys.

Aluminium alloys are the second most popular engineering alloy in use today. Compared to steel, they are light (1/3 of the density), non-magnetic and have excellent corrosion resistance.

Aluminium alloys are important for transport applications because they are light, which improves fuel efficiency. But, their fatigue properties are notoriously poor compared to steel of similar strength.

Professor Hutchinson said that when aluminium alloys are used for transport, the design must compensate for their fatigue limitations. This means more material is used than manufacturers would like, and the structures end up heavier than intended.

"Eighty per cent of all engineering alloy failures are due to fatigue. Fatigue is failure due to an alternating stress and is a big deal in the manufacturing and engineering industry," Professor Hutchinson said.

"Think of taking a metal paperclip in your hands and trying to break the metal. One cannot. However, if you bend it one way, then the other, and back and forth a number of times, the metal will break.

"This is 'failure by fatigue' and it's an important consideration for all materials used in transport applications, such as trains, cars, trucks and planes."

Failure by fatigue occurs in stages. The alternating stress leads to microplasticity (permanent deformation under stress) and the accumulation of damage in the form of localised plasticity at the weak links in the material.

The plastic localisation catalyses a fatigue crack. This crack grows and leads to final fracture.

Using commercially available AA2024, AA6061 and AA7050 aluminium alloys, researchers used the mechanical energy imparted into the materials during the early cycles of fatigue to heal the weak points in the microstructure (the PFZs).

This strongly delayed the localisation of plasticity and the initiation of fatigue cracks, resulting in enhanced fatigue lives and strengths.

Professor Hutchinson said these findings could be significant for the transport manufacturing industry as the demand for fuel efficient, lightweight and durable aircraft, cars, trucks and trains continues to grow.

"Our research has demonstrated a conceptual change in the microstructural design of aluminium alloys for dynamic loading applications," he said.

"Instead of designing a strong microstructure and hoping it remains stable for as long as possible during fatigue loading, we recognised that the microstructure will be changed by the dynamic loading and, hence, designed a starting microstructure (that may have lower static strength) that will change in such a way that its fatigue performance is significantly improved.

"In this respect, the structure is trained and the training schedule is used to heal the PFZs that would otherwise represent the weak points. The approach is general and could be applied to other precipitate hardened alloys containing PFZs for which fatigue performance is an important consideration."

Credit: 
Monash University

Remembering novelty

image: Layers of the hippocampus. The mossy cell bodies are red, interneuron cells connecting neurons are green, and other neurons are blue.

Image: 
© Ryuichi Shigemoto

Imagine going to a café you have never been to. You will remember this new environment, but when you visit it again and again, fewer new memories about it will form; only the things that have changed will be truly memorable. How such long-term memories are regulated is still not fully understood. Ryuichi Shigemoto from the Institute of Science and Technology Austria (IST Austria), in cooperation with researchers from Aarhus University and the National Institute for Physiological Sciences in Japan, has now uncovered a new keystone in the formation of memories. In their study published in Current Biology, they investigated a signaling pathway in the hippocampus area of the brain and showed how it controls the formation of new memories about new environments.

The hippocampus is a central area in the brain that plays an important role in transferring information from the short-term memory to the long-term memory. Of the many interlocking parts of the hippocampus, the researchers focused on the connection between the so-called mossy cells that receive novelty signals of sensory input about the environment and the so-called granule cells to which this information is relayed. In diseases like Alzheimer's this part of the brain is one of the first ones affected.

Watching Neurons

The scientists used four different approaches to rigorously investigate these new findings. First, they put the hippocampus under the microscope and studied how the mossy cells are connected to the granule cells, revealing their many complex connections.

Second, they used calcium imaging, a technique that allows live monitoring of neuron activity because the genetically modified cells light up when activated. When the animals were exposed to a new environment for several days, the activity of the mossy cells sending signals to the granule cells was high at first and then gradually declined. When the mice were then put into another new environment, the activity sprang up again, showing that these neurons are specifically relevant for processing new environmental input.

Third, the researchers followed the traces left in the neurons by these signals. Neural activity in these cells triggers the expression of a certain gene, meaning the production of the protein it encodes. The more activity there was, the more of this protein could be found afterwards. In the granule cells, the researchers found production of this protein that correlated with the activity of the mossy cells.

Dreading New Places

And lastly, the scientists used behavioral studies to see the effects of this pathway in the hippocampus on memory formation. This is especially important, because the connection between memory formation and behavior can tell them a lot about the brain's functions. They combined a negative sensory stimulus, a small electric shock, with putting the animals in a new environment. The mice then quickly learned to associate the new environment with unpleasant feelings and their negative reaction of freezing on the spot was measured.

When the researchers used drugs to inhibit the activity of the mossy cells--the ones receiving the signals about the new environment--and then did the negative conditioning, the mice did not remember the connection between the new environment and the unpleasant feeling. Additionally, when the animals were first accustomed to the new environment and then conditioned, there was also no activation of the mossy cells, and therefore no association between the environment and the shocks.

On the other hand, if the mossy cells were artificially activated, this association could be formed even after the animals were already used to the new environment. This clearly shows how the mossy cells in the hippocampus react to novel input and trigger the formation of new long-term memories in mice.

Next Steps in Understanding

Whether the exact same processes happen in the human brain is still an open question, but these new findings are an important first step in understanding this part of our most complex organ. Ryuichi Shigemoto and his collaborators are conducting fundamental research that may one day help to address degenerative brain diseases that affect memory formation, but this is still a while away.

He cautiously states: "This research field is very competitive and new findings arise quickly. There are many disputed mechanisms on memory formation, but our findings corroborate an existing hypothesis and are very robust, thus opening up a new field of neuroscience research and furthering our understanding of the brain."

Animal welfare

Understanding how the brain stores and processes information is only possible by studying the brains of animals while they carry out specific behaviors. No other methods, such as in vitro or in silico models, can serve as alternatives. The animals were raised, kept and treated according to the strict regulations of Austrian law.

Credit: 
Institute of Science and Technology Austria

Star clusters are only the tip of the iceberg

image: A panoramic view of the nearby Alpha Persei star cluster and its corona. The member stars in the corona are invisible. These are only revealed thanks to the combination of precise measurements with the ESA Gaia satellite and innovative machine learning tools

Image: 
© Stefan Meingast, made with Gaia Sky

"Clusters form big families of stars that can stay together for large parts of their lifetime. Today, we know of roughly a few thousand star clusters in the Milky Way, but we only recognize them because of their prominent appearance as rich and tight groups of stars. Given enough time, stars tend to leave their cradle and find themselves surrounded by countless strangers, thereby becoming indistinguishable from their neighbours and hard to identify" says Stefan Meingast, lead author of the paper published in Astronomy & Astrophysics. "Our Sun is thought to have formed in a star cluster but has left its siblings behind a long time ago" he adds.

Thanks to the ESA Gaia spacecraft's precise measurements, astronomers at the University of Vienna have now discovered that what we call a star cluster is only the tip of the iceberg of a much larger and often distinctly elongated distribution of stars.

"Our measurements reveal the vast numbers of sibling stars surrounding the well-known cores of the star clusters for the first time. It appears that star clusters are enclosed in rich halos, or coronae, more than 10 times as large as the original cluster, reaching far beyond our previous guesses. The tight groups of stars we see in the night sky are just a part of a much larger entity" says Alena Rottensteiner, co-author and master student at the University of Vienna. "There is plenty of work ahead revising what we thought were basic properties of star clusters, and trying to understand the origin of the newfound coronae."

To find the lost star siblings, the research team developed a new method that uses machine learning to trace groups of stars which were born together and move jointly across the sky. The team analyzed 10 star clusters and identified thousands of siblings far away from the center of the compact clusters, yet clearly belonging to the same family. An explanation for the origin of these coronae remains uncertain, yet the team is confident that their findings will redefine star clusters and aid our understanding of their history and evolution across cosmic time.
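The general idea of such a search can be illustrated with a small sketch that clusters stars sitting close together in position and velocity space. The catalogue, scaling and clustering parameters below are invented for illustration and are not the Vienna team's pipeline.

    # Hypothetical sketch: group stars that are close in both position and
    # velocity, using Gaia-like astrometry. The synthetic catalogue, scaling
    # and DBSCAN parameters are illustrative assumptions only.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # Fake catalogue: [x_pc, y_pc, z_pc, v_x, v_y, v_z] for field stars...
    field = rng.normal(0, 50, size=(2000, 6))
    # ...plus a compact co-moving group offset in position and velocity.
    group = rng.normal(0, 3, size=(200, 6)) + np.array([40, -20, 10, 5, -3, 2])
    stars = np.vstack([field, group])

    X = StandardScaler().fit_transform(stars)
    labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(X)
    print("groups found:", set(labels) - {-1})   # -1 marks unclustered stars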

"The star clusters we investigated were thought to be well-known prototypes, studied for more than a century, yet it seems we have to start thinking bigger. Our discovery will have important implications for our understanding of how the Milky Way was built, cluster by cluster, but also implications for the survival rate of proto-planets far from the sterilizing radiation of massive stars in the centers of clusters", says João Alves, Professor of Stellar Astrophysics at the University of Vienna and a co-author of the paper. "Dense star clusters with their massive but less dense coronae might not be a bad place to raise infant planets after all."

Credit: 
University of Vienna

Scientists propose potential method for imaging-guided synergistic cancer therapy

image: Graphic of CNQD-CN for NIR imaging and combined chemotherapy and phototherapy to treat cancer

Image: 
LIU Hongji

A joint research team led by Prof. WANG Hui and Prof. LIN Wenchu from the High Magnetic Field Laboratory of the Hefei Institutes of Physical Science developed a synthesis of metal-free multifunctional therapeutic reagents, called graphitic carbon nitride quantum dots embedded in carbon nanosheets (CNQD-CN), via a one-step hydrothermal treatment.

Metal-free multifunctional nanomaterials have broad application prospects in integrated cancer diagnosis and treatment.

The team took an organic solvent (formamide) as the carbon and nitrogen source and then developed CNQD-CN.

CNQD-CN can be utilized as a near-infrared (NIR)/pH dual-responsive drug delivery system to improve the response to chemotherapy. It possesses both light-to-heat conversion and singlet oxygen generation capabilities under a single NIR excitation wavelength for combined photodynamic and photothermal therapy.

"The combination of graphitic carbon nitride quantum dots and two-dimensional carbon-based nanomaterials might be a potential candidate for realizing imaging-guided synergistic cancer therapy due to its excellent performance, including optical properties, efficient light-to-heat conversion capability, and near-infrared (NIR)-induced singlet oxygen generation," said WANG Hui, who designed the project.

However, the synthesis of related nanocomposites has typically required multiple reaction precursors and complex synthesis processes, suffered from potentially weak interactions between components, and produced large amounts of waste, thus limiting scalable production and reproducibility.

This study was a further step in realizing the advantages of imaging-guided synergistic cancer therapy.

Credit: 
Chinese Academy of Sciences Headquarters

NTU Singapore scientists develop 'mini-brains' to help robots recognize pain and to self-repair

video: Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed a way for robots to recognise pain and to self-repair when damaged. This video demonstrates how the robot is taught to recognise pain and learn damaging stimuli.

Image: 
NTU Singapore

Using a brain-inspired approach, scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed a way for robots to have the artificial intelligence (AI) to recognise pain and to self-repair when damaged.

The system has AI-enabled sensor nodes to process and respond to 'pain' arising from pressure exerted by a physical force. The system also allows the robot to detect and repair its own damage when minorly 'injured', without the need for human intervention.

Currently, robots use a network of sensors to generate information about their immediate environment. For example, a disaster rescue robot uses camera and microphone sensors to locate a survivor under debris and then pulls the person out with guidance from touch sensors on their arms. A factory robot working on an assembly line uses vision to guide its arm to the right location and touch sensors to determine if the object is slipping when picked up.

Today's sensors typically do not process information but send it to a single large, powerful, central processing unit where learning occurs. As a result, existing robots are usually heavily wired, which results in delayed response times. They are also susceptible to damage that requires maintenance and repair, which can be long and costly.

The new NTU approach embeds AI into the network of sensor nodes, connected to multiple small, less powerful processing units that act like 'mini-brains' distributed on the robotic skin. This means learning happens locally, and the wiring requirements and response time for the robot are reduced five to ten times compared with conventional robots, say the scientists.
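A purely conceptual sketch of the difference between central and node-local processing is shown below; the thresholds and toy adaptation rule are illustrative assumptions, not the NTU hardware or its learning rule.

    # Conceptual sketch: each skin 'node' processes its own pressure signal and
    # only reports an event, instead of streaming raw data to a central brain.
    # The threshold and toy adaptation rule are illustrative assumptions.
    class SkinNode:
        def __init__(self, pain_threshold=5.0):
            self.pain_threshold = pain_threshold

        def sense(self, pressure):
            """Local decision: classify the stimulus and adapt the threshold."""
            if pressure >= self.pain_threshold:
                self.pain_threshold *= 0.95   # become slightly more sensitive
                return "pain"
            return "touch"

    nodes = [SkinNode() for _ in range(16)]   # 'mini-brains' across the skin
    readings = [0.5] * 15 + [9.0]             # one node gets a hard poke
    events = [n.sense(p) for n, p in zip(nodes, readings)]
    print(events.count("pain"), "node(s) reported pain")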

Combining the system with a type of self-healing ion gel material means that the robots, when damaged, can recover their mechanical functions without human intervention.

The breakthrough research by the NTU scientists was published in the peer-reviewed scientific journal Nature Communications in August.

Co-lead author of the study, Associate Professor Arindam Basu from the School of Electrical & Electronic Engineering said, "For robots to work together with humans one day, one concern is how to ensure they will interact safely with us. For that reason, scientists around the world have been finding ways to bring a sense of awareness to robots, such as being able to 'feel' pain, to react to it, and to withstand harsh operating conditions. However, the complexity of putting together the multitude of sensors required and the resultant fragility of such a system is a major barrier for widespread adoption."

Assoc Prof Basu, who is a neuromorphic computing expert added, "Our work has demonstrated the feasibility of a robotic system that is capable of processing information efficiently with minimal wiring and circuits. By reducing the number of electronic components required, our system should become affordable and scalable. This will help accelerate the adoption of a new generation of robots in the marketplace."

Robust system enables 'injured' robot to self-repair

To teach the robot how to recognise pain and learn damaging stimuli, the research team fashioned memtransistors, which are 'brain-like' electronic devices capable of memory and information processing, as artificial pain receptors and synapses.

Through lab experiments, the research team demonstrated how the robot was able to learn to respond to injury in real time. They also showed that the robot continued to respond to pressure even after damage, proving the robustness of the system.

When 'injured' with a cut from a sharp object, the robot quickly loses mechanical function. But the molecules in the self-healing ion gel begin to interact, causing the robot to 'stitch' its 'wound' together and to restore its function while maintaining high responsiveness.

First author of the study, Rohit Abraham John, who is also a Research Fellow at the School of Materials Science & Engineering at NTU, said, "The self-healing properties of these novel devices help the robotic system to repeatedly stitch itself together when 'injured' with a cut or scratch, even at room temperature. This mimics how our biological system works, much like the way human skin heals on its own after a cut.

"In our tests, our robot can 'survive' and respond to unintentional mechanical damage arising from minor injuries such as scratches and bumps, while continuing to work effectively. If such a system were used with robots in real world settings, it could contribute to savings in maintenance."

Associate Professor Nripan Mathews, who is co-lead author and from the School of Materials Science & Engineering at NTU, said, "Conventional robots carry out tasks in a structured programmable manner, but ours can perceive their environment, learning and adapting behaviour accordingly. Most researchers focus on making more and more sensitive sensors, but do not focus on the challenges of how they can make decisions effectively. Such research is necessary for the next generation of robots to interact effectively with humans.

"In this work, our team has taken an approach that is off-the-beaten path, by applying new learning materials, devices and fabrication methods for robots to mimic the human neuro-biological functions. While still at a prototype stage, our findings have laid down important frameworks for the field, pointing the way forward for researchers to tackle these challenges."

Building on their previous body of work on neuromorphic electronics such as using light-activated devices to recognise objects, the NTU research team is now looking to collaborate with industry partners and government research labs to enhance their system for larger scale application.

Credit: 
Nanyang Technological University

What was responsible for the hottest spring in eastern China in 2018?

image: Schematic diagram of possible causes of the hottest spring in eastern China in 2018

Image: 
Chunhui Lu

The spring of 2018 was the hottest on record since 1951 over eastern China. This record-breaking temperature event caused drought, warm winds and serious impacts on agriculture, plant phenology, electricity transmission systems and human health. Both human-induced global warming and anomalous circulation increased the chance of this extreme high-temperature event, according to a new attribution analysis by Dr. Chunhui Lu and Prof. Ying Sun of China Meteorological Administration and their collaborators from the UK Met Office.

The authors published their results in Advances in Atmospheric Sciences, with a call to action to address the increasing number of extreme high-temperature events in China. During this hottest spring, over 900 stations reached their record spring mean temperature; the daily maximum temperatures at 900 stations were higher than 35°C, with the maximum value (41.7°C) observed in Zhejiang Province; and tropical nights (daily minimum temperature > 25°C) appeared in May at 62 stations over eastern China for the first time since meteorological observations began in the early 1950s.

The researchers used a relatively new attribution method to estimate the quantitative contributions of anthropogenic forcing and circulation to the probability of 2018-like high-temperature events. The newly available data from the Hadley Centre event attribution system provided large-ensemble runs and allowed the researchers to estimate the probabilities of a 2018-like event: (1) for springs with circulation patterns highly and weakly correlated with that of spring 2018; and (2) with and without the effect of anthropogenic influence.

"Quantitative estimates of the probability ratio show that anthropogenic forcing may have increased the chance of this event by ten-fold, while the anomalous circulation increased it by approximately two-fold," says Prof. Sun. "The persistent anomalous anticyclonic circulation located on the north side of China blocked the air with lower temperature from high latitudes into eastern China, which is the direct dynamical cause. Global warming provides a favorable climate background for the occurrence of this extreme high temperature event."

The researchers also compared the AMIP-based results with those derived from a CMIP-type model--the Canadian Earth System Model. The results showed similar quantitative contributions from human activities and circulation influences, suggesting that the findings are robust.

"Future work on the event attribution system may be needed to allow us to conduct attribution studies of extreme events more promptly and efficiently," Prof. Sun adds.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Automatic decision-making prevents us harming others - new study

The processes our brains use to avoid harming other people are automatic and reflexive - and quite different from those used when avoiding harm to ourselves, according to new research.

A team based in the Universities of Birmingham and Oxford in the UK and Yale University in the US investigated the different approaches to avoiding pain for the first time. They found that when learning to avoid harming ourselves, our decision-making tends to be more forward-looking and deliberative.

The findings, published in Proceedings of the National Academy of Sciences, could shed light on disorders such as psychopathy where individuals experience problems learning or making choices to avoid harming others.

"The ability to learn which of our actions help to avoid harming others is fundamental for our well-being - and for societal cohesion," said Dr Patricia Lockwood, Senior Research Fellow in the Centre for Human Brain Health at the University of Birmingham. "Many of our decisions have an impact on other people, and we are often faced with choices where we need to learn and decide what will help others and stop them from being harmed."

The experiment carried out by the team involved scanning the brains of a cohort of 36 participants (18 men and 16 women), while they were asked to make a series of decisions. Participants had to learn which decisions would lead to a painful electric shock being delivered either to themselves or to another individual.

Researchers found a striking difference between the two decision-making processes. They found that individuals made automatic, efficient choices when learning to avoid harming others. However, when learning to avoid harming themselves, choices were more deliberative. People were willing to repeat choices that had previously led to harm if they thought it would produce better results in the future.
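One common way to model this kind of trial-and-error avoidance learning is a simple prediction-error update. The sketch below is a generic illustration with assumed parameters, not the computational model used in the paper.

    # Generic prediction-error (Rescorla-Wagner style) sketch of learning which
    # of two options leads to a shock. All parameters are illustrative.
    import random

    def simulate(learning_rate=0.3, trials=60, shock_prob=(0.8, 0.2)):
        """Learn shock expectations for two options and pick the safer one."""
        value = [0.5, 0.5]                     # expected shock probability
        shocks = 0
        for _ in range(trials):
            choice = value.index(min(value))   # pick the option expected to be safer
            shock = 1 if random.random() < shock_prob[choice] else 0
            value[choice] += learning_rate * (shock - value[choice])
            shocks += shock
        return value, shocks

    random.seed(0)
    print(simulate())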

The team was also able to identify specific areas of the brain that are involved in these different decision-making processes. They found the thalamus - a small structure located just above the brain stem that has a role in pain processing - was more active when people were successfully avoiding harm to others. In contrast, connections elsewhere in the brain that are important for learning became stronger when people chose to repeat an action that harmed someone else. The same connections were not present when people repeated an action that harmed themselves, suggesting that different brain systems are involved.

Senior author Dr Molly Crockett, Assistant Professor of Psychology at Yale University, added: "Our findings suggest that the brain's learning systems are primed to avoid directly harming others. In the modern world, of course, many social harms are indirect: our choices might support the manufacture of unethical products or accelerate climate change. How people learn about the more distant moral consequences of their actions is an important question for future study."

Credit: 
University of Birmingham

LiU researchers first to develop an organic battery

image: Mikhail Vagin, principal research engineer, and PhD student Penghui Ding, Laboratory of Organic Electronics.

Image: 
Thor Balkhed

Researchers at the Laboratory of Organic Electronics, Linköping University, have for the first time demonstrated an organic battery. It is of a type known as a "redox flow battery", with a large capacity that can be used to store energy from wind turbines and solar cells, and as a power bank for cars.

Redox flow batteries are stationary batteries in which the energy is located in the electrolyte, outside of the cell itself, as in a fuel cell. They are often marketed with the prefix "eco", since they open the possibility of storing excess energy from, for example, the sun and wind. Further, it appears to be possible to recharge them an unlimited number of times. However, redox flow batteries often contain vanadium, a scarce and expensive metal. The electrolyte in which energy is stored in a redox flow battery can be water-based, which makes the battery safe to use, but results in a lower energy density.

Mikhail Vagin, principal research engineer, and his colleagues at the Laboratory of Organic Electronics, Campus Norrköping, have now succeeded in producing not only a water-based electrolyte but also electrodes of organic material, which increases the energy density considerably. It is possible in this way to manufacture completely organic redox flow batteries for the storage of, for example, energy from the sun and wind, and to compensate for load variation in the electrical supply grid.

They have used the conducting polymer PEDOT for the electrodes, which they have doped to transport either positive ions (cations) or negative ions (anions). The water-based electrolyte they have developed consists of a solution of quinone molecules, which can be extracted from forest-based materials.

"Quinones can be derived from wood, but here we have used the same molecule, together with different variants of the conducting polymer PEDOT. It turns out that they are highly compatible with each other, which is like a gift from the natural world", says Viktor Gueskine, principal research engineer in the Laboratory of Organic Electronics, and one of the authors of the article now published in Advanced Functional Materials.

The high compatibility means that the PEDOT electrodes help the quinone molecules switch between their oxidised and their reduced states, and in this way create a flow of protons and electrons.
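The underlying electrochemistry is the generic quinone/hydroquinone couple, shown schematically below; the particular quinone derivative used in the study determines the actual potentials.

    \mathrm{Q} + 2\,\mathrm{H}^{+} + 2\,e^{-} \;\rightleftharpoons\; \mathrm{QH_{2}}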

"It is normally difficult to control the ion process, but we have managed it here. We also use a fundamental phenomenon within electrocatalysis in which one special ion in solution, in this case quinone ions, is converted to electricity. The phenomenon is conceptualised by us as ion-selective electrocatalysis, and probably exists in other types of membrane storage devices such as batteries, fuel cells and supercapacitors. This effect has never previously been discussed. We showed it for the first time in redox flow batteries", says Mikhail Vagin.

The organic redox flow batteries still have a lower energy density than batteries that contain vanadium, but they are extremely cheap, completely recyclable, safe, and perfect for storing energy and compensating for load variations in the electrical supply grid. Maybe in the future we will have an organic redox flow battery at home, as a power bank for the electric car.

Credit: 
Linköping University

Sprinkled with power: How impurities enhance a thermoelectric material at the atomic level

image: HAXPES helps directly investigate electronic states deep within the bulk of the material without unwanted influence from surface oxidation.

Image: 
Dr Masato Kotsugi, Tokyo University of Science

In the search for solutions to ever-worsening environmental problems, such as the depletion of fossil fuels and climate change, many have turned to the potential of thermoelectric materials to generate power. These materials exhibit what is known as the thermoelectric effect, which creates a voltage difference when there is a temperature gradient between the material's sides. This phenomenon can be exploited to produce electricity using the enormous amount of waste heat that human activity generates, such as that from automobiles and thermal power plants, thereby providing an eco-friendly alternative to satisfy our energy needs.

Magnesium silicide (Mg2Si) is a particularly promising thermoelectric material with a high "figure of merit" (ZT)--a measure of its conversion performance. Though scientists previously noted that doping Mg2Si with a small amount of impurities improves its ZT by increasing its electrical conductivity and reducing its thermal conductivity, the underlying mechanisms behind these changes were unknown--until now.
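For reference, the dimensionless figure of merit is conventionally defined as

    ZT = \frac{S^{2}\sigma}{\kappa}\,T

where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity and T the absolute temperature, so raising the electrical conductivity or lowering the thermal conductivity both push ZT up.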

In a recent joint study published as a featured article in Applied Physics Letters, scientists from Tokyo University of Science (TUS), the Japan Synchrotron Radiation Research Institute (JASRI), and Shimane University, Japan, teamed up to uncover the mysteries behind the improved performance of Mg2Si doped with antimony (Sb). Dr Masato Kotsugi from TUS, who is corresponding author of the study, explains their motivation: "Although it has been found that Sb impurities increase the ZT of Mg2Si, the resulting changes in the local structure and electronic states that cause this effect have not been elucidated experimentally. This information is critical to understanding the mechanisms behind thermoelectric performance and improving the next generation of thermoelectric materials."

But how could they analyze the effects of Sb impurities on Mg2Si at the atomic level? The answer lies in extended X-ray absorption fine structure (EXAFS) analysis and hard X-ray photoelectron spectroscopy (HAXPES), as Dr Masato Kotsugi and Mr Tomoyuki Kadono, who is first author of the study, explain: "EXAFS allows us to identify the local structure around an excited atom and has strong sensitivity toward dilute elements (impurities) in the material, which can be precisely identified through fluorescence measurements. On the other hand, HAXPES lets us directly investigate electronic states deep within the bulk of the material without unwanted influence from surface oxidation." Such powerful techniques, however, are not performed using run-of-the-mill equipment. The experiments were conducted at SPring-8, one of the world's most important large X-ray synchrotron radiation facilities, with the help of Dr Akira Yasui and Dr Kiyofumi Nitta from JASRI.

The scientists complemented these experimental methods with theoretical calculations to shed light on the exact effects of the impurities in Mg2Si. These theoretical calculations were carried out by Dr Naomi Hirayama of Shimane University. "Combining theoretical calculations with experimentation is what yielded unique results in our study," she says.

The scientists found that Sb atoms take the place of Si atoms in the Mg2Si crystal lattice and introduce a slight distortion in the interatomic distances. This could promote a phenomenon called phonon scattering, which reduces the thermal conductivity of the material and in turn increases its ZT. Moreover, because Sb atoms contain one more valence electron than Si, they effectively provide additional charge carriers that bridge the gap between the valence and conduction bands; in other words, Sb impurities unlock energy states that ease the energy jump required by electrons to circulate. As a result, the electrical conductivity of doped Mg2Si increases, and so does its ZT.

This study has greatly deepened our understanding of doping in thermoelectric materials, and the results should serve as a guide for innovative materials engineering. Dr Tsutomu Iida, lead scientist in the study, says: "In my vision of the future, waste heat from cars is effectively converted into electricity to power an environment-friendly society." Fortunately, we might just be one step closer to fulfilling this dream.

Credit: 
Tokyo University of Science

Researchers step toward understanding how toxic PFAS chemicals spread from release sites

PROVIDENCE, R.I. [Brown University] -- A study led by Brown University researchers sheds new light on how pollutants found in firefighting foams are distributed in water and surface soil at release sites. The findings could help researchers to better predict how pollutants in these foams spread from the spill or release sites -- fire training areas or airplane crash sites, for example -- into drinking water supplies.

Firefighting foams, also known as aqueous film forming foams (AFFF), are often used to combat fires involving highly flammable liquids like jet fuel. The foams contain a wide range of per- and polyfluoroalkyl substances (PFAS) including PFOA, PFOS and FOSA. Many of these compounds have been linked to cancer, developmental problems and other conditions in adults and children. PFAS are sometimes referred to as "forever chemicals" because they are difficult to break down in the environment and can lead to long-term contamination of soil and water supplies.

"We're interested in what's referred to as the fate and transport of these chemicals," said Kurt Pennell, a professor in Brown's School of Engineering and co-author of the research. "When these foams get into the soil, we want to be able to predict how long it's going to take to reach a water body or a drinking water well, and how long the water will need to be treated to remove the contaminants."

It had been shown previously that PFAS compounds tend to accumulate at interfaces between water and other substances. Near the surface, for example, PFAS tend to collect at the air-water interface -- the moist but unsaturated soil at the top of an aquifer. However, prior experiments showing this interface activity were conducted only with individual PFAS compounds, not with complex mixtures of compounds like firefighting foams.

"You can't assume that PFOS or PFOA alone are going to act the same way as a mixture with other compounds," said Pennell, who is also a fellow at the Institute at Brown for Environment and Society. "So this was an effort to try to tease out the differences between the individual compounds, and to see how they behave in these more complex mixtures like firefighting foams."

Using a series of laboratory experiments described in the journal Environmental Science and Technology, Pennell and his colleagues showed that the firefighting foam mixture does indeed behave much differently than individual compounds. The research showed that the foams had a far greater affinity for the air-water interface than individual compounds. The foams had more than twice the interface activity of PFOS alone, for example.

Pennell says that insights like these can help researchers to model how PFAS compounds migrate from contaminated sites.

"We want to come up with the basic equations that describe the behavior of these compounds in the lab, then incorporate those equations into models that can be applied in field," Pennell said. "This work is the beginning of that process, and we'll scale it up from here."

Ultimately, the hope is that a better understanding of the fate and transport of these compounds could help to identify wells and waterways at risk for contamination, and aid in cleaning those sites up.

Credit: 
Brown University

Removal of dairy cows from the United States may reduce essential nutrient supply with little effect on greenhouse gas emissions

Philadelphia, October 15, 2020 - The US dairy industry contributes roughly 1.58 percent of the total US greenhouse gas emissions; however, it also supplies the protein requirements of 169 million people, calcium requirements of 254 million people, and energy requirements of 71.2 million people. A suggested solution to increasing food production worldwide while reducing greenhouse gas emissions has been to eliminate or reduce animal production in favor of plant production. In an article appearing in the Journal of Dairy Science, scientists from Virginia Tech and the US Dairy Forage Research Center studied the effects of dairy product removal on greenhouse gas emissions and nutrient availability in US diets under various removal scenarios.

The authors of this study assessed three removal scenarios--depopulation, current management (export dairy), and retirement. In depopulation, consumers would stop consuming dairy products, resulting in depopulation of the animals; in current management (export dairy), the cattle management would remain the same and milk produced would be used for products other than human food or exported for human consumption; in retirement, the cattle would be retired to a pasture-based system but reduced to numbers that could be supported by available pastureland.

"Land use was a focus in all animal removal scenarios because the assumptions surrounding how to use land made available if we remove dairy cattle greatly influence results of the simulations," said lead investigator Robin R. White, PhD, Department of Animal and Poultry Science, Virginia Tech, Blacksburg, VA, USA. "If dairy cattle are no longer present in US agriculture, we must consider downstream effects such as handling of pasture and grain land previously used for producing dairy feed, disposition of byproduct feeds, and sourcing fertilizer."

Greenhouse gas emissions were unchanged in the current management (export dairy) scenario, with a decrease in nutrient supplies, as expected. Emissions declined 11.97 percent for the retired scenario and 7.2 percent for the depopulation scenario compared to current emissions. All 39 nutrients considered in human diet quality were decreased for the retired scenario, and although 30 of 39 nutrients increased for the depopulation scenario, several essential nutrients declined.

The results of the study suggest that the removal of dairy cattle from US agriculture would only reduce greenhouse gas emissions by 0.7 percent and lower the available supply of essential nutrients for the human population.

Professor White added, "Production of some essential nutrients, such as calcium and many vitamins, decreased under all reallocation scenarios that decreased greenhouse gas emissions, making the dairy removal scenarios suboptimal for feeding the US population."

This study illustrates the difficulties in increasing supplies of critically limiting nutrients while decreasing greenhouse gas emissions.

Credit: 
Elsevier