Tech

The ultimate conditions to get the most out of high-nickel batteries

image: The effects of ambient air storage on the surface of NMC-811

Image: 
WMG, University of Warwick

The automotive industry has become increasingly interested in the use of high-Ni (nickel) batteries for electric vehicles. However, high-Ni cathodes, the key component of these batteries, are prone to reactivity and instability when exposed to humidity.

Researchers from WMG, University of Warwick have investigated the best way to store Ni-rich cathodes to mitigate degradation and improve performance.

Storage in humid or ambient conditions results in premature capacity fade of the battery, whereas the best conditions are dry storage at dew points of around -45 °C.

It is common knowledge in battery manufacturing that many cathode materials are moisture sensitive. However, as the popularity of high nickel-based battery components increases, researchers from WMG, University of Warwick have found that the drier the conditions in which these cathodes are stored and processed, the better the resulting battery performance.

High-Ni (nickel) batteries are becoming increasingly popular worldwide, with more automotive companies investigating their use in electric vehicles. However, high-Ni cathode materials are prone to reactivity and instability when exposed to humidity, so how they are stored is crucial to achieving the best performance.

In the paper, 'The Effects of Ambient Storage Conditions on the Structural and Electrochemical Properties of NMC-811 Cathodes for Li-ion Batteries,' published in the journal Electrochimica Acta, researchers from WMG, University of Warwick propose the best way to store high-nickel cathodes in order to mitigate premature degradation.

Researchers exposed NMC-811 (a high-Ni cathode material) to different temperatures and humidities, then measured the material's performance and degradation in a battery over a 28-day period, analysing the samples using a combination of physical, chemical and electrochemical testing. This included high-resolution microscopy to identify the morphological and chemical changes that occurred at the micron and sub-micron scale during the battery's charging and discharging.

The storage conditions included vacuum oven-dried, as-exposed (to humidity) and a control. Researchers looked for surface impurities, including carbonates and H2O, and identified three processes that can be responsible for them:

Residual impurities emanating from unreacted precursors during synthesis

Higher equilibrium coverage of surface carbonates/hydroxides (present to stabilise the surface of Ni-rich materials after the synthesis process)

Impurities formed during ambient storage time

They found that both exposure conditions (oven-dried and as-exposed) showed inferior first-discharge specific capacity and cycling performance compared to the control. In the as-exposed samples, 28 days of ambient moisture exposure allowed H2O and CO2 to react with the Li+ ions in the battery cell, resulting in the formation of lithium carbonate and hydroxide species.

The formation of carbonates and oxides on the surface of NMC-811 contributes to the loss of electrochemical performance during ageing of the materials, owing to inferior ionic and electronic conductivity as well as electrical isolation of the active particles. This means they can no longer reversibly store lithium ions to convey "charge". SEM analysis confirmed inter-granular porosity and micro-cracks on these aggregate particles following the 28 days of ambient exposure.

The researchers therefore conclude that the driest conditions, at dew points of around -45 °C, are best for both storing and processing the materials in order to produce the best battery performance. Exposure to humidity at any point along the manufacturing process will degrade the materials and components, resulting in a shorter battery lifespan.

Dr Mel Loveridge from WMG at the University of Warwick comments:

"Whilst moisture is well known to be problematic here, we set about to determine the optimal storage conditions that are required to mitigate unwanted, premature degradation in battery performance. Such measures are critical to improve processing capability, and ultimately maintain performance levels. This is also of relevance to other Ni-rich systems e.g. NCA materials."

Professor Louis Piper from WMG at the University of Warwick adds:

"Considerable global research effort will continue to focus on these materials, including how to protect their surfaces to eliminate risks of parasitic reactions prior to incorporation into electrodes. In the UK, leading research by the Faraday Institution has a project consortium entirely devoted to unravelling the degradation mechanisms of such industry-relevant materials."

Credit: 
University of Warwick

The gut microbiota forms a molecule that can contribute to diabetes progression

image: Prof. Fredrik Bäckhed, University of Gothenburg and Prof. Karine Clément, Sorbonne University

Image: 
Photo: Johan Wingborg

It is the bacterial changes in the gut that increase the levels of imidazole propionate, the molecule that makes the body's cells resistant to insulin in type 2 diabetes. This result emerges from a European study, MetaCardis.

The gut and its bacteria are considered important in many diseases and several studies have shown that the gut microbiota affects the breakdown of several different parts of our diet. In previous research on gut microbiota and type 2 diabetes, the focus has often been on butyric acid-producing dietary fibers and their possible effects on blood sugar regulation and insulin resistance.

Previous research led by Fredrik Bäckhed, Professor of Molecular Medicine at the University of Gothenburg, demonstrated that diabetes can be linked to changes in the composition of intestinal bacteria, which increase the production of molecules that may contribute to the disease.

His group has shown that the altered intestinal microbiota leads to altered metabolism of the amino acid histidine, which in turn leads to increased production of imidazole propionate, the molecule that prevents the blood sugar lowering effects of insulin.

An article published in the journal Nature Communications now confirms the initial findings in a large European study with 1,990 subjects, which shows that patients with type 2 diabetes from Denmark, France and Germany also had increased levels of imidazole propionate in their blood.

"Our study clearly shows that imidazole propionate is elevated in type 2 diabetes in other populations as well" says Fredrik Bäckhed, and continues:

"The study also shows that the levels of imidazole propionate are elevated even before the diabetes diagnosis is established, in so-called prediabetes. This may indicate that imidazole propionate may contribute to disease progression."

The altered gut microbiota observed in people with type 2 diabetes contains fewer species than that of individuals with normal glucose tolerance, a feature also linked to other diseases. The researchers speculate that this leads to an altered metabolism of the amino acid histidine.

The EU-funded research collaboration MetaCardis has been led by Karine Clément, Professor of Nutrition at Sorbonne University and Assistance Publique-Hôpitaux de Paris, who also directs an INSERM research group in Paris.

"Interestingly enough, our findings suggest that it is the altered intestinal microbiota rather than the histidine intake in the diet that affects the levels of imidazole propionate". She continuous:

"An unhealthy diet also associates with increased imidazole propionate in individuals with type 2 diabetes".

One problem with research on microbiota and various diseases has been limited reproducibility. By studying the products that the bacteria produce, the metabolites, one focuses on the function of the bacteria rather than on the exact species in the intestine. Fredrik Bäckhed:

"The collaboration gave us unique opportunities to confirm preliminary findings that imidazole propionate can be linked to type 2 diabetes. Here we had the opportunity to analyze almost 2,000 samples and can thus determine that elevated levels of imidazole propionate can be linked to type 2 diabetes. As the levels are elevated even in prediabetes, imidazole propionate may also cause the disease in some cases, he says.

Credit: 
University of Gothenburg

New semiconductor coating may pave way for future green fuels

image: Photoelectrochemical cell that is used in the study to investigate semiconductor performance under rays of simulated sun.

Image: 
Sascha Ott

Hydrogen gas and methanol for fuel cells or as raw materials for the chemicals industry, for example, could be produced more sustainably using sunlight, a new Uppsala University study shows. In this study, researchers have developed a new coating material for semiconductors that may create new opportunities to produce fuels in processes that combine direct sunlight with electricity. The study is published in Nature Communications.

"We've moved a step closer to our goal of producing the fuel of the future from sunlight," says Sascha Ott, Professor at the Department of Chemistry, Uppsala University.

Today, hydrogen gas and methanol are produced mainly from fossil sources like oil or natural gas. An environmentally sounder, climate-friendlier option is to make these substances from water and carbon dioxide, using sustainable electricity, in what are known as electrolysers. This process requires electrical energy in the form of applied voltage.

The scientists have devised a new material that reduces the voltage needed in the process by using sunlight to supplement the electricity.

To capture the sunlight, they used semiconductors of the same type as those found in solar cells. The novel aspect of the study is that the semiconductors were covered with a new coating material that extracts electrons from the semiconductor when the sun is shining. These electrons are then available for fuel-forming reactions, such as production of hydrogen gas.

The coating is a "metal-organic framework" - a three-dimensional network composed of individual organic molecules that are held in place, on the sub-nanometre scale, by tiny metal connectors. The molecules capture the electrons generated by the sunlight and remove them from the semiconductor surface, where undesired chemical reactions might otherwise take place. In other words, the coating prevents the system from short-circuiting, which in turn allows efficient collection of electrons.

In tests, the researchers were able to show that their new design greatly reduces the voltage required to extract electrons from the semiconductor.

"Our results suggest that the innovative coatings can be used to improve semiconductor performance, leading to more energy-efficient generation of fuels with lower electrical input requirements," Sascha Ott says.

Credit: 
Uppsala University

Green chemistry: Politecnico di Milano publishes in Chem

image: Solid-state synthesis of a molecular crystal with a Borromean topology

Image: 
Chem

The prestigious journal Chem (Cell Press, impact factor: 19.735) has published the first mechanosynthesis of a molecular crystal with a Borromean topology. The research was carried out by an international team led by Prof. Pierangelo Metrangolo, Giuseppe Resnati, and Giancarlo Terraneo at the Department of Chemistry, Materials and Chemical Engineering "Giulio Natta" of the Politecnico di Milano.

The results obtained by the Politecnico di Milano group show that mechanosynthesis can be applied to the self-assembly of complex multi-component supramolecular structures such as the Borromean rings, demonstrating in detail the mechanism of formation of this complex topology. These findings open up new perspectives in the design of complex chemical systems, such as the mechanosynthesis of diamonds, the development of absorbent materials for hydrogen storage in the advanced automotive sector, ultra-light composites for aeronautics, and the development of new drugs.

Mechanochemistry studies the application of mechanical energy to a chemical reaction carried out in the solid state in order to influence its rate and trajectory. The origins of mechanochemistry can be traced back to the Stone Age, when the use of a mortar and pestle to prepare food or dyes was a way of inducing chemical transformations through mechanical force.

From an environmental point of view, mechanochemical processes are particularly sustainable since, taking place in the solid state, they do not use toxic or flammable solvents. For this reason, their use is spreading widely across numerous industrial sectors of green chemistry and sustainable engineering, including pharmaceuticals, polymer chemistry, and composites.

Despite this, the mechanisms by which mechanical energy contributes to the breaking and formation of chemical bonds are not yet fully understood, nor is the general applicability of mechanochemistry to diverse chemical processes.

BOX

The Borromean knot is formed by three rings in which two rings are parallel to each other and only the third ring interlocks them, keeping all three together. Cutting any one of the three rings causes the Borromean knot to fall apart.

The etymology of the name dates back to Federico Borromeo, cardinal and archbishop of Milan, who chose the Borromean knot as his emblem. The symbol of the Borromean dynasty is the three entangled rings, representing the Trinity. The Borromean rings indicate "strength in unity". The symbol has also been adopted by other cultures in various ages, including the Scots and the Vikings.

From a mathematical point of view, the Borromean topology is one of the most complex and fascinating.

Credit: 
Politecnico di Milano

The role of drones in 5G network security

The introduction of the fifth generation mobile network, or 5G, will change the way we communicate, multiply the capacity of the information highways, and allow everyday objects to connect to each other in real time. Its deployment constitutes a true technological revolution, but not one without security hazards. Until 5G technology is fully rolled out, some challenges remain to be resolved, including those concerning possible eavesdropping, interference and identity theft.

Unmanned Aerial Vehicles (UAVs), also known as drones, are emerging as enablers for many applications and services, such as precision agriculture, search and rescue, or, in the field of communications, temporary network deployment, coverage extension and security.

Giovanni Geraci, a researcher with the Department of Information and Communication Technologies (DTIC) at UPF, points out in a recent study: "On the one hand, it is important to protect the network when it is disturbed by a drone that has connected and generates interference. On the other, in the future, the same drones could assist in the prevention, detection, and recovery of attacks on 5G networks". The paper was published in August in the journal IEEE Wireless Communications together with Aly Sabri Abdalla, Keith Powell and Vuk Marojevic, researchers from the Department of Electronic Engineering and Computer Science at Mississippi State University (USA).

The study poses two different cases: first, the use of UAVs to prevent possible attacks, which is still in the early stages of research, and second, how to protect the network when it is disturbed by a drone, a much more realistic scenario, as Geraci explains: "A drone could be the source of interference to users. This can happen if the drone is very high up and when its transmissions travel a long distance because there are no obstacles in the way, such as buildings".

The integration of UAV devices into future mobile networks may expose the latter to potential UAV-based attacks. UAVs with cellular connections may experience radio propagation characteristics that differ from those experienced by a terrestrial user. Once a UAV flies well above the base stations, it can create interference or even enable rogue applications, such as a mobile phone connected to a UAV without authorization.

Based on the premise that 5G terrestrial networks will never be 100% secure, the authors of this study also suggest using UAVs to improve the security of 5G and beyond wireless access networks. "In particular, in our research we have considered jamming, identity theft, or 'spoofing', eavesdropping, and the mitigation mechanisms that are enabled by the versatility of UAVs", the researchers explain.

The study shows several areas in which the diversity and 3D mobility of UAVs can effectively improve the security of advanced wireless networks against eavesdropping, interference and 'spoofing', before they occur or for rapid detection and recovery. "The article raises open questions and research directions, including the need for experimental evaluation and a research platform for prototyping and testing the proposed technologies", Geraci explains.

Credit: 
Universitat Pompeu Fabra - Barcelona

Astronomers' success: seven new cosmic masers

image: Dr Paweł Wolak at the radio telescope RT-4 in Piwnice

Image: 
Andrzej Romański

The publication is the result of many months of observations of radiation coming from the plane of the Milky Way, namely from the spiral arms of our galaxy, where a lot of matter, dust and gas accumulate. It is under such conditions that massive stars are born.

Complex process

At the outset it is worth noting that the formation of high-mass stars is a complex process, less well understood than the formation of solar-type stars. A massive star in its early stage of evolution cannot be seen - scientists do not have tools of sufficient resolution. So only radio telescopes are at the astronomers' disposal.

- 'A young star, or one that is only just emerging, is surrounded by a cocoon of matter, so we can simply say that it is a real chemical "factory". We can find there a huge number of molecules, including methanol, the simplest alcohol, whose observations we have been focusing on,' explains Prof. Anna Bartkiewicz.

In the cocoon of dust and gas, maser emission arises. A maser can be compared to a laser, except that a laser amplifies light while a maser amplifies microwaves. It is this radiation that astronomers are able to observe.

- 'Different types of molecules send out radio waves at their own frequencies, and this is how we can recognise them. For example, molecules of methanol and water vapour shine at 6.7 GHz and 22 GHz respectively, which corresponds to wavelengths of 4.5 cm and 1.3 cm. We can say that we see colours,' explains Michał Durjasz. - 'We set the appropriate frequency for a given molecule and then we are able to observe only the one that interests us. In our latest research, we set the frequencies at 6.031 GHz and 6.035 GHz.'
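As a quick sanity check of the frequency-to-wavelength conversion quoted above (wavelength = c / frequency), here is a minimal Python sketch; the line list contains only the handful of frequencies mentioned in this article, not a complete maser catalogue.

```python
# Convert the maser line frequencies quoted above into wavelengths (lambda = c / f).
C = 299_792_458  # speed of light in m/s

lines_ghz = {
    "methanol (CH3OH)": 6.7,
    "water vapour (H2O)": 22.0,
    "excited OH": 6.035,
}

for name, f_ghz in lines_ghz.items():
    wavelength_cm = C / (f_ghz * 1e9) * 100
    print(f"{name}: {f_ghz} GHz -> {wavelength_cm:.2f} cm")

# Prints roughly 4.47 cm for methanol and 1.36 cm for water vapour,
# matching the approximate 4.5 cm and 1.3 cm figures quoted in the text.
```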

Previously, the method of searching for methanol masers was different - the Milky Way was scanned 'centimetre by centimetre', and if a detection was spotted, astronomers dwelled on that particular area for a longer time.

Months of observations

- 'Today, we already recognize star-forming areas, so we can focus on searching for the molecule we are interested in at the right frequency,' explains Prof. Bartkiewicz.

The scientists from Toruń had spent many months observing these areas, looking for even the smallest methanol masers. Then, an idea came from Prof. Marian Szymczak.

Similar analyses of the sky have been carried out all over the world - several teams have been working on this, for example in South Africa, Great Britain and Australia. It should be noted that the centre in Toruń has earned a great deal of merit in this area - it was at the NCU Institute of Astronomy in Piwnice that many previously undiscovered sources in the northern sky were detected. Recently, however, no one has undertaken such a comprehensive and detailed review of all available sources.

- 'We used our 32-metre radio telescope RT-4 to collect data. A new receiver, built in Piwnice by our engineers in the former Department of Radio Astronomy, was used to pick up waves at these frequencies. Special merit should be attributed to Eugeniusz Pazderski, who designed it,' says Dr Paweł Wolak. - 'Receivers on our radio telescopes partly resemble those used in home radios; the main difference is that, unlike home appliances, ours are cooled to very low temperatures - down to about -265°C - which definitely improves their efficiency.'

Astronomers began by compiling a list of all available sources in the northern sky. Then, those that could be observed through the radio telescope in Piwnice were selected out of the database of about a thousand areas. In total, 445 objects were studied in detail.

- 'It was really hard, systematic, often repetitive work, taking a lot of time and requiring patience,' says Michał Durjasz. - 'Not only was time needed, but also the right conditions.'

Months of observations of 445 star-forming regions have been successful - astronomers discovered that 37 of them show emission, which means they found the OH molecule there.

- 'It turned out that seven sources are completely new - nobody had seen or registered them before,' says Prof. Bartkiewicz. - 'Overall, our detection success rate was 6.9%. It might seem very little, and for some such a result could be discouraging. Our work with the radio telescope can be compared to listening for a mosquito buzzing during a loud concert.'

Further exploration of young massive stars, especially the newly discovered ones, awaits the Toruń astronomers. They also plan to create precise maps of the regions where the stars are forming. The planned activities, and the data already collected, will be important for a better understanding of the physical conditions of these objects and will provide a lot of information about their magnetic fields.

- 'In time, massive stars will become supernovae, black holes, the nuclei of the next generation of stars, or the heavy elements which give rise to life as we know it. And we still do not know how such a star is born; we do not know its origins. Of course, there are a lot of theories, but they are difficult to test, which is why we use all the tools available to us, and so far radio telescopes have proved their worth,' explains Dr Wolak.

Credit: 
Nicolaus Copernicus University in Toruń

Xenophobia in Germany is declining, but old resentments are paired with new radicalism

The study, which also explores people's belief in conspiracy theories, was conducted in cooperation with the Heinrich Böll Foundation and the Otto Brenner Foundation.

"This time we have some good news, but we must also point out that xenophobia and extreme right-wing attitudes are still at a high level - and that authoritarian and anti-democratic attitudes are a constant threat to our open, liberal society. What's more, certain ideologies are becoming entrenched," said Professor Oliver Decker.

According to the study, the percentage of people with "manifestly xenophobic" attitudes has fallen from 23.4 to 16.5 per cent compared to 2018. "What is striking here is the difference in this decline between western and eastern Germany," said Decker. In the west, the share dropped from 21.5 to 13.7 per cent, and in the east only from 30.7 to 27.8 per cent. Overall, 28.4 per cent (two years ago: 36 per cent) of respondents agreed with the statement that "foreigners only come here to take advantage of the welfare state" (east: 43.9 per cent, west: 24.5 per cent). Around 26 per cent of those surveyed consider the Federal Republic of Germany to be "dangerously swamped by foreigners" - a drop of ten percentage points. While the proportion of people with firmly right-wing attitudes continued to fall in western Germany (to 3 per cent), it rose again in eastern Germany. Almost one in ten people questioned there had a narrow, extremely right-wing world view.

The current study also shows that acceptance of traditional anti-Semitism has declined slightly nationwide, as has prejudice against Muslims. "But we must not delude ourselves, we are still seeing an alarmingly high level of agreement on some issues," said Professor Elmar Brähler. More than one in four respondents agreed with the statement that Muslims should be banned from immigrating to Germany. More than half of the people who took part in the study agreed with the statement that Sinti and Roma tend to commit crimes. Some 47 per cent of those surveyed claimed that they sometimes felt like foreigners in their own country because of large numbers of Muslims (2018: 55 per cent). The situation is similar for certain manifestations of anti-Semitism. For example, ten per cent of those surveyed were understanding of the fact that "some people have something against Jews" and 41 per cent believed that "the payment of reparations merely serves a Holocaust industry" (2018: 36 per cent).

Right-wing extremist attitudes and bridges to far-right ideology

The researchers found that 4.3 per cent of respondents had "manifest right-wing extremist attitudes" (9.5 per cent in the east, 3 per cent in the west) - with a slight increase in the east and a slight decrease in the west. In the researchers' view, authoritarianism as a personality trait is one of the main causes of right-wing extremist attitudes. "People with an authoritarian character tend to have rigid ideologies that allow them to simultaneously submit to authority, share in its power, and promote prejudice against others in the name of that system," said Elmar Brähler. "Around a third of Germans display authoritarian-type characteristics."

The tenth round of the study also includes an analysis of how the results have changed over time. "It has become apparent that over the years we have shifted the focus of our authoritarianism studies, away from right-wing extremism and towards a study of anti-modern milieus that are not necessarily manifestly far-right, but are always anti-democratic," said Oliver Decker. "Furthermore, elements of the extreme right-wing world view are shared. And there we see that such shared motifs act like a bridge, even between different cultural and social milieus. These include anti-feminism, anti-Semitism that focuses on Israel, and the belief in conspiracy theories. It is these bridges that constitute the danger to democracy."

Conspiracy theories, including about COVID-19

The topic of conspiracy theories was included in the Leipzig study for the fifth time, and this time also with questions related to the coronavirus pandemic. Levels of agreement with the statement "The coronavirus crisis has been blown out of proportion so that a few people could benefit from it" were 33 per cent ("very strong") and 15.4 per cent ("strong"), while agreement with the statement "The reasons behind the coronavirus pandemic will never come to light" was at 47.8 per cent ("very strong") and 14.6 per cent ("strong"). "Our survey has shown that belief in conspiracy theories has increased among the population since 2018. We would also say that this can act as a kind of gateway drug for an anti-modern world view," said Professor Decker.

About the Leipzig Authoritarianism Study

Since 2002, researchers at Leipzig University have been observing changes in authoritarian and far-right attitudes in Germany. From 2006 until 2012, the so-called "Mitte" Studies were carried out in cooperation with the Friedrich Ebert Foundation. The Leipzig studies are now published in cooperation with the Heinrich Böll Foundation and the Otto Brenner Foundation.

In the tenth wave, 2503 people were surveyed nationwide between 2 May 2020 and 19 June 2020 using a paper-and-pencil method. The respondents filled out a paper questionnaire themselves. The data was thus collected during the phase of the COVID-19 pandemic in which the severe restrictions to protect against infection were gradually relaxed. Social distancing and hygiene rules were observed during the interviews. Participants were selected using stratified sampling. As with the previous surveys in the series, this year's survey was also conducted by the Berlin market research institute USUMA on behalf of Leipzig University. The questionnaire used for the study consisted of two parts. In the first part, respondents were asked to provide socio-demographic information about themselves and their household in accordance with the demographic standards of the Federal Statistical Office. Afterwards, the respondents were given the second, main part of the questionnaire, which they were asked to answer on their own due to the at times highly personal information requested.

Credit: 
Universität Leipzig

Scientists develop a magnetic switch with lower energy consumption

image: Schematic representation of the magnetic switch

Image: 
UAB

Magnetic materials are ubiquitous in modern society, present in nearly all the technological devices we use every day. In particular, personal electronics like smartphones/watches, tablets, and desktop computers all rely on magnetic material to store information. Information in modern devices is stored in long chains of 1's and 0's, in the binary number system used as the language of computers.

"If you imagine a bar magnet, the same kind that many of us played with as kids (and perhaps still do), you may recall that they were labeled with a "north" side and a "south" side (or had two different colors on each end). If two magnets were brought next to each other, the same sides would repel and the opposite sides would attract - two distinct halves which can be easily identified. In this way, a "1" and a "0" can be assigned to a magnet's orientation, so that a long chain of magnets can be arranged in a computer to store data," explains ICREA researcher at the UAB Jordi Sort, one of the research coordinators.
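As a purely illustrative sketch of the orientation-to-bit mapping Sort describes (the chain of orientations below is hypothetical and not tied to any device in the study), a row of magnet states can be read back as binary data:

```python
# Illustrative only: read a hypothetical chain of eight magnet orientations
# ('N' = north-up, 'S' = south-up) as bits and decode them as one byte.
# The assignment of '1' to 'N' and '0' to 'S' is an arbitrary convention.
orientations = ['S', 'N', 'S', 'S', 'S', 'S', 'S', 'N']

bits = ''.join('1' if o == 'N' else '0' for o in orientations)
value = int(bits, 2)

print(bits)               # 01000001
print(value, chr(value))  # 65 A -- eight tiny magnets storing one character
```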

Currently, changing the orientation of a magnet (essentially writing or rewriting data) in electronics has relied on using current, the same current needed to power the outlets in your house and charge your phone. But therein lies a problem: when you run current through a material, the material heats up. This heat is a form of energy that is lost to the environment, essentially wasted. The demand to store more and more data increases each year, and necessitates creating smaller and smaller devices, which exponentially worsens this heating effect, leading to huge energy losses. It is no surprise, then, that government and private research has turned to developing new, energy-efficient materials and technologies to solve this issue.

One possible solution to this problem is to reorient the magnetic material using voltage instead of current, an approach studied in a field of research called voltage-controlled magnetism, which can significantly reduce the energy needed to alter the magnetic orientation. There are several approaches, but a promising and popular research branch in the field explores magneto-ionics, in which non-magnetic atoms are moved in and out of a magnetic material using voltage, thereby altering its magnetic properties.

A recent collaborative study between the UAB, Georgetown University, HZDR Dresden, the CNM centres in Madrid and Barcelona, the University of Grenoble, and ICN2, published in the journal Nature Communications, has shown that it is possible to switch magnetism ON and OFF in metals containing nitrogen (that is, to generate or remove all magnetic features of the material) with voltage. One simple analogy would be being able to increase or completely remove the strength with which a magnet attracts, for example, the door of a fridge, simply by connecting it to a battery and applying a certain voltage polarity. In this project, cobalt nitride is shown to be non-magnetic on its own, but when nitrogen is removed with voltage, it forms a cobalt-rich structure which is magnetic (and vice versa). This process is shown to be repeatable and durable, suggesting that such a system is a promising means of writing and storing data in a cyclable manner. Interestingly, it is also shown to require less energy and to be faster than systems using alternative non-magnetic atoms, such as oxygen, increasing the possible energy savings.

Credit: 
Universitat Autonoma de Barcelona

New electronic chip delivers smarter, light-powered AI

image: The light-powered AI chip - prototype technology that brings together imaging, processing, machine learning and memory.

Image: 
RMIT University

Researchers have developed artificial intelligence technology that brings together imaging, processing, machine learning and memory in one electronic chip, powered by light.

The prototype shrinks artificial intelligence technology by imitating the way that the human brain processes visual information.

The nanoscale advance combines the core software needed to drive artificial intelligence with image-capturing hardware in a single electronic device.

With further development, the light-driven prototype could enable smarter and smaller autonomous technologies like drones and robotics, plus smart wearables and bionic implants like artificial retinas.

The study, from an international team of Australian, American and Chinese researchers led by RMIT University, is published in the journal Advanced Materials.

Lead researcher Associate Professor Sumeet Walia, from RMIT, said the prototype delivered brain-like functionality in one powerful device.

"Our new technology radically boosts efficiency and accuracy by bringing multiple components and functionalities into a single platform," Walia who also co-leads the Functional Materials and Microsystems Research Group said.

"It's getting us closer to an all-in-one AI device inspired by nature's greatest computing innovation - the human brain.

"Our aim is to replicate a core feature of how the brain learns, through imprinting vision as memory.

"The prototype we've developed is a major leap forward towards neurorobotics, better technologies for human-machine interaction and scalable bionic systems."

Total package: advancing AI

Typically, artificial intelligence relies heavily on software and off-site data processing.

The new prototype aims to integrate electronic hardware and intelligence together, for fast on-site decisions.

"Imagine a dash cam in a car that's integrated with such neuro-inspired hardware - it can recognise lights, signs, objects and make instant decisions, without having to connect to the internet," Walia said.

"By bringing it all together into one chip, we can deliver unprecedented levels of efficiency and speed in autonomous and AI-driven decision-making."

The technology builds on an earlier prototype chip from the RMIT team, which used light to create and modify memories.

New built-in features mean the chip can now capture and automatically enhance images, classify numbers, and be trained to recognise patterns and images with an accuracy rate of over 90%.

The device is also readily compatible with existing electronics and silicon technologies, for effortless future integration.

Seeing the light: how the tech works

The prototype is inspired by optogenetics, an emerging tool in biotechnology that allows scientists to delve into the body's electrical system with great precision and use light to manipulate neurons.

The AI chip is based on an ultra-thin material - black phosphorus - that changes electrical resistance in response to different wavelengths of light.

The different functionalities such as imaging or memory storage are achieved by shining different colours of light on the chip.

Study lead author Dr Taimur Ahmed, from RMIT, said light-based computing was faster, more accurate and required far less energy than existing technologies.

"By packing so much core functionality into one compact nanoscale device, we can broaden the horizons for machine learning and AI to be integrated into smaller applications," Ahmed said.

"Using our chip with artificial retinas, for example, would enable scientists to miniaturise that emerging technology and improve accuracy of the bionic eye.

"Our prototype is a significant advance towards the ultimate in electronics: a brain-on-a-chip that can learn from its environment just like we do."

Credit: 
RMIT University

Patients strongly favor banning bacon in hospitals, according to new survey

WASHINGTON--A majority of hospitalized patients favor eliminating processed meats--including bacon, deli meat, and sausage--from hospital menus to reduce cancer risk, according to a new survey published in the Journal of Hospital Management and Health Policy.

Researchers with the Physicians Committee for Responsible Medicine surveyed a total of 200 patients in two Washington, D.C., hospitals and found that:

83% of patients are in favor of hospitals eliminating processed meat in order to reduce cancer risk.

69% of patients feel it is not important that hospitals have bacon or sausage on the menu.

The World Health Organization has determined that processed meat is a major contributor to colorectal cancer, classifying it as "carcinogenic to humans." A 50-gram serving a day--one hot dog or two strips of bacon--increases colorectal cancer risk by 18%. Processed meat is also linked to stomach, pancreatic, prostate, and breast cancers, along with cardiovascular disease and type 2 diabetes.

In 2017, the American Medical Association passed a resolution urging hospitals to eliminate processed meat to promote a healthy food environment.

"Health experts have called on hospitals to eliminate processed meat from their menus to reduce the risk of cancer and cardiovascular disease," says Neal Barnard, MD, study author and president of the Physicians Committee. "Now it's clear that patients overwhelmingly agree that they'd like to see healthy food on their hospital trays."

In 2019, D.C. Councilmember Mary Cheh introduced a groundbreaking bill that would require hospitals in the district to improve the nutritional quality of their menus by eliminating processed meat such as bacon and hot dogs, making plant-based options available, and reducing sugar-sweetened beverages. If passed, The Healthy Hospitals Amendment Act would become the first bill in the United States to require the removal of processed meat from health care facilities.

"It's not uncommon for patients to wake up from surgery to be greeted with bacon and sausage--the very foods that may have contributed to their health problems in the first place," adds Dr. Barnard. "It's time to create a healthier food environment."

D.C. is projected to have 131,194 cases of heart disease in 2030, nearly four times the number in 2010. Recent government statistics show that type 2 diabetes takes an extraordinarily high toll in Wards 7 and 8, where United Medical Center (UMC), one of the surveyed hospitals, is located. Statistics also show disproportionate colorectal cancer incidence in these same wards.

Credit: 
Physicians Committee for Responsible Medicine

A 2D perspective: stacking materials to realize a low power consuming future

image: Dr. Myoung-Jae Lee, DGIST

Image: 
DGIST

2D materials have been popular among materials scientists owing to their attractive electronic properties, which allow applications in photovoltaics, semiconductors, and water purification. In particular, the relative physical and chemical stability of 2D materials allows them to be "stacked" and "integrated" with each other. In theory, this stability enables the fabrication of 2D material-based structures like coupled "quantum wells" (CQWs), systems of interacting potential "wells" - regions of low energy that allow only specific energies for the particles trapped within them. CQWs can be used to design resonant tunneling diodes, electronic devices that exhibit negative differential resistance (a negative rate of change of current with voltage) and are crucial components of integrated circuits. Such chips and circuits are integral to technologies that emulate the neurons and synapses responsible for memory storage in the biological brain.

Proving that 2D materials can indeed be used to create CQWs, a research team led by Myoung-Jae Lee, Ph.D., of Daegu Gyeongbuk Institute of Science and Technology (DGIST) designed a CQW system that stacks a tungsten disulfide (WS2) layer between two hexagonal boron nitride (hBN) layers. "hBN is a nearly ideal 2D insulator with high chemical stability. This makes it a perfect choice for integration with WS2, which is known to be a semiconductor in 2D form," explains Prof. Lee. Their findings are published in ACS Nano.

The team measured the energies of excitons--bound systems comprising an electron and an electron hole (the absence of an electron)--and trions (electron-bound excitons) for the CQW and compared them to those for bilayer WS2 structures to identify the effect of the WS2-WS2 interaction. They also measured the current-voltage characteristics of a single CQW to characterize its behavior.

They observed a gradual decrease in both the exciton and trion energies with an increasing number of stacks, and an abrupt decrease in the bilayer WS2. They attributed these observations to a long-range inter-well interaction and to strong WS2-WS2 interactions in the absence of hBN, respectively. The current-voltage characteristics confirmed that the CQW behaves like a resonant tunneling diode.

So, what implications do these results have for the future of electronics? Prof. Lee summarizes, "We can use resonant tunneling diodes for making multivalued logic devices that will reduce circuit complexity and computing power consumptions considerably. This, in turn, can lead to the development of low-power electronics."

These findings are sure to revolutionize the electronics industry with extreme low power semiconductor chips and circuits, but what is more exciting is where these chips can take us, as they can be employed in applications that mimic neurons and synapses, which play a role in memory storage in the biological brain. This "2D perspective" may thus be the next big thing in artificial intelligence!

Credit: 
DGIST (Daegu Gyeongbuk Institute of Science and Technology)

Community helps scientists evaluate smoke forecasts

During the smoky summer of 2018, two wildfires in Utah County burned a combined 121,000 acres, sending smoke pouring into the valleys of the Wasatch Front. Atmospheric scientists are always working to better forecast how smoke moves from fires, just as they work to forecast hurricanes and snowstorms.

But the fires in 2018 provided a unique opportunity for scientists. Across the Wasatch Front, both researchers and community members maintain enough air quality sensors to provide a high-resolution picture of how the smoke moved through the valley--perfect for testing and refining smoke forecast models.

"This forecast would be similar to how we would forecast rainy weather or clear conditions," says Derek Mallia, research assistant professor in the Department of Atmospheric Sciences, "except we can now do it for smoke."

Mallia and his colleagues, including researchers from the Department of Chemical Engineering and School of Computing, published their results in the Journal of Geophysical Research-Atmospheres.

An air quality network

Air quality is a high-priority topic for Utahns. Because of the Salt Lake Valley's mountainous geography, the area experiences wintertime temperature inversions that trap air pollution and emissions, often resulting in unhealthy air conditions. Researchers, particularly those at the U, have focused on understanding and measuring the air conditions in the valley through a network of research-grade sensors. They've also placed sensors on vehicles that move through the valley: TRAX light rail, Google StreetView cars and a van affectionately named the "NerdMobile."

Members of the community also maintain their own sensors. Kerry Kelly, assistant professor of chemical engineering, and colleagues have developed low-cost interconnected particulate matter sensors that are maintained by homeowners throughout the Salt Lake Valley, improving the resolution of measurements. The low-cost sensor network is called Air Quality and yoU, or AQ&U. Air pollution is not distributed evenly, and all of these sensors together help researchers understand the where, when and why of polluted air.

Modeling smoke

In Utah's summers, however, a temperature inversion isn't a problem. But smoke from Western wildfires is.

"From a practical standpoint, smoke is yet another variable that we need to account for in a weather forecast," Mallia says. "Similar to how unsettled weather such as snow or thunderstorms can impact our everyday activities, smoke can also play an important role." Particularly vulnerable, he says, are people with asthma or other respiratory or cardiovascular diseases. Smoke can also impact recreation. "Who wants to go sightseeing when Utah is blanketed in smoke?"

Models that predict the movement of smoke need to be validated, or compared with observations, to make sure they're simulating smoke conditions with reasonable accuracy. But when wildfires occur in remote locations, the limited number of air quality sensors is usually not sufficient to evaluate models. That's where the Salt Lake Valley's air quality network comes in.

What the network saw

In the 2018 fire season, nearly 60,000 fires burned nearly 9 million acres and destroyed more than 18,000 homes across the United States. Following months of hot and dry conditions, the Pole Creek and Bald Mountain Fires combined to burn nearly 121,000 acres in central Utah. The smoke filled the valleys of the Wasatch Front, which were fortunately well equipped with air quality instrumentation. When the fires were safely contained, the researchers saw a scientific opportunity.

Mallia, Kelly, U research assistant professor Logan Mitchell and colleagues including professor Adam Kochanski of San Jose State University and professor Jan Mandel of CU Denver, looked at the data that came back from the sensors--both the research-grade sensors and the low-cost versions at people's homes. Their results showed that measurements of particulate matter in the air by the low-cost sensors were accurate to within 10% of the measurements at nearby research-grade sensors.

Smoke forecasts from the model captured the timing of smoke arrival, but not the amount--the researchers found that the forecast overestimated the amount of smoke by a factor of two. That result helps the scientists to then go back to the model and figure out why so that the next version can be more accurate.
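As an illustration of the kind of model-versus-observation comparison described above (the hourly PM2.5 values below are hypothetical placeholders, not the study's data or code), an overall bias factor can be computed from paired forecast and sensor readings:

```python
# Illustrative only: compare a smoke forecast against co-located sensor observations.
# The hourly PM2.5 values are hypothetical placeholders, not study data.
forecast = [40.0, 80.0, 120.0, 60.0, 30.0]   # modelled PM2.5, micrograms per cubic metre
observed = [22.0, 41.0, 58.0, 33.0, 14.0]    # sensor PM2.5 at the same hours

bias_factor = sum(forecast) / sum(observed)  # > 1 means the model overestimates smoke
errors = [abs(f - o) / o * 100 for f, o in zip(forecast, observed)]

print(f"overall bias factor: {bias_factor:.1f}x")  # ~2x here; the study reported a similar overestimate
print(f"mean absolute percentage error: {sum(errors) / len(errors):.0f}%")
```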

The results also gave some insights into how the mountain valleys can disperse the smoke. "For example," Mallia says, "we found that canyon winds during the nighttime can filter cleaner mountain air into the valley, which is why we saw less polluted air near the valley benches."

The value of the community

Mallia says that the sensors on TRAX trains were invaluable in covering more ground than a stationary air monitor, and that the involvement of community members, with sensors around their homes, was key to expanding the study area even further.

"There is a lot of space in the valley that is privately owned, so getting the public involved is the only way that we can properly sample different areas across the Salt Lake Valley," he says. "This allows us to sample more points in the valley and gives us greater confidence towards identifying the strengths and weaknesses of our smoke forecasts."

Unfortunately, he adds, smoky days will become more frequent in the future. Climate change is projected to make hot and dry conditions more likely, which is the perfect recipe for more numerous and intense wildfires. Forecasts can't prevent the smoke from coming any more than weather forecasts can prevent blizzards or hurricanes--but they can help all of us stay informed and prepared.

Credit: 
University of Utah

The Lancet Infectious Diseases: Chinese vaccine candidate based on inactivated SARS-CoV-2 virus appears safe and induces an immune response in healthy volunteers, preliminary study finds

Phase 1/2 randomised controlled trial of an inactivated SARS-CoV-2 vaccine candidate (CoronaVac) involved more than 700 healthy volunteers aged 18-59 years recruited in China between 16 April and 5 May 2020.

The vaccine appeared to be safe and well tolerated at all tested doses. The most common reported side effect was pain at the injection site.

Within 14 days of the final dose, the study detected robust antibody responses after two injections of the vaccine candidate given 14 days apart, even at the lowest dose tested (3μg).

Antibody levels induced by the vaccine were lower than those seen in people who have been infected by and recovered from COVID-19 but researchers say they still expect the vaccine could provide protection from the virus.

The primary objective was to evaluate the immune response and safety of the vaccine, and the study was not designed to assess how effective it is at preventing infection with SARS-CoV-2.

Results from an early-phase randomised clinical trial of a Chinese vaccine candidate based on the inactivated whole SARS-CoV-2 virus (CoronaVac) are published today in The Lancet Infectious Diseases journal, finding the formulation appears safe and induces an antibody response in healthy volunteers aged 18 to 59 years.

Antibody responses could be induced within 28 days of the first immunisation, by giving two doses of the vaccine candidate 14 days apart.

The study also identifies 3μg as the optimum dose for generating high antibody responses while taking account of side effects and production capacity; this dose will be studied further in phase 3 trials that are already underway [1].

The mean neutralising antibody titres induced by CoronaVac ranged from 23.8 to 65.4, which were lower than levels seen in people who have previously had COVID-19 (average level of 163.7) [2]. However, the researchers still believe CoronaVac could provide sufficient protection against COVID-19 based on their experience with other vaccines and data from their preclinical studies with macaques.

The trial was not designed to assess efficacy, and findings from phase 3 studies will be crucial for determining if the immune response generated by CoronaVac is sufficient to protect from SARS-CoV-2 infection. Additionally, the persistence of the antibody response needs to be verified in future studies to determine how long-lived any protection might be. Finally, the study only included healthy adults aged 18 to 59 years and further studies will be needed to test the vaccine candidate in other age groups, as well as in people who have pre-existing medical conditions.

Professor Fengcai Zhu, joint lead author of the study, from the Jiangsu Provincial Center for Disease Control and Prevention, Nanjing, China, said: "Our findings show that CoronaVac is capable of inducing a quick antibody response within four weeks of immunisation by giving two doses of the vaccine at a 14 day interval. We believe that this makes the vaccine suitable for emergency use during the pandemic. In the longer term, when the risk of COVID-19 is lower, our findings suggest that giving two doses with a one month interval, rather than a two week interval, might be more appropriate for inducing stronger and potentially longer-lasting immune responses. However, further studies are needed to check how long the antibody response remains after either vaccination schedule." [3]

CoronaVac is one of 48 vaccine candidates for COVID-19 that are currently in clinical trials [4]. It is a chemically-inactivated whole virus vaccine based on a strain of SARS-CoV-2 that was originally isolated from a patient in China.

The phase 1/2 clinical trial was carried out in the Suining County of Jiangsu province, China. All participants were aged 18 to 59 years and only people who did not have any history of infection with COVID-19, had not travelled to areas with high incidence of the disease, and did not have signs of fever at the time of recruitment were included in the study.

In the first phase, 144 healthy volunteers were enrolled between 16 April and 25 April 2020. Participants were split into two groups to receive one of two vaccination schedules - either two injections given 14 days apart (day 0 and 14 schedule), or two injections given 28 days apart (day 0 and 28 schedule).

Within each of the two groups, participants were randomly assigned to receive either a low dose of the vaccine (3μg, 24 participants), a high dose (6μg, 24 participants) or placebo (24 participants). In total, in this first phase, 96 participants received two doses of CoronaVac and 47 received the placebo (one participant withdrew from the placebo group). Antibody levels were checked 14 days and 28 days after the final immunisation.

For the first seven days after each dose, participants used paper diary cards to record any side effects they experienced, such as pain or redness at the injection site, or body-wide symptoms such as fever or cough. Serious adverse events were collected throughout the study and until six months after the last dose.

In the phase 1 trial, the overall incidence of adverse reactions was similar in the low- and high-dose groups at both vaccination schedules, indicating no dose-related safety concerns (for the day 0 and 14 schedule: 7/24 [29%] participants given the 3μg dose experienced reactions; 6μg dose, 9/24 [38%]; placebo group, 2/24 [8%]. Day 0 and 28 schedule: 3μg dose, 3/24 [13%]; 6μg dose, 4/24 [17%]; placebo group, 3/23 [13%].)

Most of the reported side effects were mild and participants recovered within 48 hours, with the most common symptom being pain at the injection site (Day 0 and 14 schedule: 3μg dose, 4/24 [17%] participants; 6μg dose, 5/24 [21%]; placebo group, 1/24 [4%]. Day 0 and 28 schedule: 3μg dose, 3/24 [13%]; 6μg dose, 3/24 [13%]; placebo group, 3/23 [13%].)

There was one case of severe allergic reaction within 48 hours of receiving the first dose in the 6μg group of the day 0 and 14 vaccination schedule (1/24, 4%). This was considered to be possibly related to vaccination. However, the participant was treated and recovered within three days, and did not experience a similar reaction after the second dose of vaccine.

Phase 2 of the trial was initiated once all participants in phase 1 had finished a 7-day observation period after their first dose. Some 600 healthy volunteers were enrolled between 3 May and 5 May 2020. Again, participants were separated into two groups for the 14-day and 28-day vaccination schedules and then randomly assigned to receive either a low dose of the vaccine (3μg, 120 participants), a high dose (6μg, 120 participants) or placebo (60 participants). In total, in this second phase, 480 participants received at least one dose of CoronaVac. Five participants withdrew from the second phase of the study: 2 withdrew voluntarily, 2 withdrew owing to lockdown restrictions and 1 person moved location.

Between the phase 1 and phase 2 trials, the researchers changed the manufacturing process of the vaccine to increase production capacity. There was no difference in reported side effects; however, the immune responses were much stronger in the second phase of the study after the new process was introduced (mean neutralising antibody titres 14 days after final immunisation with two 3μg doses on the day 0 and 14 schedule: phase 1, 5.6; phase 2, 27.6). In addition, the proportion of participants producing antibodies to SARS-CoV-2 was higher after the new process was introduced (number of participants producing antibodies to SARS-CoV-2 14 days after final immunisation on the day 0 and 14 schedule with the 3μg dose: phase 1, 11/24 [45.8%]; phase 2, 109/118 [92.4%]).

After investigation, the researchers found that the new manufacturing process resulted in a greater number of intact spike proteins on the surface of the inactivated virus that makes up the vaccine; the spike protein is thought to be a key molecule that the immune system uses to recognise the virus.

Overall, the researchers found the day 0 and 28 schedule induced the strongest antibody responses (mean neutralising antibody titres 28 days after final immunisation: day 0 and 14 schedule: 3μg, 23.8; 6μg, 30.1; day 0 and 28 schedule: 3μg, 44.1; 6μg, 65.4).

However, even at the highest levels, antibodies induced by the CoronaVac vaccine candidate were lower than those that have been observed in patients who have recovered from COVID-19 (Mean neutralising antibody titres: CoronaVac, 65.4; COVID-19 patients, 163.7).

Dr Gang Zeng, one of the authors of the study, from Sinovac Biotech, Beijing, China, said: "CoronaVac is one of many COVID-19 vaccine candidates that are being explored in parallel. There are a multitude of different vaccine technologies under investigation, each with their own advantages and disadvantages. CoronaVac could be an attractive option because it can be stored in a standard refrigerator between 2 and 8 degrees centigrade, which is typical for many existing vaccines including flu. The vaccine may also remain stable for up to three years in storage, which would offer some advantages for distribution to regions where access to refrigeration is challenging. However, data from phase 3 studies will be crucial before any recommendations about the potential uses of CoronaVac can be made." [3]

The authors note several limitations to their study. The phase 2 trial did not assess T cell responses, which are another arm of the immune response to virus infections. This will be studied in ongoing phase 3 studies.

Writing in a linked Comment, Dr Naor Bar-Zeev, from Johns Hopkins University, who was not involved in the study, said: "Like all phase 2 trials, the results must be interpreted with caution until phase 3 results are published. But even then, after phase 3 trial completion and after licensure, we should prudently remain cautious."

Credit: 
The Lancet

AI tool may predict movies' future ratings

Movie ratings can determine a movie's appeal to consumers and the size of its potential audience. Thus, they have an impact on a film's bottom line. Typically, humans do the tedious work of rating a movie manually, watching it and judging the presence of violence, drug abuse and sexual content.

Now, researchers at the USC Viterbi School of Engineering, armed with artificial intelligence tools, can rate a movie's content in a matter of seconds, based on the script and before a single scene is shot. Such an approach could give movie executives the ability to design a movie's rating in advance, making the appropriate edits to the script before shooting begins. Beyond the potential financial impact, such instantaneous feedback would allow storytellers and decision-makers to reflect on the content they are creating for the public and the impact such content might have on viewers.

Using artificial intelligence applied to scripts, Shrikanth Narayanan, University Professor and Niki & C. L. Max Nikias Chair in Engineering, and a team of researchers from the Signal Analysis and Interpretation Lab (SAIL) at USC Viterbi have demonstrated that linguistic cues can effectively signal violent acts, drug abuse and sexual content (actions that are often the basis for a film's rating) about to be carried out by a film's characters.

Method:

Using 992 movie scripts that included violent, substance-abuse and sexual content, as determined by Common Sense Media, a non-profit organization that rates and makes recommendations for families and schools, the SAIL research team trained artificial intelligence to recognize corresponding risk behaviors, patterns and language.

The AI tool receives the entire script as input, processes it through a neural network and scans it for the semantics and sentiment expressed. In the process, it classifies sentences and phrases as positive, negative, aggressive and other descriptors, and automatically sorts words and phrases into three categories: violence, drug abuse and sexual content.
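To give a flavour of the kind of classification involved, the sketch below is a minimal, hypothetical stand-in rather than the SAIL team's actual neural network: it uses TF-IDF features with one simple classifier per risk category, and a handful of invented script lines in place of the 992 labelled scripts.

```python
# Minimal sketch of sentence-level risk-content tagging. This is NOT the SAIL
# model: the real system was trained on 992 full scripts labelled by Common
# Sense Media and used a neural network. The sentences and labels below are
# invented placeholders for illustration only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

sentences = [
    "He slams the door and fires twice into the dark hallway.",
    "She pours another drink and lights a cigarette with shaking hands.",
    "They kiss as the camera pulls back from the bedroom window.",
    "The family eats breakfast while the radio plays the morning news.",
]
# Multi-label targets: [violence, substance abuse, sexual content]
labels = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
])

vectorizer = TfidfVectorizer(ngram_range=(1, 2)).fit(sentences)
model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
model.fit(vectorizer.transform(sentences), labels)

# Tag a new line of action and read off the predicted risk categories
new_line = ["He reloads the pistol behind the bar."]
print(model.predict(vectorizer.transform(new_line)))
```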

Victor Martinez, a doctoral candidate in computer science at USC Viterbi and the lead researcher on the study, which will appear in the Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, said: "Our model looks at the movie script, rather than the actual scenes, including e.g. sounds like a gunshot or explosion that occur later in the production pipeline. This has the benefit of providing a rating long before production to help filmmakers decide e.g. the degree of violence and whether it needs to be toned down."

The research team also includes Narayanan, a professor of electrical and computer engineering, computer science and linguistics; Krishna Somandepalli, a Ph.D. candidate in electrical and computer engineering at USC Viterbi; and Professor Yalda T. Uhls of UCLA's Department of Psychology. They discovered many interesting connections between the portrayals of risky behaviors.

"There seems to be a correlation in the amount of content in a typical film focused on substance abuse and the amount of sexual content. Whether intentionally or not, filmmakers seem to match the level of substance abuse-related content with sexually explicit content," said Martinez.

Another interesting pattern also emerged. "We found that filmmakers compensate for low levels of violence with joint portrayals of substance abuse and sexual content," Martinez said.

Moreover, while many movies contain depictions of rampant drug abuse and sexual content, the researchers found it highly unlikely for a film to have high levels of all three risky behaviors, perhaps because of Motion Picture Association (MPA) standards.

They also found an interesting connection between risk behaviors and MPA ratings. As sexual content increases, the MPA appears to put less emphasis on violence/substance-abuse content. Thus, regardless of violent and substance abuse content, a movie with a lot of sexual content will likely receive an R rating.

Narayanan, whose SAIL lab has pioneered the field of media informatics and applied natural language processing to raise awareness in the creative community about the nuances of storytelling, calls media "a rich avenue for studying human communication, interaction and behavior, since it provides a window into society."

"At SAIL, we are designing technologies and tools, based on AI, for all stakeholders in this creative business - the writers, film-makers and producers - to raise awareness about the varied important details associated in telling their story on film," Narayanan said.

"Not only are we interested in the perspective of the storytellers of the narratives they weave," Narayanan said, "but also in understanding the impact on the audience and the 'take-away' from the whole experience. Tools like these will help raise societally-meaningful awareness, for example, through identifying negative stereotypes."

Added Martinez: "In the future, I'm interested in studying minorities and how they are represented, particularly in cases of violence, sex and drugs."

Credit: 
University of Southern California

Could your vacuum be listening to you?

image: Researchers repurposed the laser-based navigation system on a vacuum robot (right) to pick up sound vibrations and capture human speech bouncing off objects like a trash can placed near a computer speaker on the floor.

Image: 
Sriram Sami

A team of researchers demonstrated that popular robotic household vacuum cleaners can be remotely hacked to act as microphones.

The researchers--including Nirupam Roy, an assistant professor in the University of Maryland's Department of Computer Science--collected information from the laser-based navigation system in a popular vacuum robot and applied signal processing and deep learning techniques to recover speech and identify television programs playing in the same room as the device.

The research demonstrates the potential for any device that uses light detection and ranging (Lidar) technology to be manipulated into collecting sound, despite not having a microphone. This work, a collaboration with assistant professor Jun Han at the National University of Singapore, was presented at the Association for Computing Machinery's Conference on Embedded Networked Sensor Systems (SenSys 2020) on November 18, 2020.

"We welcome these devices into our homes, and we don't think anything about it," said Roy, who holds a joint appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS). "But we have shown that even though these devices don't have microphones, we can repurpose the systems they use for navigation to spy on conversations and potentially reveal private information."

The Lidar navigation systems in household vacuum bots shine a laser beam around a room and sense the reflection of the laser as it bounces off nearby objects. The robot uses the reflected signals to map the room and avoid collisions as it moves through the house.
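As a rough illustration of how a single sweep becomes a map, the sketch below converts hypothetical angle-and-range readings into 2-D obstacle points and drops them into a coarse occupancy grid; a real robot streams these readings from its sensor and fuses many sweeps with its own motion estimate.

```python
# Minimal sketch of turning one Lidar sweep into 2-D obstacle points for a map.
# The angles and ranges below are made up; they stand in for a real sensor feed.
import numpy as np

angles_deg = np.arange(0, 360, 1.0)                         # one reading per degree
ranges_m = 2.0 + 0.5 * np.sin(np.radians(4 * angles_deg))   # fake room distances

theta = np.radians(angles_deg)
x = ranges_m * np.cos(theta)   # obstacle position relative to the robot
y = ranges_m * np.sin(theta)

# Drop the points into a coarse occupancy grid (10 cm cells over an 8 m x 8 m area)
cell = 0.1
grid = np.zeros((80, 80), dtype=bool)
ix = np.clip(((x + 4.0) / cell).astype(int), 0, 79)
iy = np.clip(((y + 4.0) / cell).astype(int), 0, 79)
grid[iy, ix] = True
print(f"{grid.sum()} occupied cells mapped from one sweep")
```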

Privacy experts have suggested that the maps made by vacuum bots, which are often stored in the cloud, could pose privacy risks by giving advertisers access to information about such things as home size, which suggests income level, and other lifestyle-related information. Roy and his team wondered whether the Lidar in these robots could also pose security risks as sound recording devices in users' homes or businesses.

Sound waves cause objects to vibrate, and these vibrations cause slight variations in the light bouncing off an object. Laser microphones, used in espionage since the 1940s, are capable of converting those variations back into sound waves. But laser microphones rely on a targeted laser beam reflecting off very smooth surfaces, such as glass windows.
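The basic recovery idea can be sketched in a few lines: treat the reflected-light intensity as a time series, filter it down to the speech band and rescale it as audio. The example below is purely illustrative, with a synthetic intensity trace standing in for a real photodiode signal; it is not the LidarPhone processing pipeline.

```python
# Toy illustration of the laser-microphone principle: a reflected-light
# intensity signal is band-pass filtered to the speech range and rescaled as
# audio. The "intensity" trace here is synthetic; real returns are far noisier.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.io import wavfile

fs = 8000                                    # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Slow ambient drift plus a faint 440 Hz vibration riding on the reflection
intensity = 1.0 + 0.2 * np.sin(2 * np.pi * 0.5 * t) + 1e-3 * np.sin(2 * np.pi * 440 * t)

# Band-pass to roughly the speech band (300-3400 Hz) to isolate the vibration
b, a = butter(4, [300, 3400], btype="band", fs=fs)
audio = filtfilt(b, a, intensity)

# Normalise and write out as a 16-bit WAV file
audio = audio / np.max(np.abs(audio))
wavfile.write("recovered.wav", fs, (audio * 32767).astype(np.int16))
```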

A vacuum Lidar, on the other hand, scans the environment with a laser and senses the light scattered back by objects that are irregular in shape and density. The scattered signal received by the vacuum's sensor provides only a fraction of the information needed to recover sound waves. The researchers were unsure whether a vacuum bot's Lidar system could be manipulated to function as a microphone and whether the signal could be interpreted as meaningful sound.

First, the researchers hacked a robot vacuum to show they could control the position of the laser beam and send the sensed data to their laptops through Wi-Fi without interfering with the device's navigation.

Next, they conducted experiments with two sound sources. One source was a human voice reciting numbers played over computer speakers and the other was audio from a variety of television shows played through a TV sound bar. Roy and his colleagues then captured the laser signal sensed by the vacuum's navigation system as it bounced off a variety of objects placed near the sound source. Objects included a trash can, cardboard box, takeout container and polypropylene bag--items that might normally be found on a typical floor.

The researchers passed the signals they received through deep learning algorithms that were trained to either match human voices or to identify musical sequences from television shows. Their computer system, which they call LidarPhone, identified and matched spoken numbers with 90% accuracy. It also identified television shows from a minute's worth of recording with more than 90% accuracy.
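As a rough stand-in for that recognition stage (and not the LidarPhone system itself), the sketch below reduces each recovered clip to a log-spectrogram summary and trains a small neural network to separate two classes; the clips here are synthetic tones rather than real Lidar-recovered audio.

```python
# Minimal stand-in for the recognition stage: each clip is reduced to an
# average log-spectrogram and a tiny neural network separates two classes
# (e.g. "speech-like" vs. "TV-like"). All data below is synthetic.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 8000

def fake_clip(freq):
    t = np.arange(0, 1.0, 1.0 / fs)
    return np.sin(2 * np.pi * freq * t) + 0.1 * np.random.randn(t.size)

def features(clip):
    _, _, sxx = spectrogram(clip, fs=fs, nperseg=256)
    return np.log1p(sxx).mean(axis=1)        # average log power per frequency bin

clips = [fake_clip(300) for _ in range(20)] + [fake_clip(1200) for _ in range(20)]
labels = [0] * 20 + [1] * 20                  # 0 = "speech-like", 1 = "TV-like"
x = torch.tensor(np.stack([features(c) for c in clips]), dtype=torch.float32)
y = torch.tensor(labels)

net = nn.Sequential(nn.Linear(x.shape[1], 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):                          # tiny training loop
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()

print("training accuracy:", (net(x).argmax(dim=1) == y).float().mean().item())
```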

"This type of threat may be more important now than ever, when you consider that we are all ordering food over the phone and having meetings over the computer, and we are often speaking our credit card or bank information," Roy said. "But what is even more concerning for me is that it can reveal much more personal information. This kind of information can tell you about my living style, how many hours I'm working, other things that I am doing. And what we watch on TV can reveal our political orientations. That is crucial for someone who might want to manipulate the political elections or target very specific messages to me."

The researchers emphasize that vacuum cleaners are just one example of potential vulnerability to Lidar-based spying. Many other devices could be open to similar attacks, such as smartphone infrared sensors used for face recognition or passive infrared sensors used for motion detection.

"I believe this is significant work that will make the manufacturers aware of these possibilities and trigger the security and privacy community to come up with solutions to prevent these kinds of attacks," Roy said.

Credit: 
University of Maryland