Tech

Algorithm uses mass spectrometry data to predict identity of molecules

An algorithm designed by researchers from Carnegie Mellon University's Computational Biology Department and St. Petersburg State University in Russia could help scientists identify unknown molecules. The algorithm, called MolDiscovery, uses mass spectrometry data from molecules to predict the identity of unknown substances, telling scientists early in their research whether they have stumbled on something new or merely rediscovered something already known.

This development could save time and money in the search for new naturally occurring products that could be used in medicine.

"Scientists waste a lot of time isolating molecules that are already known, essentially rediscovering penicillin," said Hosein Mohimani, an assistant professor and part of the research team. "Detecting whether a molecule is known or not early on can save time and millions of dollars, and will hopefully enable pharmaceutical companies and researchers to better search for novel natural products that could result in the development of new drugs."

The team's work, "MolDiscovery: Learning Mass Spectrometry Fragmentation of Small Molecules," was recently published in Nature Communications. The research team included Mohimani; CMU Ph.D. students Liu Cao and Mustafa Guler; Yi-Yuan Lee, a research assistant at CMU; and Azat Tagirdzhanov and Alexey Gurevich, both researchers at the Center for Algorithmic Biotechnology at St. Petersburg State University.

Mohimani, whose research in the Metabolomics and Metagenomics Lab focuses on the search for new, naturally occurring drugs, said that after a scientist detects a promising drug candidate molecule in, for example, a marine or soil sample, it can take a year or longer to identify it, with no guarantee that the substance is new. MolDiscovery uses mass spectrometry measurements and a predictive machine learning model to identify molecules quickly and accurately.

Mass spectrometry measurements are the fingerprints of molecules, but unlike fingerprints, there is no enormous database to match them against. Even though hundreds of thousands of naturally occurring molecules have been discovered, scientists do not have access to mass spectrometry data for most of them. MolDiscovery predicts the identity of a molecule from its mass spectrometry data without relying on a database of mass spectra to match against.

The team hopes MolDiscovery will be a useful tool for labs in the discovery of novel natural products. MolDiscovery could work in tandem with NRPminer, a machine learning platform developed by Mohimani's lab that helps scientists isolate natural products. Research related to NRPminer was also recently published in Nature Communications.

Credit: 
Carnegie Mellon University

Altered microstructure improves organic-based, solid state lithium EV battery

Only 2% of vehicles are electrified to date, but that share is projected to reach 30% by 2030. A key to improving the commercialization of electric vehicles (EVs) is to heighten their gravimetric energy density - measured in watt-hours per kilogram (Wh/kg) - using safer, easily recyclable materials that are abundant. Lithium-metal anodes are considered the "holy grail" for improving energy density in EV batteries: incumbent options such as graphite deliver around 240 Wh/kg, while the race is on to reach a more competitive 500 Wh/kg.

Yan Yao, Cullen Professor of electrical and computer engineering at the Cullen College of Engineering at the University of Houston, and UH postdoctoral researcher Jibo Zhang are taking on this challenge with Rice University colleagues. In a paper published June 17 in Joule, Zhang, Yao and team demonstrate a two-fold improvement in energy density for organic-based, solid state lithium batteries by using a solvent-assisted process to alter the electrode microstructure. Zhaoyang Chen, Fang Hao, Yanliang Liang of UH, Qing Ai, Tanguy Terlier, Hua Guo and Jun Lou of Rice University co-authored the paper.

"We are developing low-cost, earth-abundant, cobalt-free organic-based cathode materials for a solid-state battery that will no longer require scarce transition metals found in mines," said Yao. "This research is a step forward in increasing EV battery energy density using this more sustainable alternative." Yao is also Principal Investigator with the Texas Center for Superconductivity at UH (TcSUH).

Any battery includes an anode, also known as the negative electrode, and a cathode, also known as the positive electrode, separated by a porous membrane. Lithium ions flow through an ionic conductor - the electrolyte - while electrons travel through the external circuit; together this allows the battery to charge and discharge, generating electricity for, say, a vehicle.

Electrolytes are usually liquid, but that is not necessary - they can also be solid, a relatively new concept. This novelty, combined with a lithium-metal anode, can prevent short-circuiting, improve energy density and enable faster charging.

Cathodes typically determine the capacity and voltage of a battery and are consequently the most expensive part of batteries due to the use of scarce materials like cobalt, which is projected to face a 65,000-ton supply deficit by 2030. Cobalt-based cathodes are almost exclusively used in solid-state batteries due to their excellent performance; only recently have organic compound-based lithium batteries (OBEM-Li) emerged as a more abundant, cleaner alternative that is more easily recycled.

"There is major concern surrounding the supply chain of lithium-ion batteries in the United States," said Yao. "In this work, we show the possibility of building high energy-density lithium batteries by replacing transition metal-based cathodes with organic materials obtained from either an oil refinery or biorefinery, both of which the U.S. has the largest capacity in the world."

Cobalt-based cathodes deliver 800 Wh/kg of material-level specific energy - voltage multiplied by capacity - as do OBEM-Li batteries, as the team first demonstrated in an earlier publication. Previous OBEM-Li batteries, however, were limited to a low mass fraction of active material due to non-ideal cathode microstructure, which capped their total energy density.

Yao and Zhang uncovered how to improve electrode-level energy density in OBEM-Li batteries by optimizing the cathode microstructure for improved ion transport within the cathode. To do this the microstructure was altered using a familiar solvent - ethanol. The organic cathode used was pyrene-4,5,9,10-tetraone, or PTO.

"Cobalt-based cathodes are often favored because the microstructure is naturally ideal but forming the ideal microstructure in an organic-based solid-state battery is more challenging," said Zhang.

On an electrode level, the solvent-assisted microstructure increased energy density to 300 Wh/kg, compared with just under 180 Wh/kg for the dry-mixed microstructure, by significantly improving the utilization rate of the active material. Previously, the amount of active material could be increased, but the utilization rate remained low, near 50%. With Zhang's contribution, that utilization rate improved to 98% and resulted in higher energy density.
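As a rough back-of-envelope sketch (not a calculation from the paper), electrode-level energy density can be approximated as the material-level specific energy scaled by the active-material mass fraction and its utilization rate; the 0.38 mass fraction below is a hypothetical value chosen only so the high-utilization case lands near the reported ~300 Wh/kg.

```python
# Illustrative sketch: how active-material utilization scales
# electrode-level energy density. Only the 800 Wh/kg material-level
# figure comes from the article; other numbers are hypothetical.

def electrode_energy_density(material_wh_per_kg, active_mass_fraction, utilization):
    """Simplified estimate: material specific energy times the fraction
    of electrode mass that is active and actually utilized."""
    return material_wh_per_kg * active_mass_fraction * utilization

material = 800.0      # Wh/kg, PTO material-level specific energy (from article)
mass_fraction = 0.38  # hypothetical active-material mass fraction

low = electrode_energy_density(material, mass_fraction, 0.50)   # ~152 Wh/kg
high = electrode_energy_density(material, mass_fraction, 0.98)  # ~298 Wh/kg
print(f"{low:.0f} Wh/kg -> {high:.0f} Wh/kg at 98% utilization")
```

The point of the sketch is the linear scaling: nearly doubling utilization nearly doubles electrode-level energy density, mirroring the jump from just under 180 Wh/kg to 300 Wh/kg reported above.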

"Initially I was examining the chemical properties of PTO, which I knew would oxidize the sulfide electrolyte," Zhang said. "This led to a discussion on how we might be able to take advantage of this reaction. Together with colleagues at Rice University, we investigated the chemical composition, spatial distribution and electrochemical reversibility of the cathode-solid electrolyte interphase, which can provide us hints as to why the battery could cycle so well without capacity decay."

Over the last ten years, the cost of EV batteries has declined to nearly 10% of the original, making them commercially viable - a lot can happen in a decade. This research is a pivotal step toward more sustainable EVs and a springboard for the next decade of research. At this rate, perhaps as literally as figuratively, the future looks much greener on the other side.

Credit: 
University of Houston

Managed retreat: A must in the war against climate change

video: New research from the University of Delaware's A.R. Siders and Katharine Mach, from the University of Miami, found that managed retreat can't be seen as a last resort -- it must be paired with existing (flood walls) or future (floating cities) measures to be effective.

Image: 
Jeffrey C. Chase/ University of Delaware

University of Delaware disaster researcher A.R. Siders said it's time to put all the options on the table when it comes to discussing climate change adaptation.

Managed retreat -- the purposeful movement of people, buildings and other assets from areas vulnerable to hazards -- has often been considered a last resort. But Siders said it can be a powerful tool for expanding the range of possible solutions to cope with rising sea levels, flooding and other climate change effects when used proactively or in combination with other measures.

Siders, a core faculty member in UD's Disaster Research Center, and Katharine J. Mach, associate professor at the University of Miami Rosenstiel School of Marine and Atmospheric Science, provide a prospective roadmap for reconceptualizing the future using managed retreat in a new paper published online in Science on June 17, 2021.

"Climate change is affecting people all over the world, and everyone is trying to figure out what to do about it. One potential strategy, moving away from hazards, could be very effective, but it often gets overlooked," said Siders, assistant professor in the Joseph R. Biden, Jr. School of Public Policy and Administration and the Department of Geography and Spatial Sciences. "We are looking at the different ways society can dream bigger when planning for climate change and how community values and priorities play a role in that."

Retreat does not mean defeat

Managed retreat has been happening for decades all over the United States at a very small scale with state and/or federal support. Siders pointed to Hurricanes Harvey and Florence as weather events that caused homeowners near the Gulf of Mexico to seek government support for relocation. Locally, towns such as Bowers Beach, near the Delaware coast, have used buyouts to remove homes and families from flood-prone areas, an idea that Southbridge in Wilmington is also exploring.

People often oppose the idea of leaving their homes, but Siders said thinking seriously about managed retreat sooner and in context with other available tools can reinforce decisions by prompting difficult conversations. Even if communities decide to stay in place, identifying the things community members value can help them decide what they want to maintain and what they purposely want to change.

"If the only tools you think about are beach nourishment and building walls, you're limiting what you can do, but if you start adding in the whole toolkit and combining the options in different ways, you can create a much wider range of futures," she said.

In the paper, Siders and Mach argue that long-term adaptation will involve retreat. Even traditionally accepted visions of the future, like building flood walls and elevating threatened structures, will involve small-scale retreat to make space for levees and drainage. Larger-scale retreat may be needed for more ambitious transformations, such as building floating neighborhoods or cities, turning roads into canals in an effort to live with the water, or building more dense, more compact cities on higher ground.

Some, but not all, of these futures currently exist.

In the Netherlands, the municipality of Rotterdam has installed floating homes in Nassau harbor that move with the tides, providing a sustainable waterfront view for homeowners while making room for public-friendly green space along the water. In New York City, one idea under consideration is building into the East River to accommodate a floodwall. Both cities are using combination strategies that leverage more than one adaptation tool.

Adaptation decisions don't have to be either/or decisions. However, it is important to remember that these efforts take time, so planning should begin now.

"Communities, towns, and cities are making decisions now that affect the future," said Siders. "Locally, Delaware is building faster inside the floodplain than outside of it. We are making plans for beach nourishment and where to build seawalls. We're making these decisions now, so we should be considering all the options on the table now, not just the ones that keep people in place."

According to Siders, the paper is a conversation starter for researchers, policymakers, communities and residents who are invested in helping communities thrive amid a changing climate. These discussions, she said, shouldn't focus solely on where we need to move from, but also on where we should avoid building, where new building should be encouraged, and how we should build differently.

"Managed retreat can be more effective in reducing risk, in ways that are socially equitable and economically efficient, if it is a proactive component of climate-driven transformations," said Mach. "It can be used to address climate risks, along with other types of responses like building seawalls or limiting new development in hazard-prone regions."

Globally, Siders said the U.S. is in a privileged position, in terms of the available space, money and resources, relative to other countries facing more complicated futures. The Republic of Kiribati, a chain of islands in the central Pacific Ocean, for example, is expected to be under water in the future. Some of its islands already are uninhabitable.

The Kiribati government has bought land in Fiji for relocation and is developing programs with Australia and New Zealand to provide skilled workforce training so the Kiribati people can migrate with dignity when the time comes. Challenges remain, though, since not everyone is on board with moving.

In a recent special issue of the Journal of Environmental Studies and Sciences, edited and introduced by Siders and Idowu (Jola) Ajibade at Portland State University, researchers examined the social justice implications of managed retreat in examples from several countries, including the U.S., Marshall Islands, New Zealand, Peru, Sweden, Taiwan, Austria and England. The scientists explored how retreat affects groups of people and, in the U.S., specifically considered how retreat affects marginalized populations.

So, how can society do better? According to Siders, it starts with longer-term thinking.

"It's hard to make good decisions about climate change if we are thinking 5-10 years out," said Siders. "We are building infrastructure that lasts 50-100 years; our planning scale should be equally long."

Siders will give a keynote address and research presentation on the topic at a virtual managed retreat conference at Columbia University, June 22-25, 2021.

Credit: 
University of Delaware

Sacred natural sites protect biodiversity in Iran

image: How much do traditional practices contribute to the protection of local biodiversity? Why and how are sacred groves locally valued and protected, and how can this be promoted and harnessed for environmental protection? Working together with the University of Kurdistan, researchers at the University of Göttingen and the University of Kassel have examined the backgrounds of this form of local environmental protection in Iran.

Image: 
Zahed Shakeri

How much do traditional practices contribute to the protection of local biodiversity? Why and how are sacred groves locally valued and protected, and how can this be promoted and harnessed for environmental protection? Working together with the University of Kurdistan, researchers of the University of Göttingen and the University of Kassel have examined the backgrounds of this form of local environmental protection in Baneh County, Iran.

"Around the world, local communities are voluntarily protecting certain parts of their surroundings due to religious reasons - be it in Ethiopia, Morocco, Italy, China or India", reports Professor Tobias Plieninger, head of the section Social-ecological Interactions in Agricultural Systems at the universities of Kassel and Göttingen. Sacred natural sites are places where traditional myths and stories meet local ecological knowledge and environmental protection. Beyond state-based protection programs, these form a network of informal nature reserves.

In the contested border areas between Iran and Iraq, state-run environmental protection programs are often failing, while natural resources are under a lot of pressure. Even in such areas of conflict, patches of highly biodiverse woodlands still exist thanks to informal conservation traditions - in the form of decades-old sacred natural sites, some of which are known as the 'sacred groves'.

In the Middle East, sacred groves are quite common, but there has been very little research into these biocultural hotspots. They usually belong to a mosque and serve as village cemeteries whose use is strictly regulated. Even though they usually cover only a small area - one hectare on average - they are comparatively rich in biodiversity, provide numerous ecosystem services, and are of great cultural and spiritual importance to local communities.

Local people regard them as the abodes of their ancestors. Dr. Zahed Shakeri, who accompanied the project as a postdoctoral researcher and grew up in the region himself, reports on the numerous myths and legends that surround these sites and demand careful maintenance as well as respectful behavior. "Our research group developed a fascination for the botanical treasures of these sites," says Plieninger. In a vegetation study, they found that the taxonomic diversity in sacred groves is much higher than in neighboring cultivated lands. The vegetation composition, too, is fundamentally different.

"The 22 sacred groves examined contained 20% of the flora of the whole region. Moreover, they host multiple rare and endangered plants and represent complex niches for threatened animals," Shakeri reports. "Due to this taxonomic diversity, sacred groves can serve as an important complement to formally protected areas in the region, and as baselines in their reconstruction." Today, due to changes in customary rights, population growth and the loss of traditional faiths, the number and condition of such sacred natural sites are decreasing around the world. Local people's perceptions of sacred groves, and the reasons for their relatively good condition in the region, were therefore also a subject of this research.

On the basis of interviews with 205 residents from 25 villages, the research group identified people's key motivations for protecting the areas: spiritual values in particular, along with the preservation of cultural and spiritual heritage and of local biodiversity. The importance of taboos also became clear: they prohibit the use of natural resources (for instance, forest clearance, hunting and livestock grazing) and road construction, and they also regulate general behavior within these sites.

Even though these social values and taboos are considered relatively stable in the province of Kurdistan, the interviewees repeatedly referred to the threatened situation of the groves in the region. Elderly and rural people, women, and people with traditional lifestyles in particular were regarded as the holders of these values and taboos. "Protection programs could support these groups to defend and revive their customs. At the same time, young and urban people with modern lifestyles represent an important target group for awareness-raising," Shakeri summarizes.

The example of sacred groves demonstrates that social dynamics, and especially cultural values, deserve greater attention in environmental protection: "Such a biocultural approach to conservation, which considers different worldviews and knowledge systems, could translate social taboos and the related land-use practices into socially acceptable and environmentally effective conservation outcomes", Plieninger concludes.

Credit: 
University of Göttingen

Numerical study first to reveal origin of 'motion of the ocean' in the straits of Florida

image: Animation shows the formation of eddies in the Straits of Florida.

Image: 
Florida Atlantic University/Harbor Branch Oceanographic Institute

Ocean currents sometimes pinch off sections that create circular currents of water called "eddies." This "whirlpool" motion moves nutrients to the water's surface, playing a significant role in the health of the Florida Keys coral reef ecosystem.

Using a numerical model that simulates ocean currents, researchers from Florida Atlantic University's Harbor Branch Oceanographic Institute and collaborators from the Alfred-Wegener-Institute in Germany and the Institut Universitaire Europeen De La Mer/Laboratoire d'Océanographie Physique et Spatiale in France are shedding light on this important "motion of the ocean." They have conducted a first-of-its-kind study identifying the mechanisms behind the formation of sub-mesoscale eddies in the Straits of Florida, which have important environmental implications.

Despite the swift flow of the Florida Current, which flows through the Straits of Florida and connects the Loop Current in the Gulf of Mexico to the Gulf Stream in the western Atlantic Ocean, eddies provide a mechanism for the retention of marine organisms such as fish and coral larvae. Because they trap nutrient-rich West Florida Shelf waters, they provide habitat to many reef and pelagic species within the region of the Florida Keys Reef Tract, sustaining the very high productivity of this region.

Moreover, despite the tendency of West Florida Shelf waters to overflow into the Straits of Florida, the formation of eddies provides a mechanism that limits the cross-shelf transport of nutrient-laden waters. As a result, the formation of eddies stops the export of West Florida Shelf waters across the Straits of Florida, preventing events such as red tides from crossing over to Cuba or the Bahamas. Conversely, toxic red-tide waters emanating from the shelf remain longer in the vicinity of the Florida Keys Reef Tract coral reef ecosystem, adversely affecting its health.

These small-scale frontal eddies are frequently observed and present a wide variety of numbers, shapes and sizes, suggesting different origins and formation mechanisms. Their journey through the Straits of Florida is at times characterized by the formation and presence of mesoscale, but mostly sub-mesoscale, frontal eddies on the cyclonic side of the current.

The study, published in the Journal of Physical Oceanography, provides a comprehensive overview and understanding of the Straits of Florida shelf slope dynamics based on a realistic two-way nested high-resolution Regional Oceanic Modeling System (ROMS) simulation of the South Florida oceanic region. The full two-way nesting allowed the interaction of multiscale dynamics across the nest boundaries.

Results showed that the formation of sub-mesoscale frontal eddies in the Straits of Florida is associated with the sloshing of the Florida Current - the oscillation of the distance between the current core and the shelf. When the Florida Current core is pushed up against the shelf, the shear on the shelf increases and sub-mesoscale frontal eddies can form through barotropic instability. When the current relaxes away from the shelf, baroclinic instability is instead likely to form sub-mesoscale eddies. Unlike barotropic instability, which is shear-driven, baroclinic instability is driven by changes in density anomalies.

"In the Straits of Florida, eddies smaller than their open ocean relative are formed. Those eddies, called sub-mesoscale eddies, are common and can be easily observed in ocean color imagery," said Laurent Chérubin, Ph.D., senior author and an associate research professor, FAU Harbor Branch. "Unlike the larger open ocean mesoscale eddies, they are not in geostrophic balance, meaning that their circulation is not sustained by the balance between the pressure gradient and the Coriolis forces. Instead, some of the frontal eddies in the Straits of Florida are in gradient wind balance, which indicates that a third force, the centrifugal force, is large enough to modify the geostrophic balance."
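In standard textbook notation (a sketch of the balances Chérubin describes, not equations from the paper), for a current of speed v curving with radius r, with pressure p, water density ρ and Coriolis parameter f:

(1/ρ) ∂p/∂r = f v (geostrophic balance)
(1/ρ) ∂p/∂r = f v + v²/r (gradient wind balance, cyclonic case)

For large, gently curved mesoscale eddies the centrifugal term v²/r is negligible and the geostrophic balance holds; for small, tightly curved sub-mesoscale eddies it becomes comparable to the Coriolis term, so the gradient wind balance applies instead.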

The Florida Current is part of the western branch of the wind-driven North Atlantic anticyclonic gyre, which is intensified on the western side of the North Atlantic basin compared to its eastern side. Similar currents are found on the western sides of other ocean basins, such as the Agulhas Current in the southern Indian Ocean and the Kuroshio in the northern Pacific Ocean. They are called boundary currents because they impinge on the continental shelf and, as such, undergo a significant amount of friction on the ocean floor. This friction, which acts vertically and horizontally on the boundary current, contributes to the formation of a sheared boundary layer.

"Our study shows that this shear layer can become unstable and form eddies. This process is in fact a pathway for the dissipation of wind energy injected in the ocean. Therefore, in the Straits of Florida, eddies smaller than their open ocean relative are formed," said Chérubin.

In addition to sub-mesoscale eddies formed locally in the Straits of Florida, there are incoming mesoscale eddies that transit in the Straits of Florida, such as the Tortugas Gyre.

"Findings from our research also show that mesoscale eddies can be squeezed on the shelf and transformed into sub-mesoscale eddies when the Florida Current is in its protracted position or remains relatively unaffected if the Florida Current is retracted from the shelf," said Chérubin.

Credit: 
Florida Atlantic University

Prototype may diagnose common pregnancy complications by monitoring placental oxygen

image: The prototype oxygen sensor next to a diagram of a fetus with forward-facing placenta.

Image: 
Thien Nguyen, Ph.D., Post Doctoral Fellow, Section on Translational Biophotonics, Eunice Kennedy Shriver National Institute of Child Health and Human Development, NIH

Researchers at the National Institutes of Health have developed a prototype device that could potentially diagnose pregnancy complications by monitoring the oxygen level of the placenta. The device sends near-infrared light through the pregnant person's abdomen to measure oxygen levels in the arterial and venous network in the placenta. The method was used to study anterior placenta, which is attached to the front wall of the uterus. The researchers described their results as promising but added that further study is needed before the device could be used routinely.

The small study was conducted by Amir Gandjbakhche, Ph.D., of the Section on Translational Biophotonics at NIH's Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), and colleagues. It appears in Biomedical Optics Express.

The researchers devised mathematical methods to study the passage of light through the skin, abdominal wall and uterine tissue to reach the placenta and calculate its oxygen levels. Currently, the device cannot monitor oxygen in women with a posterior placenta, which is attached to the back wall of the uterus, because the distance is too great for the light to travel. However, an anterior placenta is associated with a higher rate of complications - such as postpartum hemorrhage and an increased need for labor induction or cesarean delivery - than a posterior placenta.

The researchers enrolled 12 pregnant women with an anterior placenta in the study. Of these, five had a pregnancy complication, including hypertension, a short cervix and polyhydramnios (excess amniotic fluid). On average, the women with complications had a placental oxygen level of 69.6%, a statistically significant difference from the 75.3% seen in the healthy pregnancies in the study. The authors see their results as a first step toward continuously monitoring placental oxygen levels to assess maternal and fetal health.

Credit: 
NIH/Eunice Kennedy Shriver National Institute of Child Health and Human Development

Vortex, the key to information processing capability: Virtual physical reservoir computing

image: A: Outline of the study. B: Typical fluid flow at each Reynolds number. C: Inputs along time sequence and the results of NARMA2 and NARMA3 models. Target values are in black while values by virtual physical reservoir computing using vortices are in red. D: Values of errors (normalized mean square errors, NMSE) at each Reynolds number in NARMA2 and NARMA3 models. The error is minimal with Reynolds number being around 40.

Image: 
Kanazawa University

[Background]

In recent years, physical reservoir computing*1), one of the new information processing technologies, has attracted much attention. It is a physical implementation of reservoir computing, a learning method derived from recurrent neural network (RNN)*2) theory. It performs computation by treating a physical system as a huge RNN, outsourcing the main operations to the dynamics of the physical system, which forms the physical reservoir. Its advantage is that optimization can be obtained almost instantaneously with limited computational resources: only the linear, static readout weightings between the physical reservoir and the output are adjusted, without optimizing weights by back propagation. However, since the information processing capability depends on the capacity of the physical reservoir, it is important that this capacity is investigated and optimized. Furthermore, when designing a physical reservoir with high information processing capability, numerical simulation is expected to reduce experimental costs.
Well-known examples of physical reservoir computing include applications to soft materials, photonics, spintronics, and quantum systems, while in recent years much attention has been paid to waves: neuromorphic devices that simulate brain functions using non-linear waves have been proposed (see references 1-3). The flow of fluids such as water and air is a familiar physical system that nevertheless shows varied and complicated patterns, which have been thought to have high information processing capability. However, virtual physical reservoir computing by numerical simulation, and investigation of the information processing capability of fluid flow phenomena, had not been realized because of the relatively high numerical computational cost. The relationship between flow vortices and information processing capability therefore remained unknown.
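The readout-only training described above can be sketched with a conventional software reservoir (an echo state network). This is an illustrative toy in Python, not the fluid reservoir of the study: the recurrent weights are fixed at random and only a static linear readout is fitted, in closed form, by ridge regression.

```python
# Minimal echo-state-network sketch: the reservoir is never trained;
# only the linear readout is fitted by ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_steps = 100, 500

# Fixed random reservoir: input and recurrent weights are scaled, not trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

u = rng.uniform(0, 0.5, n_steps)      # random input sequence
x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])  # reservoir dynamics (untrained)
    states[t] = x

y_target = np.roll(u, 1)              # toy task: recall the previous input
y_target[0] = 0.0

# Static linear readout fitted in closed form (ridge regression),
# with no back propagation through the reservoir:
lam = 1e-6
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ y_target)
y_pred = states @ W_out
print("train MSE:", np.mean((y_pred - y_target) ** 2))
```

The same recipe applies to a physical reservoir: replace the simulated states with measured dynamics (in this study, flow velocity and pressure at selected points) and fit only the readout.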

[Results]

In this study, Prof. Hirofumi Notsu and a graduate student at Kanazawa University, in collaboration with Prof. Kohei Nakajima of the University of Tokyo, investigated fluid flow phenomena as a physical system, in particular the well-understood flow that occurs around a cylinder. This physical system is governed by the incompressible Navier-Stokes equations*3), which describe fluid flow and include the Reynolds number*4), a parameter indicative of the system's characteristics. The system was implemented virtually by spatial two-dimensional numerical simulation using the stabilized Lagrange-Galerkin method*5), and the dynamics of flow velocity and pressure at selected points in the downstream region of the cylinder were used as the physical reservoir. The information processing capability was evaluated using the NARMA model*6) (see Figure).
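The NARMA benchmark drives a reservoir with a random input sequence and asks it to reproduce a nonlinear target with memory. A commonly used second-order (NARMA2) definition is sketched below; exact conventions vary between papers, so the coefficients here are the usual textbook ones, not necessarily those of this study.

```python
# NARMA2 target generation and the NMSE score used to rate predictions.
import numpy as np

def narma2(u):
    """y[t+1] = 0.4*y[t] + 0.4*y[t]*y[t-1] + 0.6*u[t]**3 + 0.1"""
    y = np.zeros(len(u))
    for t in range(1, len(u) - 1):
        y[t + 1] = 0.4 * y[t] + 0.4 * y[t] * y[t - 1] + 0.6 * u[t] ** 3 + 0.1
    return y

rng = np.random.default_rng(1)
u = rng.uniform(0, 0.5, 200)   # inputs drawn as in typical NARMA setups
y = narma2(u)

def nmse(pred, target):
    """Normalized mean square error, the error measure in the figure."""
    return np.mean((pred - target) ** 2) / np.var(target)

print(nmse(np.full_like(y, y.mean()), y))  # trivial mean predictor: ~1
```

A trivial predictor that always outputs the target mean scores an NMSE of about 1, so a useful reservoir must score well below that; in the study, the NMSE was minimal at a Reynolds number around 40.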

It is known that in the flow of fluid around a cylinder, as the Reynolds number value increases, twin vortices formed in the downstream region of the cylinder gradually become larger and eventually form a Karman vortex street, the alternate shedding of vortices. In this study, it was clarified that at the Reynolds number where the twin vortices are maximal but just before the transition to a Karman vortex street, the information processing capability is the highest. In other words, before the transition to a Karman vortex street, the information processing capability increases as the size of the twin vortices increases. On the other hand, since the echo state property*7) that guarantees the reproducibility of the reservoir computing cannot be maintained when the transition to the Karman vortex street takes place, it becomes clear that the Karman vortex street cannot be used for computing.

[Future prospects]

These findings on fluid flow vortices and information processing capability are expected to be useful when the processing capability of physical reservoirs is expanded using fluid flow in the future, e.g. in the development of the recently reported wave-based neuromorphic devices. Although the numerical cost of simulating fluid flow is relatively high, this study handled macroscopic vortices that are physically easy to understand and clarified the relationship between vortices and information processing capability by implementing physical reservoir computing virtually, with a spatially two-dimensional numerical simulation. Virtual physical reservoir computing, which had previously been applied mostly to physical systems describable as one-dimensional, has thus been extended to systems with two or more spatial dimensions. The results should allow virtual investigation of the information processing capabilities of a much wider range of physical systems. In addition, since vortices are revealed to be the key to information processing capability, research on creating and maintaining vortices is expected to be further promoted.

Credit: 
Kanazawa University

Study identifies trigger for 'head-to-tail' axis development in human embryo

image: Professor Zernicka-Goetz in the lab.

Image: 
University of Cambridge

Scientists have identified key molecular events in the developing human embryo between days 7 and 14 - one of the most mysterious, yet critical, stages of our development.

The second week of gestation represents a critical stage of embryo development, or embryogenesis. Failure of development during this time is one of the major causes of early pregnancy loss. Understanding more about it will help scientists to understand how it can go wrong, and take steps towards being able to fix problems.

The pre-implantation period, before the developing embryo implants into the mother's womb, has been studied extensively in human embryos in the lab. On the seventh day the embryo must implant into the womb to survive and develop. Very little is known about the development of the human embryo once it implants, because it becomes inaccessible for study.

Pioneering work by Professor Magdalena Zernicka-Goetz and her team developed a technique, reported in 2016, to culture human embryos outside the body of the mother beyond implantation. This enabled human embryos to be studied up to day 14 of development for the first time.

In a new study, the team collaborated with colleagues at the Wellcome Sanger Institute to reveal what happens at the molecular level during this early stage of embryogenesis. Their findings provide the first evidence that a group of cells outside the embryo, known as the hypoblast, send a message to the embryo that initiates the development of the head-to-tail body axis.

When the body axis begins to form, the symmetrical structure of the embryo starts to change. One end becomes committed to developing into the head end, and the other the 'tail'.

The new results, published today in the journal Nature Communications, reveal that the molecular signals involved in the formation of the body axis show similarities to those in animals, despite significant differences in the positioning and organisation of the cells.

"We have revealed the patterns of gene expression in the developing embryo just after it implants in the womb, which reflect the multiple conversations going on between different cell types as the embryo develops through these early stages," said Professor Magdalena Zernicka-Goetz in the University of Cambridge's Department of Physiology, Development and Neuroscience, and senior author of the report.

She added: "We were looking for the gene conversation that will allow the head to start developing in the embryo, and found that it was initiated by cells in the hypoblast - a disc of cells outside the embryo. They send the message to adjoining embryo cells, which respond by saying 'OK, now we'll set ourselves aside to develop into the head end.'"

The study identified the gene conversations in the developing embryo by sequencing the code in the thousands of messenger RNA molecules made by individual cells. They captured the evolving molecular profile of the developing embryo after implantation in the womb, revealing the progressive loss of pluripotency (the ability of the embryonic cells to give rise to any cell type of the future organism) as the fates of different cells are determined.

"Our goal has always been to enable insights to very early human embryo development in a dish, to understand how our lives start. By combining our new technology with advanced sequencing methods we have delved deeper into the key changes that take place at this incredible stage of human development, when so many pregnancies unfortunately fail," said Zernicka-Goetz.

Credit: 
University of Cambridge

AI system-on-chip runs on solar power

video: CSEM engineers have developed an integrated circuit that can carry out complicated artificial-intelligence operations like face, voice and gesture recognition and cardiac monitoring. Powered by either a tiny battery or a solar panel, it processes data at the edge and can be configured for use in just about any type of application.

Image: 
CSEM

AI is used in an array of extremely useful applications, such as predicting a machine's lifetime through its vibrations, monitoring the cardiac activity of patients and incorporating facial recognition capabilities into video surveillance systems. The downside is that AI-based technology generally requires a lot of power and, in most cases, must be permanently connected to the cloud, raising issues related to data protection, IT security and energy use.

CSEM engineers may have found a way to get around those issues, thanks to a new system-on-chip they have developed. It runs on a tiny battery or a small solar cell and executes AI operations at the edge - i.e., locally on the chip rather than in the cloud. What's more, their system is fully modular and can be tailored to any application where real-time signal and image processing is required, especially when sensitive data are involved. The engineers will present their device at the prestigious 2021 VLSI Circuits Symposium in Kyoto this June.

The CSEM system-on-chip works through an entirely new signal processing architecture that minimizes the amount of power needed. It consists of an ASIC chip with a RISC-V processor (also developed at CSEM) and two tightly coupled machine-learning accelerators: one for face detection, for example, and one for classification. The first is a binary decision tree (BDT) engine that can perform simple tasks but cannot carry out recognition operations.

"When our system is used in facial recognition applications, for example, the first accelerator will answer preliminary questions like: Are there people in the images? And if so, are their faces visible?" says Stéphane Emery, head of system-on-chip research at CSEM. "If our system is used in voice recognition, the first accelerator will determine whether noise is present and if that noise corresponds to human voices. But it can't make out specific voices or words - that's where the second accelerator comes in."

The second accelerator is a convolutional neural network (CNN) engine that can perform these more complicated tasks - recognizing individual faces and detecting specific words - but it also consumes more energy. This two-tiered data processing approach drastically reduces the system's power requirement, since most of the time only the first accelerator is running.
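The power saving of this two-tiered approach is easy to see in a sketch. The snippet below models a cheap always-on gate that wakes an expensive classifier only on positive frames; the function names and per-inference energy costs are hypothetical illustrations, not CSEM's figures.

```python
import random

random.seed(1)

# Hypothetical per-inference energy costs (illustrative numbers only).
COST_BDT = 1.0      # cheap binary-decision-tree gate, always running
COST_CNN = 50.0     # expensive CNN classifier, woken only when needed

def bdt_gate(frame):
    """Stage 1: cheap check, e.g. 'is any face present?'"""
    return frame["face_present"]

def cnn_classify(frame):
    """Stage 2: expensive recognition, e.g. 'whose face is it?'"""
    return frame["identity"]

def process(frames):
    energy, results = 0.0, []
    for f in frames:
        energy += COST_BDT
        if bdt_gate(f):                 # wake the CNN only on positives
            energy += COST_CNN
            results.append(cnn_classify(f))
    return energy, results

# Simulated stream in which faces appear in roughly 5% of frames.
frames = [{"face_present": random.random() < 0.05, "identity": "alice"}
          for _ in range(1000)]
energy, hits = process(frames)
always_on = 1000 * (COST_BDT + COST_CNN)
print(f"cascade energy: {energy:.0f} vs always-on CNN: {always_on:.0f}")
```

Because the expensive stage runs only on the rare positive frames, the average energy per frame stays close to the cost of the cheap gate alone.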

As part of their research, the engineers enhanced the performance of the accelerators themselves, making them adaptable to any application where time-based signal and image processing is needed. "Our system works in basically the same way regardless of the application," says Emery. "We just have to reconfigure the various layers of our CNN engine."

The CSEM innovation opens the door to an entirely new generation of devices with processors that can run independently for over a year. It also sharply reduces the installation and maintenance costs for such devices, and enables them to be used in places where it would be hard to change the battery.

Credit: 
Swiss Center for Electronics and Microtechnology - CSEM

New invention keeps qubits of light stable at room temperature

As almost all our private information is digitalized, it is increasingly important that we find ways to protect our data and ourselves from being hacked.

Quantum Cryptography is the researchers' answer to this problem, and more specifically a certain kind of qubit - consisting of single photons: particles of light.

Single photons or qubits of light, as they are also called, are extremely difficult to hack.

However, in order for these qubits of light to be stable and work properly, they need to be stored at temperatures close to absolute zero - that is, minus 270°C - something that requires huge amounts of power and resources.

Yet in a recently published study, researchers from the University of Copenhagen demonstrate a new way to store these qubits at room temperature for a hundred times longer than ever shown before.

"We have developed a special coating for our memory chips that helps the quantum bits of light to be identical and stable while at room temperature. In addition, our new method enables us to store the qubits for a much longer time, milliseconds instead of microseconds, something that has not been possible before. We are really excited about it," says Eugene Simon Polzik, professor in quantum optics at the Niels Bohr Institute.

The special coating of the memory chips makes it much easier to store the qubits of light without big freezers, which are troublesome to operate and require a lot of power.

Therefore, the new invention will be cheaper and more compatible with the demands of the industry in the future.

"The advantage of storing these qubits at room temperature is that it does not require liquid helium or complex laser systems for cooling. It is also a much simpler technology that can be implemented more easily in a future quantum internet," says Karsten Dideriksen, a UCPH PhD student on the project.

A special coating keeps the qubits stable

Normally warm temperatures disturb the energy of each quantum bit of light.

"In our memory chips, thousands of atoms are flying around emitting photons also known as qubits of light. When the atoms are exposed to heat, they start moving faster and collide with one another and with the walls of the chip. This leads them to emit photons that are very different from each other. But we need them to be exactly the same in order to use them for safe communication in the future," explains Eugene Polzik and adds:

"That is why we have developed a method that protects the atomic memory with the special coating for the inside of the memory chips. The coating consists of paraffin that has a wax like structure and it works by softening the collision of the atoms, making the emitted photons or qubits identical and stable. Also we used special filters to make sure that only identical photons were extracted from the memory chips".

Even though the new discovery is a breakthrough in quantum research, it still needs more work.

"Right now we produce the qubits of light at a low rate - one photon per second, while cooled systems can produce millions in the same amount of time. But we believe there are important advantages to this new technology and that we can overcome this challenge in time," Eugene concludes.

Credit: 
University of Copenhagen - Faculty of Science

Simple urine test may help early detection of brain tumors

image: Nanowire scaffolds for the screening of microRNAs from patient-derived tumor-organoid and urine in patients with central nervous system tumors

Image: 
Takao Yasui & Atsushi Natsume

A recent study by Nagoya University researchers revealed that microRNAs in urine could be a promising biomarker to diagnose brain tumors. Their findings, published in the journal ACS Applied Materials & Interfaces, have indicated that regular urine tests could help early detection and treatment of brain tumors, possibly leading to improved patient survival.

Early diagnosis of brain tumors is often difficult, partly because most people undergo a brain CT or MRI scan only after the onset of neurological deficits, such as immobility of limbs, and incapability of speech. When brain tumors are detected by CT or MRI, in many cases, they have already grown too large to be fully removed, which could lower patients' survival rate. From this perspective, accurate, easy, and inexpensive methods of early brain tumor detection are strongly desired.

As a diagnostic biomarker of cancerous tumors, microRNAs (tiny molecules of nucleic acid) have recently received considerable attention. MicroRNAs are secreted from various cells, and exist in a stable and undamaged condition within extracellular vesicles in biological fluids like blood and urine. Nagoya University researchers focused on microRNAs in urine as a biomarker of brain tumors. "Urine can be collected easily without putting a burden on the human body," says Nagoya University Associate Professor Atsushi Natsume, a corresponding author of the study.

"Urine-based liquid biopsy hadn't been fully investigated for patients with brain tumors, because none of the conventional methodologies can extract microRNAs from urine efficiently in terms of varieties and quantities. So, we decided to develop a device capable of doing it."

The new device they developed is equipped with 100 million zinc oxide nanowires, which can be sterilized and mass-produced, and is therefore suitable for actual medical use. The device can extract a significantly greater variety and quantity of microRNAs from only a milliliter of urine compared to conventional methods.

Their analysis of microRNAs collected using the device from the urine of patients with brain tumors and non-cancer individuals revealed that many microRNAs derived from brain tumors actually exist in urine in a stable condition.

Next, the researchers examined whether urinary microRNAs can serve as a biomarker of brain tumors, using their diagnostic model based on the expression of microRNAs in urine samples from patients with brain tumors and non-cancer individuals. The results showed that the model can distinguish the patients from non-cancer individuals at a sensitivity of 100% and a specificity of 97%, regardless of the malignancy and size of tumors. The researchers thus concluded that microRNAs in urine are a promising biomarker of brain tumors.
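The reported accuracy figures correspond to simple confusion-matrix ratios: sensitivity is the fraction of tumor patients correctly flagged, specificity the fraction of non-cancer individuals correctly cleared. A minimal illustration with a made-up toy cohort (the counts below are invented only to reproduce the arithmetic, not the study's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """y_true / y_pred: 1 = tumor, 0 = non-cancer."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy cohort mirroring the reported figures: every patient flagged
# (100% sensitivity) and 3 false positives among 100 controls (97% specificity).
y_true = [1] * 20 + [0] * 100
y_pred = [1] * 20 + [1] * 3 + [0] * 97
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```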

The researchers hope that their findings will contribute to early diagnosis of aggressive types of brain cancer, like glioblastomas, as well as other types of cancer. Dr. Natsume says, "In the future, by a combination of artificial intelligence and telemedicine, people will be able to know the presence of cancer, whereas doctors will be able to know the status of cancer patients just with a small amount of their daily urine."

Credit: 
Nagoya University

Unitized regenerative fuel cells for improved hydrogen production and power generation

image: Schematic fabrication procedure of the amphiphilic Ti PTLs.

Image: 
Korea Institute of Science and Technology(KIST)

Green hydrogen, a source of clean energy that can be generated without using fossil fuels, has recently gained immense attention as it can be potentially used to promote carbon neutrality. Korean researchers have succeeded in improving the efficiency of unitized regenerative fuel cells that can be used to efficiently produce green hydrogen and generate power.

The unitized regenerative fuel cells offer both a hydrogen-production mode and a fuel cell mode. They are eco-friendly, cost-effective, independent energy storage and power generation devices that require less space to operate than the separate installation of electrolysis devices and fuel cells.

When the amount of electricity generated from renewable energy sources such as sunlight and wind power is larger than the amount of electricity in demand, the electrolysis cell mode is used to produce hydrogen to store energy. When the demand for electricity is higher, the fuel cell mode can be used to generate power.
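The mode-switching logic described above can be sketched in a few lines; the function name and the kilowatt figures below are hypothetical illustrations, not part of the study.

```python
def urfc_mode(generation_kw, demand_kw):
    """Pick the operating mode of a unitized regenerative fuel cell (URFC)."""
    if generation_kw > demand_kw:
        return "electrolysis"   # surplus renewable power: store it as hydrogen
    return "fuel_cell"          # shortfall: convert stored hydrogen to power

# Midday solar surplus vs. evening demand peak (illustrative numbers).
print(urfc_mode(generation_kw=120, demand_kw=80))   # electrolysis
print(urfc_mode(generation_kw=20, demand_kw=80))    # fuel_cell
```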

The Korea Institute of Science and Technology (KIST) has announced that a research team led by Dr. Hyun S. Park of the Center for Hydrogen-Fuel Cell Research and Dr. Jong Min Kim of the Materials Architecturing Research Center, in collaboration with a team led by Prof. Yung-Eun Sung of Seoul National University, has developed a novel component that overcomes the problems caused by the mixing of water and gas inside the unitized regenerative fuel cell, a bifunctional device used for hydrogen production and power generation. The component facilitates the efficient transport of water and gas and significantly improves the performance and round-trip efficiency of the devices.

For efficient hydrogen production in the electrolysis cell mode, water must reach the catalyst layer from the electrode within a short span of time, and the generated hydrogen and oxygen should be drained out as fast as possible. Conversely, in the fuel cell mode, the supplied hydrogen and oxygen must enter quickly and the produced water should be drained out as fast as possible. The unitized device can therefore operate with the same efficiency as a dedicated electrolysis device or fuel cell only when water, hydrogen, and oxygen can be cycled in and drained out efficiently over repeated mode changes.

The research team led by Dr. Hyun S. Park identified the reasons behind the decreased efficiency of the unitized device when the electrolysis cell and fuel cell modes alternate. The team attributed the problem to the repeated input and drainage of water and gas, which left residual water and gas trapped inside the device. The researchers hypothesized that a hydrophilic electrode was needed to facilitate the transport of water, while a hydrophobic electrode was required for efficient gas-phase reactions. To address this, the surface of the electrode was first treated to be hydrophilic and then coated with a micropatterned plastic layer, yielding an electrode that exhibits both hydrophilic and hydrophobic properties.

This facilitated the smooth transport of water and gas. The rate of selective gas drainage from the surface of the developed electrode was increased 18-fold, and the performance in fuel cell mode improved 4-fold when the newly developed component was used in the unitized device. The hydrogen production efficiency achieved with this method is twice that achieved with the conventional component. In addition, the improved performance, in both hydrogen production and power generation, was verified to be stable over 160 hours.

Dr. Hyun S. Park of KIST said, "This is the first time that amphiphilic electrodes exhibiting stable and high performance under conditions of both fuel cell power generation mode and electrolysis green hydrogen production mode have been used for the fabrication of unitized regenerative fuel cells. It is expected that the developed device can also be used for the fabrication of various other devices such as electrochemical carbon dioxide reduction and nitrogen reduction devices, where both gas and liquid enter the devices simultaneously."

Credit: 
National Research Council of Science & Technology

Hydrophobic copper catalyst to mitigate electrolyte flooding

The electroreduction of carbon dioxide (CO2) to produce value-added multicarbon compounds is an effective way to cut down CO2 emission. However, the low solubility of CO2 largely limits the application of related technology.

Although gas diffusion electrode (GDE) can accelerate the reaction rate, the instability of the catalysts caused by electrolyte flooding hinders further reaction.

Recently, inspired by setaria's hydrophobic leaves, Prof. GAO Minrui's team from the University of Science and Technology of China developed a Cu catalyst composed of sharp needles that possesses a high level of hydrophobicity and stability.

The study was published in Journal of the American Chemical Society.

Nature has never failed to be a source of inspiration for scientists. This time, scientists turned to setaria's needle-like leaves to enhance the hydrophobicity of the catalyst in carbon dioxide reduction reaction (CO2RR).

They mimicked the sharp structures of setaria leaves by assembling Cu needles into hierarchical architectures. Such architectures, which effectively keep themselves from being wetted just as setaria leaves do, enable the electrode-electrolyte interface to trap more CO2 and help construct a robust gas-liquid-solid three-phase boundary that mitigates flooding.

Compared with Cu particles, dendrites hold advantages ranging from stability to productivity.

The pronounced hydrophobicity owing to the hierarchical structure does not fade appreciably after 10 min of electrochemical operation. Under even harsher conditions, such as a constant current density of 300 mA cm-2 for 10 h, the selectivity of the catalyst remains roughly the same, with only a slight loss.

The hierarchical Cu electrode traps CO2 molecules once they reach the electrode surface, quite like the setaria leaf, making mass transfer and accumulation of CO2 possible in practical use.

Furthermore, the selectivity of the hierarchical Cu catalyst surpasses that of Cu particles. The Cu dendrites can generate the target C2+ products at applied potentials between -0.53 and -0.68 V, with a far more remarkable C2+:C1+ selectivity of 15.4 at -0.68 V compared with Cu particles.

"The bioinspired hierarchical Cu catalyst effectively mitigates electrolyte flooding by its remarkable hydrophobicity and largely enhances the productivity of CO2RR," said Prof. GAO.

Credit: 
University of Science and Technology of China

The vision: Tailored optical stimulation for the blind

Stimulation of the nervous system with neurotechnology has opened up new avenues for treating human disorders: prosthetic arms and legs that restore the sense of touch in amputees, prosthetic fingertips that provide detailed sensory feedback with varying touch resolution, and intraneural stimulation that gives the blind sensations of sight.

Scientists in a European collaboration have shown that optic nerve stimulation is a promising neurotechnology to help the blind, with the constraint that current technology can provide only simple visual signals.

Nevertheless, the scientists' vision (no pun intended) is to design these simple visual signals to be meaningful in assisting the blind with daily living. Optic nerve stimulation also avoids invasive procedures like directly stimulating the brain's visual cortex. But how does one go about optimizing stimulation of the optic nerve to produce consistent and meaningful visual sensations?

Now, the results of a collaboration between EPFL, Scuola Superiore Sant'Anna and Scuola Internazionale Superiore di Studi Avanzati, published today in Patterns, show that a new stimulation protocol of the optic nerve is a promising way for developing personalized visual signals to help the blind - that also take into account signals from the visual cortex. The protocol has been tested for the moment on artificial neural networks known to simulate the entire visual system, called convolutional neural networks (CNN) usually used in computer vision for detecting and classifying objects. The scientists also performed psychophysical tests on ten healthy subjects that imitate what one would see from optic nerve stimulation, showing that successful object identification is compatible with results obtained from the CNN.

"We are not just trying to stimulate the optic nerve to elicit a visual perception," explains Simone Romeni, EPFL scientist and first author of the study. "We are developing a way to optimize stimulation protocols that takes into account how the entire visual system responds to optic nerve stimulation."

"The research shows that you can optimize optic nerve stimulation using machine learning approaches. It shows more generally the full potential of machine learning to optimize stimulation protocols for neuroprosthetic devices," continues Silvestro Micera, EPFL Bertarelli Foundation Chair in Translational Neural Engineering and Professor of Bioelectronics at the Scuola Superiore Sant'Anna.

Restoring sight, but with limited resolution

The idea is to stimulate the optic nerve to induce phosphenes, the sensation of light in a region of one's field of view. The EPFL scientists plan to use intraneural electrodes, ones that pierce through the nerve instead of being wrapped around it, but there are still tremendous constraints on the resulting perceived image.

The constraint comes from the physiology of the optic nerve compared to the dimensions of electrode technology. The intraneural electrode consists of stimulation sites, and these are few in number compared to the million axons bundled up in the optic nerve, the latter being no more than a few millimeters in diameter. In other words, a given stimulation site reaches hundreds to thousands of surrounding nerve fibers or axons coming from the retina, leading to very coarse electrical stimulation.

Tuning this coarse electrical stimulation is a major challenge for all neuroprosthetics in general, but even more so for optical signals which are extremely complex compared to signals providing sensory feedback from upper and lower limbs, for instance.

The scientists' work is the first to feature automatic optimization of optic nerve stimulation protocols. "The most relevant conceptual advancement is linked to the fact that for the first time, we have defined the problem of optimizing nerve stimulation by 'closing the loop' on cortical activation patterns," explains Romeni. "In our model, the idea that we could exploit cortical signals to guide nerve stimulation produced results comparable to and better than the theoretical optimum for current approaches to nerve stimulation optimization."

"Our study shows that it is possible to elicit desired activity patterns in deep layers of a CNN that simulate cortical visual areas. The next step is to understand what patterns should be evoked in order to induce percepts of arbitrary visual objects," continues Davide Zoccolan, Professor of Neurophysiology and Head of the SISSA Visual Neuroscience Lab. "To meet this challenge, we are now working on building predictive models of neuronal responses based on CNNs. These models will learn the 'tuning' of visual cortical neurons based on their responses to a battery of visual images, thus uncovering the mapping between image space and response space that is central for sight restoration."
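The closed-loop idea, optimizing stimulation parameters so that the evoked activity in a model of the visual system matches a target pattern, can be sketched compactly. Everything below (the tiny random stand-in network, its sizes, the learning rate, the finite-difference gradient) is an illustrative assumption, not the study's CNN or protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen surrogate "visual system" mapping stimulation currents to cortical
# activity. (The study uses a CNN trained for vision; a tiny random network
# stands in here.)
W1 = rng.normal(0, 0.5, (32, 8))
W2 = rng.normal(0, 0.5, (16, 32))

def cortex(stim):
    return np.tanh(W2 @ np.tanh(W1 @ stim))

target = cortex(rng.uniform(-1, 1, 8))   # cortical pattern we want to evoke
stim = np.zeros(8)                        # per-electrode currents to optimize

def loss(s):
    return float(np.sum((cortex(s) - target) ** 2))

# "Close the loop": adjust the stimulation so the evoked cortical pattern
# approaches the target, here by finite-difference gradient descent.
loss0, lr, eps = loss(stim), 0.05, 1e-5
for _ in range(300):
    grad = np.array([(loss(stim + eps * e) - loss(stim - eps * e)) / (2 * eps)
                     for e in np.eye(8)])
    stim -= lr * grad

print(f"mismatch before: {loss0:.3f}, after: {loss(stim):.3f}")
```

In the real setting the "cortex" model is a CNN simulating the visual pathway, and the optimized variables are the stimulation parameters of the intraneural electrode rather than an 8-element current vector.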

Clinical trials and the future

For the moment, the EPFL intraneural electrodes have not yet been tested in people.

With clinical trials planned within the next year in a collaboration with Italian partners at Policlinico Gemelli in Rome, the same place where implants for hand amputees were performed, the scientists wonder what the future volunteers will actually see.

"The translation to patients will require dealing with intersubject variability, a well-known problem in neuroprosthetics," says Romeni. "We are far from understanding everything about the nervous system and we know that the current technology has intrinsic limitations. Our method will help to tackle both and to deal with how the brain interprets stimulation, hopefully leading to more natural and effective protocols."

The challenges are tremendous, but the scientists are taking the steps to turn the vision into reality.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Quantum-nonlocality at all speeds

image: The new result proves that it is possible to design a Bell experiment for particles moving in a quantum superposition at very high speeds.

Image: 
© ALOOP; ÖAW

The phenomenon of quantum nonlocality defies our everyday intuition. It shows up as strong correlations between several quantum particles, some of which change their state instantaneously when the others are measured, regardless of the distance between them. While this phenomenon has been confirmed for slow-moving particles, it has been debated whether nonlocality is preserved when particles move very fast, at velocities close to the speed of light, and even more so when those velocities are quantum mechanically indefinite. Now, researchers from the University of Vienna, the Austrian Academy of Sciences and the Perimeter Institute report in the latest issue of Physical Review Letters that nonlocality is a universal property of the world, regardless of how and at what speed quantum particles move.

It is easy to illustrate how correlations can arise in everyday life. Imagine that each day of the month you send two of your friends, Alice and Bob, one toy engine each from a set of two for their collections. You can choose each engine to be either red or blue, and either electric or steam. Your friends are separated by a large distance and do not know about your choice. Once their parcels arrive, they can check the colour of their engine with a device that distinguishes red from blue, or check whether the engine is electric or steam using another device. They compare the measurements made over time to look for particular correlations. In our everyday world, such correlations obey two principles, "realism" and "locality". "Realism" means that Alice's and Bob's measurements only reveal the colour or mechanism you had chosen in the past, and "locality" means that Alice's measurement cannot change the colour or the mechanism of Bob's engine (or vice versa). Bell's theorem, published in 1964 and considered by some to be one of the most profound discoveries in the foundations of physics, showed that correlations in the quantum world are incompatible with these two principles - a phenomenon known as quantum nonlocality.
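Bell's theorem is often stated via the CHSH inequality: any local-realist model obeys S ≤ 2, while quantum measurements on entangled particles reach 2√2 ≈ 2.83. A minimal numerical check, using the textbook correlator for a spin singlet state (this is the standard result, not data from the study):

```python
import numpy as np

# CHSH Bell test: for a singlet state, the correlator of measurements along
# angles a and b is E(a, b) = -cos(a - b). The local-realist bound on S is 2.
def E(a, b):
    return -np.cos(a - b)

# Measurement settings that maximize the quantum violation:
# Alice uses angles (a, a2), Bob uses (b, b2).
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"CHSH value S = {S:.3f} (classical bound 2, quantum max 2*sqrt(2))")
```

Any measured S above 2, as obtained in the Bell tests mentioned below, rules out the combination of realism and locality.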

Quantum nonlocality has been confirmed in numerous experiments, the so-called Bell tests, on atoms, ions and electrons. It not only has deep philosophical implications, but also underpins many of the applications such as quantum computation and quantum satellite communications. However, in all of these experiments, the particles were either at rest or moving at low velocities (scientists call this regime "non-relativistic"). One of the unsolved problems in this field, which still puzzles physicists, is whether nonlocality is preserved when particles are moving extremely fast, close to the speed of light (i.e., in the relativistic regime), or when they are not even moving at a well-defined speed.

For two quantum particles in a Bell test that move at high speeds, researchers predict that the correlations between the particles are, in principle, reduced. However, if Alice and Bob adapt their measurements in a way that depends on the speed of the particles, the correlations between the results of their measurements are still nonlocal. Now imagine that not only are the particles moving very fast, but their velocity is also indefinite: each particle moves in a so-called superposition of different velocities simultaneously, just as the infamous Schrödinger's cat is simultaneously dead and alive. In such a case, is the description of the world still nonlocal?

Researchers, led by Časlav Brukner at the University of Vienna and the Austrian Academy of Sciences, have shown that Alice and Bob can indeed design an experiment that would prove that the world is nonlocal. For this they used one of the most fundamental principles of physics, namely that physical phenomena do not depend on the frame of reference from which we observe them. For example, according to this principle, any observer, whether moving or not, will see that an apple falling from a tree will touch the ground. The researchers went a step further and extended this principle to reference frames "attached" to quantum particles, called "quantum reference frames". The key insight is that if Alice and Bob could move with the quantum reference frames along with their respective particles, they could perform the usual Bell test, since for them the particles would be at rest. In this way, they can prove quantum nonlocality for any quantum particle, regardless of whether its velocity is indefinite or close to that of light.

Flaminia Giacomini, one of the study's authors, says, "Our result proves that it is possible to design a Bell experiment for particles moving in a quantum superposition at very high speeds." The co-author, Lucas Streiter, concludes, "We have shown that nonlocality is a universal property of our world." Their discovery is expected to open applications in quantum technologies, such as quantum satellite communications and quantum computation, using relativistic particles.

Credit: 
University of Vienna