Demonstration of high-speed SOT-MRAM memory cell compatible with 300mm Si CMOS technology

image: A schematic of the STT-MRAM cell (a two-terminal device).

Image: 
CIES, Tohoku University

Researchers at Tohoku University have announced the demonstration of a high-speed spin-orbit-torque (SOT) magnetoresistive random-access memory cell compatible with 300 mm Si CMOS technology.

The demand for low-power, high-performance integrated circuits (ICs) has been increasing as artificial intelligence (AI) and Internet-of-Things (IoT) devices become more widely adopted. In present ICs, purely CMOS-based memories such as embedded Flash memory (eFlash) and static random-access memory (SRAM) account for a large share of power consumption. To lower power consumption while maintaining high performance, magnetoresistive random-access memories (MRAMs) have been intensively developed. Spin-transfer-torque MRAM (STT-MRAM) is the most intensively developed type, and major semiconductor companies have announced that they are ready for mass production of STT-MRAM as an eFlash replacement.

Researchers are also aiming to replace SRAM with MRAM. For SRAM replacement, MRAM must achieve high-speed operation above 500 MHz. To meet this demand, an alternative MRAM, the so-called spin-orbit-torque MRAM (SOT-MRAM), was proposed; it has several advantages for high-speed operation. Because of these advantages, SOT-MRAM has also been developed, but most laboratory studies have focused on the fundamentals of SOT devices. To realize SRAM replacement by SOT-MRAM, high performance must be demonstrated in SOT-MRAM memory cells on 300 mm CMOS substrates. It is also necessary to develop an integration process for SOT-MRAM that provides, for example, thermal tolerance against 400°C annealing, a requirement of the standard CMOS back-end-of-line process.

The research team led by Professors Tetsuo Endoh and Hideo Ohno - the current president of Tohoku University - developed an integration process for SOT devices compatible with 55 nm CMOS technology and fabricated SOT devices on 300 mm CMOS substrates. The newly developed SOT device simultaneously achieved high-speed switching down to 0.35 ns and a thermal stability factor (E/kBT = 70) sufficient for high-speed non-volatile memory applications, with robustness against annealing at 400°C. Building on this achievement, the research team integrated the SOT device with CMOS transistors and demonstrated high-speed operation in complete SOT-MRAM memory cells.
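The thermal stability factor quoted above sets the data retention time. A minimal sketch of the standard Néel-Arrhenius thermal-activation estimate (a textbook model, not a calculation from the press release; the 1 ns attempt time is an assumed typical value):

```python
import math

def retention_time_seconds(delta, attempt_time_s=1e-9):
    """Mean thermal-activation retention time t = t0 * exp(Delta),
    where Delta = E / (kB * T) and t0 is an assumed ~1 ns attempt time."""
    return attempt_time_s * math.exp(delta)

delta = 70  # reported thermal stability factor E/kBT
t = retention_time_seconds(delta)
years = t / (365.25 * 24 * 3600)
print(f"Delta = {delta}: retention time on the order of {years:.1e} years")
```

With Delta = 70 this gives a retention time vastly exceeding the usual ten-year non-volatility requirement, which is why E/kBT = 70 is described as sufficient.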

These achievements address the issues standing between SOT-MRAM and commercial application, and thus offer a path to replacing SRAM with SOT-MRAM, which will contribute to the realization of high-performance, low-power electronics.

Credit: 
Tohoku University

Acoustic focusing to amass microplastics in water

video: (Left) With the acoustic force turned off, the flow stream divided almost equally over the three branches. (Right) With the acoustic force turned on, all the PS beads concentrated at the center of the microchannel and flowed into the central branch.

Image: 
Copyright © 2019 Elsevier B.V.

Microplastics are receiving a lot of attention lately because they are difficult to remove from the environment. Sieving and filtration are currently the predominant ways to capture microplastics in water. However, these methods are impractical because filters clog easily and must regularly be cleaned or replaced. Another issue is that it has been impossible to collect anything smaller than 0.3 mm, the pore diameter of a plankton-net mesh. This is unfortunate because the majority of the microplastics causing havoc are smaller than that, with unknown effects on ecosystems and organisms.

A promising new method to collect such microplastics has been devised, using acoustics to gather them in water. A bulk acoustic wave (BAW) device was designed and fabricated that channels microplastics, gathering them in the middle channel while water flows out of the two side channels. The idea for this study arose when Professor Hiroshi Moriwaki, a specialist in environmental analysis at the Faculty of Textile Science and Technology, asked Associate Professor Yoshitake Akiyama, first author of the study and known for unique approaches to long-standing problems, whether microplastics in water could be tackled from an engineering standpoint.

The researchers focused on the fact that one of the biggest sources of microplastics in our oceans is washing machines. A typical washing machine discharges about ten thousand fibers per 100-liter wash cycle. Many of our clothes are made of synthetic fibers, and tiny pieces of microplastic fiber break off in the wash. Wastewater treatment plants are currently unable to capture these microplastics.

The researchers decided to create a device that collects microplastics and microplastic fibers by piezo vibration. Using acoustics at a force and amplitude appropriate for the length, diameter and compressibility of the microplastic, debris collects in the middle of a three-channel device: the two side channels expel clean water while the microplastic fibers gather in the middle, acoustically focused by the standing wave that the piezo element creates. Different types of microplastic differ in density, bulk modulus and compressibility, which gives each a different acoustic contrast factor (ACF). By choosing the width of the microchannel to be half of the acoustic wavelength in water, the particles are driven to gather in the middle of the channel. It took about 0.7 seconds for the particles to be focused in this way.
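The half-wavelength design rule above fixes the drive frequency once the channel width is chosen. A minimal sketch of that relationship, f = c / (2w); the 500-micron channel width is a hypothetical illustrative value, not a dimension taken from the paper:

```python
SPEED_OF_SOUND_WATER = 1480.0  # m/s, approximate value at room temperature

def resonance_frequency_hz(channel_width_m):
    """Drive frequency for a half-wavelength standing wave:
    width = lambda / 2  =>  f = c / (2 * width)."""
    return SPEED_OF_SOUND_WATER / (2.0 * channel_width_m)

width = 500e-6  # hypothetical 500-micron microchannel
f = resonance_frequency_hz(width)
print(f"{width * 1e6:.0f} um channel -> drive the piezo at ~{f / 1e6:.2f} MHz")
```

Narrower channels push the resonance into the several-MHz range, which is typical for BAW acoustofluidic devices.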

The researchers ran into trouble when preparing microplastics for the experiment, as it was difficult to produce microplastics of a suitable size. At first they tried to chop the fibers to a uniform length with a blender, but the plastic fibers would not cut. By asking colleagues in the textile department, the researchers learned of the manufacturer Kanehara Pile, who kindly provided the materials necessary for the research.

For the experiment, a formula was devised to calculate the best acoustic focusing for Nylon 6 and PET microplastic fibers and polystyrene (PS) microparticles. Collection rates were very high: 95% for PET and 99% for Nylon 6, not counting the few particles that stuck to the channel walls. The hydrodynamic force aligns the fibers, so the BAW device avoids clogging. The particles were tracked using motion-analysis software. As a future improvement, the microchannel surfaces could be fabricated to minimize roughness and discourage sticking.

Refinements needed for real-world applications and scalability include using multiple channels in series and in parallel, with different diameters and acoustic forces, to capture all types of microplastic. With a cascade of seven trifurcated stages, giving a concentration factor of 3 to the power of 7, 100 liters of laundry water could be concentrated into about 50 mL, which would be easy to discard or incinerate. The study used microplastic-fiber concentrations at the maximum expected in real-world applications. A current limit to implementation is that the draining process would take a long time. In this study, PS beads 15 μm in diameter were captured; in theory, the minimum PS bead size this BAW device can capture is 4.3 μm, and smaller beads could be captured with modifications to the device. Most microplastics in wastewater have diameters of about 10 μm and lengths of 2 to 200 μm, which the BAW device can successfully capture. Further developments in acoustofluidics are needed to capture nanoplastics smaller than 100 nm in diameter.
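The arithmetic behind the cascade figure can be checked directly: each trifurcated stage keeps only the focused center stream, roughly one third of the flow, so n stages concentrate the sample by a factor of 3^n. A sketch assuming ideal splits at every stage:

```python
def concentrated_volume_liters(input_liters, stages):
    """Volume remaining after passing through `stages` ideal trifurcated
    stages in series, each keeping 1/3 of the incoming flow."""
    return input_liters / (3 ** stages)

factor = 3 ** 7          # seven stages -> 2187-fold concentration
v = concentrated_volume_liters(100, 7)
print(f"3**7 = {factor}; 100 L of laundry water -> {v * 1000:.0f} mL")
```

Seven ideal stages turn 100 liters into roughly 46 mL, consistent with the "about 50 mL" figure quoted above.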

Credit: 
Shinshu University

Nanowire detects Abrikosov vortices

image: Olga Skryabina, a researcher at the Laboratory of Topological Quantum Phenomena in Superconducting Systems, MIPT, is monitoring contact-to-chip microwelding

Image: 
Evgeniy Pelevin, MIPT Press Office

Researchers from the Moscow Institute of Physics and Technology, Lomonosov Moscow State University, and the Institute of Solid State Physics of the Russian Academy of Sciences have demonstrated the possibility of detecting Abrikosov vortices penetrating through a superconductor-ferromagnet interface. The device considered in their study, published in Scientific Reports, is a ferromagnetic nanowire with superconductive electrodes connected to it.

Superconductors are materials that lose their electrical resistance below a certain critical temperature Tc. Another astonishing property of superconductors is magnetic field expulsion, which makes levitation possible. This effect results from a current flowing over the superconductor surface that shields the magnetic field. There are also type II superconductors, which below the critical temperature can be penetrated by magnetic flux in the form of quantized vortices. This phenomenon is named after Alexey Abrikosov, who originally predicted it. An Abrikosov vortex is a vortex of superconducting current with a non-superconducting core that carries one magnetic flux quantum.
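The flux quantum carried by each vortex has a fixed value, Phi_0 = h / (2e), set by Planck's constant and the elementary charge (both exact since the 2019 SI redefinition). A minimal check:

```python
# Magnetic flux quantum carried by each Abrikosov vortex: Phi_0 = h / (2e)
PLANCK_H = 6.62607015e-34            # Planck constant, J*s (exact)
ELEMENTARY_CHARGE = 1.602176634e-19  # elementary charge, C (exact)

flux_quantum = PLANCK_H / (2 * ELEMENTARY_CHARGE)
print(f"Phi_0 = {flux_quantum:.4e} Wb")  # ~2.0678e-15 Wb
```

This tiny, quantized flux is what makes single-vortex events detectable as discrete jumps in the resistance measurements described below.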

Olga Skryabina, the first author of the paper and a researcher at the MIPT laboratory, says: "The research objective was to study the coexistence of antagonistic phenomena in 1D superconductor-ferromagnet systems. Such systems have recently attracted great interest due to their strong magnetic anisotropy and various dimensional and spin effects. These phenomena make them a promising choice for functional hybrid nanodevices, e.g., superconducting current converters, spin valves, and magnetoresistive RAM. We connected a ferromagnetic nickel nanowire to superconducting niobium electrodes."

The researchers investigated a system of two superconducting niobium electrodes connected by a nickel nanowire (Figure 1). They found that as the magnetic field varies, the nanowire resistance strongly depends on the effects occurring at the superconductor-ferromagnet boundary.

First, the physicists considered the system in its normal state, when the temperature is above the critical value and the magnetic field penetrates all parts of the structure equally (Figure 2a). The sample resistance did not change significantly as the magnetic field strength increased. The researchers then lowered the temperature below the critical value: the niobium electrodes transitioned into the superconducting state, and their resistance dropped to zero. At the same time, the experimenters observed a drastic rise in the system's resistance, which could only be explained by the contribution of the superconductor-ferromagnet boundaries. Concurrently, the niobium began conducting shielding currents and expelling the magnetic field (Figure 2b). These phenomena result in unusual sawtooth magnetoresistance curves that shift between field sweeps (Figure 3).

Olga Skryabina continues: "We placed the sample in a magnetic field parallel to the nanowire's axis. We found that by measuring the sample resistance under such conditions, we can detect the moment when a magnetic flux quantum enters or exits a superconducting electrode."

Vortex penetration into and exit from the niobium (Figure 2c) cause the sawtooth electrical resistance. The nickel nanowire acts like a lightning rod that "attracts" the magnetic field: contact with it weakens the superconductivity of the niobium electrode and thus localizes the point where Abrikosov vortices penetrate. The research demonstrates an immense difference between these superconducting circuits and conventional electric circuits. More research on hybrid superconductor devices is needed to develop more advanced superconducting digital and quantum computers, as well as supersensitive sensors.

Credit: 
Moscow Institute of Physics and Technology

Pregnant smokers at higher risk for gestational diabetes, Hebrew University study finds

image: Hebrew University's Dr. Yael Bar-Zeev.

Image: 
Hadas Parush/Flash 90

Smoking during pregnancy is one of the most significant risk factors for poor pregnancy outcomes. In the United States, 10.7% of women smoke during their pregnancy or are exposed to second-hand smoke. In doing so, they place their babies at higher risk for premature birth, low birth weight and developmental delays than their non-smoking counterparts do.

In addition to these risks, an international research team headed by Dr. Yael Bar-Zeev at the Hebrew University of Jerusalem's Braun School of Public Health and Community Medicine, in collaboration with Dr. Haile Zelalem and Professor Ilana Chertok at Ohio University, found that smoking during pregnancy may also increase a woman's risk of developing gestational diabetes mellitus (GDM). Gestational diabetes leads to higher risks for pregnancy and birth complications such as macrosomia (larger than average babies) and caesarean deliveries.

The findings were published this week in Obstetrics & Gynecology.

Bar-Zeev and her team conducted a secondary analysis of data from the Pregnancy Risk Assessment Monitoring System (PRAMS), collected by the United States Centers for Disease Control and Prevention (CDC). For this study, they looked at 222,408 women who gave birth during 2009-2015, of whom 12,897 (5.3%) were diagnosed with gestational diabetes.

The researchers found that pregnant women who smoke the same or higher number of cigarettes per day as they did before their pregnancy are nearly 50% more likely to develop gestational diabetes. Those pregnant women who cut down on their number of cigarettes still have a 22% higher risk than women who never smoked or who quit smoking two years before they became pregnant.

"Ideally, women should quit smoking before they try to become pregnant," Bar-Zeev cautioned. "Further, due to the high risks involved, it's imperative that pregnant smokers have access to pregnancy-specific smoking cessation programs. Currently, in the United States and Israel, these services are not accessible enough or not tailored for pregnant women and that needs to change".

Credit: 
The Hebrew University of Jerusalem

Green hydrogen: Research to enhance efficiency

image: This is the main author of the study, Aleksandr Bashkatov from the Institute of Fluid Dynamics.

Image: 
HZDR / Stephan Floss

Laboratory experiments and a parabolic flight campaign have enabled an international team of researchers from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) to gain new insights into water electrolysis, in which hydrogen is obtained from water by applying electric energy. Water electrolysis could play a key role in the energy transition if efficiency improvements can be achieved. The findings, published recently in the journal Physical Review Letters, offer a possible starting point for improving the environmental footprint of hydrogen-based technologies.

Workable solutions for the intermediate storage of energy are needed to ensure that excess electricity generated by solar and wind energy systems during peak production is not wasted. The production of hydrogen - which can then be converted into other chemical energy carriers - is an attractive option. It is essential that this process occurs in the most efficient - and therefore cost-effective - way.

The team of HZDR researchers, led by Prof. Kerstin Eckert, specifically focused on water electrolysis. This method uses electric energy to split water molecules into their component parts - hydrogen and oxygen. To do this, an electrical current is applied to two electrodes immersed in an acidic or alkaline aqueous solution. Gaseous hydrogen forms at one electrode, and oxygen at the other. However, energy conversion involves losses. In practice, the method currently delivers energy efficiency of around 65 to 85 percent, depending on the electrolytic process used. The aim of electrolysis research is to increase efficiency to around 90 percent by developing better techniques.
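The 65 to 85 percent figure can be related directly to the cell voltage. A common convention (a general electrochemistry rule of thumb, not a calculation from the HZDR study) expresses higher-heating-value efficiency as the thermoneutral voltage of water splitting, about 1.48 V, divided by the actual operating voltage; the operating voltages below are illustrative assumptions:

```python
V_THERMONEUTRAL = 1.48  # volts, thermoneutral voltage of water splitting

def hhv_efficiency(cell_voltage):
    """Electrolyzer energy efficiency on a higher-heating-value basis:
    eta = V_thermoneutral / V_cell (losses raise V_cell above 1.48 V)."""
    return V_THERMONEUTRAL / cell_voltage

for v in (2.28, 1.74):  # hypothetical operating cell voltages
    print(f"{v:.2f} V per cell -> ~{hhv_efficiency(v) * 100:.0f}% efficient")
```

Operating voltages of roughly 2.3 V and 1.75 V bracket the 65 to 85 percent range quoted above; reducing bubble-related losses lowers the cell voltage and pushes efficiency toward the 90 percent target.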

Oscillating hydrogen bubbles provide new understanding

A better understanding of the underlying chemical and physical processes is essential for optimizing electrolysis. Gas bubbles growing on the electrode experience buoyancy, causing them to rise, but precisely predicting when a bubble detaches from the electrode has baffled researchers for years. It is also known that energy is lost while bubbles remain on the electrode. In a combination of laboratory experiments and theoretical calculations, the scientists have now gained a better understanding of the forces that act on the bubble. "Our findings solve an old paradox of research on hydrogen bubbles," Eckert said.

In previous experiments, the researchers had already noticed that hydrogen bubbles begin to oscillate rapidly. They investigated this phenomenon in greater detail: using a high-speed camera, they captured the bubbles' shadows and analyzed how individual bubbles detach from an electrode a hundred times a second, only to reattach immediately afterwards. They realized that a hitherto neglected electric force was competing with buoyancy, driving the oscillation.

The experiment also showed that a kind of microbubble carpet is permanently formed between the gas bubble and the electrode. Above a certain carpet thickness, the electric force is no longer able to pull the bubble back, enabling it to rise. This knowledge can now be used to improve the efficiency of the entire process.

Parabolic flights confirm findings

To substantiate their results, the researchers repeated the experiment during a parabolic flight sponsored by the German Aerospace Center (DLR). This enabled them to examine how changes in buoyancy influence the dynamics of gas bubbles. "The altered gravity during a parabola enabled us to vary key physical parameters, which we were unable to influence in the lab," explained Aleksandr Bashkatov, lead author of the recently published study. The PhD student at the HZDR conducted the experiments on board the parabolic flight together with other colleagues. During the periods of approximate weightlessness experienced in freefall during a parabola, buoyancy is practically zero, but it is greatly enhanced at the end of the parabola. The results of the flights also showed that it would be difficult to transfer hydrogen technologies to potential use in space: without buoyancy, removing the gas bubbles from the electrode would be an even greater challenge than on Earth.

Application of water electrolyzers: regenerative energies for the region

Despite the fact that the research team's experiments had to take place under simplified laboratory conditions, the new findings will contribute to increasing the efficiency of electrolyzers in the future. The researchers, headed by Kerstin Eckert, are currently planning to team up with partners from Fraunhofer IFAM Dresden, TU Dresden, Zittau-Görlitz University of Applied Sciences and local industrial partners for a project exploring green hydrogen production in the German region of Lusatia. The aim of the project is to improve alkaline water electrolysis to such an extent that it can replace fossil fuels. "Alkaline electrolyzers are much cheaper and ecologically sound, and do not use scarce resources because they have no need for precious metal-coated electrodes. The long-term objective of the consortium is to develop a new generation of powerful alkaline devices," summarized Eckert.

Credit: 
Helmholtz-Zentrum Dresden-Rossendorf

Ultrafast stimulated emission microscopy of single nanocrystals in Science

image: Upon stimulation, two photons emerge from the quantum dot, giving detailed information about the dynamics of the excited charges within the quantum dot (QD).

Image: 
ICFO

The ability to investigate the dynamics of single particles at the nanoscale and on the femtosecond timescale remained an unattainable dream for years. It was not until the dawn of the 21st century that nanotechnology and femtoscience gradually merged and the first ultrafast microscopy of individual quantum dots (QDs) and molecules was accomplished. Ultrafast microscopy studies have relied entirely on detecting nanoparticles or single molecules with luminescence techniques, which require efficient emitters to work. However, such techniques degrade the sample and yield little information about the dynamics of the system in the excited state. Only in recent years have efforts to find an alternative, compatible technique for studying fast processes in nano-objects come into the spotlight.

Now, ICFO researchers Lukasz Piatkowski, Nicolò Accanto, Gaëtan Calbris and Sotirios Christodoulou, led by ICREA Prof. at ICFO Niek F. van Hulst, in collaboration with Iwan Moreels (Ghent University, Belgium), have published a study in Science entitled "Ultrafast stimulated emission microscopy of single nanocrystals," in which they report a technique for studying ultrafast events in individual non-fluorescent nano-objects.

In their study, rather than waiting for individual QDs to spontaneously emit light through photoluminescence, the team used a sophisticated combination of laser pulses to promote individual QDs into an excited state and then force them back down to the ground state, in order first to image individual QDs and second to discern the evolution of the excited charges over the entire photocycle.

Dr. Lukasz Piatkowski explains why they used a pair of laser pulses to image the dynamics of the QDs: "It is like throwing a ball into a tree; the higher you throw it, the more excited the state. The first laser pulse of the system (photon) throws the first ball (charge in the QD) into the tree. If you are using a photoluminescence technique, it is as if you are standing below the tree and cannot see what is happening inside the treetop or crown. Thus, you will not know whether the ball starts to bounce down the branches, or where, when and how it starts to fall, whether it bumps into something on its way, whether it gets caught on an intermediate branch, and so on. So, in order to see what is happening with the first ball, you need another technique that allows you to look into the treetop. The technique we used allowed us to throw a second ball into the treetop (a second laser pulse interacting with the QD) to bring the first ball down. By throwing the second ball higher or lower, harder or softer, sooner or later after the first ball, we obtain information about the first ball and the structure of the tree (how long it took the balls to fall out, where, how, etc.)."

In their experiment, the first laser pulse brings an individual QD to the excited state. Then, every few hundred femtoseconds, they shoot a second laser pulse at the QD to bring the charges down to the ground state, inducing recombination and the emission of an extra photon. Hence, for every probe photon they shot into the system, they got two twin photons back. These extra photons allowed the authors not only to image the QDs but also to precisely track the evolution of the excited charges in the QD, unveiling how many charges underwent spontaneous recombination, stimulated recombination and excited-state absorption.

Being able to track excited charges at the nanoscale is of fundamental importance in nanotechnology, photonics and photovoltaics. The results of the study prove that ultrafast stimulated emission microscopy can be used to study ultrafast processes in individual chromophoric particles that are otherwise undetectable through fluorescence or photoluminescence techniques. In other words, the study has made it possible to image and study the dynamics of nanoparticles and structures without the need for external fluorescent labels.

As ICREA Prof. at ICFO Niek van Hulst remarks, "Significant advances are expected in the future within the field of ultrafast nano-regime imaging techniques. The first detection of quantum dots using this approach has been outstanding. We now aim to extend this to molecules and biomolecular complexes, specifically photosynthetic complexes. We are currently working on 3- and 4-pulse schemes to merge the stimulated emission and luminescence detection of single systems with 2D spectroscopy."

Credit: 
ICFO-The Institute of Photonic Sciences

How planets may form after dust sticks together

image: These are glass particles colliding in microgravity.

Image: 
Gerhard Wurm, Tobias Steinpilz, Jens Teiser and Felix Jungmann

Scientists may have figured out how dust particles can stick together to form planets, according to a Rutgers co-authored study that may also help to improve industrial processes.

In homes, adhesion on contact can cause fine particles to form dust bunnies. Similarly in outer space, adhesion causes dust particles to stick together. Large particles, however, can combine due to gravity - an essential process in forming asteroids and planets. But between these two extremes, how aggregates grow has largely been a mystery until now.

The study, published in the journal Nature Physics, found that particles under microgravity - similar to conditions believed to be in interplanetary space - develop strong electrical charges spontaneously and stick together, forming large aggregates. Remarkably, although like charges repel, like-charged aggregates form nevertheless, apparently because the charges are so strong that they polarize one another and therefore act like magnets.

Related processes seem to be at work on Earth, where fluidized bed reactors produce everything from plastics to pharmaceuticals. During this process, blowing gas pushes fine particles upwards and when particles aggregate due to static electricity, they can stick to reactor vessel walls, leading to shutdowns and poor product quality.

"We may have overcome a fundamental obstacle in understanding how planets form," said co-author Troy Shinbrot, a professor in the Department of Biomedical Engineering in the School of Engineering at Rutgers University-New Brunswick. "Mechanisms for generating aggregates in industrial processes have also been identified and that - we hope - may be controlled in future work. Both outcomes hinge on a new understanding that electrical polarization is central to aggregation."

The study, led by researchers at the University of Duisburg-Essen in Germany, opens avenues to potentially controlling fine particle aggregation in industrial processing. It appears that introducing additives that conduct electricity may be more successful for industrial processes than traditional electrostatic control approaches, according to Shinbrot.

The researchers want to investigate effects of material properties on sticking and aggregation, and potentially develop new approaches to generating and storing electricity.

Credit: 
Rutgers University

Volcano F is the origin of the floating stones

Stones do not float in water. This is a truism, but there is hardly a rule without an exception. In fact, some volcanic eruptions produce a type of rock so porous, and of such low density, that it does float: pumice. An unusually large amount of it is currently drifting across the Southwest Pacific towards Australia. When it was first sighted in the waters of the island state of Tonga at the beginning of August, it formed an almost coherent layer on the ocean's surface. The "pumice raft" made headlines all over the world.

Various underwater volcanoes were discussed at the time as the potential source, but direct proof of the pumice's exact origin had been missing until now. Researchers at the GEOMAR Helmholtz Centre for Ocean Research Kiel (Germany), together with colleagues from Canada and Australia, are now publishing evidence in the Journal of Volcanology and Geothermal Research that clearly identifies the culprit: a so-far-nameless underwater volcano just 50 kilometres northwest of the Tongan island of Vava'u. "In the international scientific literature, it appears so far only under the number 243091 or as Volcano F," says Dr. Philipp Brandl of GEOMAR, first author of the study.

In January of this year, Dr. Brandl and several of his co-authors were working in the region aboard the German research vessel SONNE. The expedition, named ARCHIMEDES, aimed to study the formation of new crust in the geologically extremely dynamic region between Fiji and Tonga. "When I saw the reports on the pumice raft in the media in the summer, I became curious and started researching with my colleagues," says the geologist.

The team found what they were looking for in freely accessible satellite images. On an image taken by the ESA satellite Copernicus Sentinel-2 on 6 August 2019, clear traces of an active underwater eruption can be seen on the water surface. Since the images are precisely georeferenced, they could be compared with corresponding bathymetric maps of the seafloor. "The eruption traces fit Volcano F exactly," says Dr. Brandl.

To be on the safe side, the researchers also compared this position with information from stations of the global seismic network that recorded signals from the eruption. "Unfortunately, the density of such stations in the region is very low. There were only two stations that recorded seismic signals of a volcanic eruption. However, their data is consistent with Volcano F as the origin," says Dr. Brandl.

Pumice can form during volcanic eruptions when viscous lava is foamed by volcanic gases such as water vapour and carbon dioxide. This creates so many pores in the cooling rock that its density is lower than that of water. "During an underwater eruption, the probability to generate pumice is particularly high," explains Dr. Brandl.

With the help of additional satellite images, the team traced the drift and dispersal of the pumice raft until mid-August. It slowly drifted west and reached an area of up to 167 square kilometres, about twice the size of Manhattan. The team was also able to constrain the magnitude of the underwater eruption: it corresponded to a volcanic explosivity index of 2 or 3, similar to recent eruptions of Stromboli, for example.

With the current direction and speed, the pumice raft is expected to hit the Great Barrier Reef off the eastern coast of Australia at the end of January or beginning of February. Biologists, in particular, are eagerly awaiting this event because pumice rafts may play an important role in the dispersion of fauna in the vastness of the Pacific Ocean. The Kiel team of geologists would like to examine samples of the pumice in order to determine the geochemistry of Volcano F more precisely. "Maybe our Australian colleagues will send us a few samples next year," says Dr. Brandl.

Credit: 
Helmholtz Centre for Ocean Research Kiel (GEOMAR)

Asian water towers are world's most important and most threatened

Scientists from around the world have assessed the planet’s 78 mountain glacier-based water systems. For the first time, they ranked them in order of their importance to adjacent lowland communities while assessing their vulnerability to future environmental and socioeconomic changes. These systems, known as mountain water towers, store and transport water via glaciers, snow packs, lakes and streams, thereby supplying invaluable water resources to 1.9 billion people globally - roughly a quarter of the world’s population.

The research, published in Nature on Dec. 9, provides evidence that global water towers are at risk, in many cases critically, due to the threats of climate change, growing populations, mismanagement of water resources, and other geopolitical factors. Further, the authors conclude that it is essential to develop international, mountain-specific conservation and climate change adaptation policies and strategies to safeguard both ecosystems and people downstream.

Of the 78 global water towers identified, the Asian water towers, which feed river systems including the Indus, Tarim, Amu Darya, Syr Darya and Ganges-Brahmaputra, rank as the most important and most threatened.

The most relied-upon mountain system is the Indus water tower, according to their research. The Indus water tower - made up of vast areas of the Himalayan mountain range and covering portions of Afghanistan, China, India and Pakistan - is also one of the most vulnerable.

To determine the importance of these 78 water towers, researchers analyzed the various factors that determine how reliant downstream communities are upon the supplies of water from these systems. They also assessed each water tower to determine the vulnerability of the water resources, as well as the people and ecosystems that depend on them, based on predictions of future climate and socioeconomic changes.
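The ranking described above weighs a supply-side score against downstream demand. As a purely illustrative sketch (not the study's actual methodology, which draws on gridded precipitation, snow, glacier and surface-water data), a toy "water tower index" can be formed by min-max normalizing a supply indicator and a demand indicator per basin and taking their product; all numbers below are hypothetical:

```python
# Toy water-tower index: hypothetical numbers, for illustration only.
# The published index combines normalized supply indicators (precipitation,
# snow cover, glacier ice, surface water) with downstream demand indicators.
# Here each side is collapsed to a single 0-1 score and basins are ranked
# by the product of the two.

def normalize(values):
    """Min-max normalize a dict of basin -> raw score to the 0-1 range."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

# Hypothetical raw scores (NOT the study's data).
supply = {"Indus": 9.0, "Tarim": 6.5, "Rhone": 5.0, "Fraser": 7.0}
demand = {"Indus": 9.5, "Tarim": 7.0, "Rhone": 3.0, "Fraser": 2.0}

s, d = normalize(supply), normalize(demand)
wti = {basin: s[basin] * d[basin] for basin in supply}

for basin, score in sorted(wti.items(), key=lambda kv: -kv[1]):
    print(f"{basin:8s} {score:.2f}")
```

With these made-up inputs, a basin that scores high on both supply and demand (here "Indus") dominates the ranking, which mirrors the paper's finding that the Indus water tower is both the most relied-upon and among the most vulnerable.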

The study, which was authored by 32 scientists from around the world, was led by Prof. Walter Immerzeel and Dr. Arthur Lutz of Utrecht University, longtime researchers of water and climate change in high mountain Asia.

"What is unique about our study is that we have assessed the water towers' importance, not only by looking at how much water they store and provide, but also how much mountain water is needed downstream and how vulnerable these systems and communities are to a number of likely changes in the next few decades," said Prof. Immerzeel.

Dr. Lutz added, "By assessing all glacial water towers on Earth, we identified the key basins that should be on top of regional and global political agendas."

Prof. YAO Tandong, renowned glaciologist from the Institute of Tibetan Plateau Research of the Chinese Academy of Sciences and a co-author of the study, said that a global temperature rise of 2°C, the limit discussed at the Paris climate conference, could translate into warming as high as 4°C in the Asian water towers.

"By 2060 to 2070, rising temperatures due to climate change could lead to ever-stronger glacial retreat in the region," said YAO. "In other words, the melting glaciers in Asian water towers could reduce the water supply for people living downstream in coming decades."

Prof. YAO is one of the first scientists to study glacier changes on the Tibetan Plateau and has spent years studying changes in the Asian water towers. He is also the chief scientist of the Pan-TPE project that supported this research. Launched in 2018 by the Chinese Academy of Sciences in response to calls from the Third Pole Environment (TPE) community, Pan-TPE is an international science program to investigate water, ecosystems and human impact in the region, with a focus on changes to the Asian water towers.

Credit: 
Chinese Academy of Sciences Headquarters

Separating drugs with MagLev

The composition of suspicious powders that may contain illicit drugs can be analyzed using a quick and simple method called magneto-Archimedes levitation (MagLev), according to a new study published in the journal Angewandte Chemie. A team of scientists at Harvard University, USA, has developed the MagLev method to differentiate common street drugs in dilute mixtures. The method could complement or even replace other portable drug identification techniques, the scientists suggest.

Synthetic opioids--mainly fentanyl and its analogues--are a group of substances that were involved in 30,000 overdose-related fatalities in 2017 in the US alone. Law enforcement officers have to assess the contents of small samples of powders quickly and precisely, and cases of fentanyl detection are increasing in frequency. Officers currently rely on sniffer dogs or colorimetric assays, which allow only a rough, qualitative analysis. Now, the group of George M. Whitesides at Harvard University, in collaboration with colleagues from the Drug Enforcement Administration (DEA), Dulles, USA, has developed a more complete analysis method. Using MagLev, the scientists could easily separate and even isolate different drugs (as powders) from sample mixtures.

The MagLev device consists of two strong permanent magnets that flank a cuvette filled with a solution of a paramagnetic gadolinium chelate complex. When the scientists add a mixture of powdered drugs to the cuvette, the different substances in the mixture levitate; the powders wander and equilibrate at different heights in the cuvette that correspond to their characteristic density. Thus, the once homogeneous powder is divided into several levitating clouds, each one containing a relatively pure substance. The authors say they could separate up to seven test substances simultaneously. Among the substances that have been separated were prominent illicit drugs and adulterants found in mixtures, such as fentanyl, cocaine, heroin, lidocaine, caffeine, and methamphetamine.

For substance identification, the operator of the MagLev device compares the observed density of an unknown fraction with known densities of illicit drugs. However, the scientists also suggest that MagLev can be used as a preparative technique that separates and concentrates dilute substances, such as fentanyl, which might be present in a fraction of less than five percent by weight. In this case, the operator pipettes the fractionated components out of the cuvette, washes them with solvent, and dries them. Fractionation, and thus, concentration makes it easier to identify dilute drugs in mixtures using more selective but less sensitive techniques, such as FTIR or Raman spectroscopy, according to the authors.
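The link between levitation height and density that the operator exploits is, to a good approximation, linear. The sketch below uses the standard MagLev height formula reported in the Whitesides group's earlier density-measurement work; the medium and magnet parameters are illustrative placeholders, not the values used in this study:

```python
import math

# Equilibrium levitation height in a standard two-magnet MagLev cuvette:
# an object of density rho_s and susceptibility chi_s, suspended in a
# paramagnetic medium (density rho_m, susceptibility chi_m) between two
# magnets a distance d apart with field B0 at each face, settles at
#   h = (rho_s - rho_m) * g * mu0 * d^2 / ((chi_s - chi_m) * 4 * B0^2) + d/2
# measured from the bottom magnet. All parameter values are illustrative.

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
G = 9.81                   # gravitational acceleration, m/s^2

def levitation_height(rho_s, rho_m=1100.0, chi_s=-9e-6, chi_m=5e-4,
                      d=0.045, b0=0.4):
    """Equilibrium height (m) of a diamagnetic powder grain in the cuvette."""
    return ((rho_s - rho_m) * G * MU0 * d**2
            / ((chi_s - chi_m) * 4 * b0**2)) + d / 2

# Two powders of slightly different density settle at resolvably
# different heights -- the denser one sits lower in the cuvette.
for rho in (1200.0, 1350.0):   # kg/m^3, illustrative drug-like densities
    print(f"rho = {rho:.0f} kg/m^3 -> h = {levitation_height(rho)*1e3:.1f} mm")
```

Because height depends linearly on density, a calibration with beads of known density converts a measured height directly into a density estimate for an unknown fraction.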

Limitations of the technique are that the samples need to be separated as solid powders and should not dissolve in the paramagnetic solution, which contains gadolinium chelate complex and the nonpolar solvents hexane and tetrachloroethylene.

The authors of the study eventually hope to make the MagLev device commercially available to law-enforcement officers.

Credit: 
Wiley

Detours may make batteries better

image: An illustration shows a battery's cathode undergoing phase transition from iron phosphate (FP) to lithium iron phosphate (LFP) during charging. Simulations by Rice University scientists showed that adding defects -- distortions in their crystal lattices -- could help batteries charge faster.

Image: 
Kaiqi Yang/Rice University

HOUSTON - (Dec. 9, 2019) - Here's a case where detours speed up traffic. The result may be better batteries for transportation, electronics and solar energy storage.

Scientists at Rice University's Brown School of Engineering have discovered that placing specific defects in the crystalline lattice of lithium iron phosphate-based cathodes broadens the avenues through which lithium ions travel. Their theoretical calculations suggest the approach could improve performance by up to two orders of magnitude and point the way to similar improvements in other types of batteries.

These defects, known as antisites, are formed when atoms are placed at the wrong positions on the lattice -- that is, when iron atoms sit on the sites that should be occupied by lithium. Antisite defects impede lithium movement inside the crystal lattice and are usually considered detrimental to battery performance.

In the case of lithium iron phosphate, however, the Rice researchers discovered they create many detours within the cathode and enable lithium ions to reach the reaction front over a wider surface, which helps improve the charge or discharge rate of the batteries.

The research appears in npj Computational Materials, a Nature Research journal.

Lithium iron phosphate is a widely used cathode material for lithium-ion batteries and also serves as a good model system for studying the physics underlying the battery cycling process, said Rice materials scientist Ming Tang, who carried out the research with alumnus Liang Hong, now a researcher at MathWorks, and graduate student Kaiqi Yang.

Upon lithium insertion, the cathode changes from a lithium-poor phase to a lithium-rich one, said Tang, an assistant professor of materials science and nanoengineering. When the surface reaction kinetics are sluggish, lithium can only be inserted into lithium iron phosphate within a narrow surface region around the phase boundary -- the "road" -- a phenomenon that limits the speed at which the battery can recharge.

"If there are no defects, lithium can only enter this small region right around the phase boundary," he said. "However, antisite defects can make lithium insertion take place more uniformly across the surface, and so the boundary would move faster and the battery would charge faster.

"If you force the defect-free cathode to be charged fast by applying a large voltage, there will be a very high local lithium flux at the surface and this can cause damage to the cathode," he said. "This problem can be solved by using defects to spread the flux over the entire cathode surface."

Annealing the material -- heating without burning it -- could be used to control the concentration of defects. Tang said defects would also allow larger cathode particles than nanoscale crystals to be used to help improve energy density and reduce surface degradation.

"An interesting prediction of the model is that this optimal defect configuration depends on the shape of the particles," he said, "We saw that facets of a certain orientation could make the detours more effective in transporting lithium ions. Therefore, you will want to have more of these facets exposed on the cathode surface."

Tang said the model could be applied as a general strategy to improve phase-changing battery compounds.

"For structural materials like steel and ceramics, people play with defects all the time to make materials stronger," he said. "But we haven't talked much about using defects to make better battery materials. Usually, people see defects as annoyances to be eliminated.

"But we think we can turn defects into friends, not enemies, for better energy storage."

Credit: 
Rice University

City research draws on Formula 1 technology for the construction of skyscrapers

image: Researchers at City, University of London are developing new vibration-control devices based on Formula 1 technology so that 'needle-like' high-rise skyscrapers that can still withstand high winds can be built.
Current devices called tuned mass dampers (TMDs) are fitted in the top floors of tall buildings to act like heavyweight pendulums counteracting building movement caused by winds and earthquakes.

Image: 
City, University of London

Researchers at City, University of London are developing new vibration-control devices based on Formula 1 technology so that "needle-like" high-rise skyscrapers that can still withstand high winds can be built.

Current devices called tuned mass dampers (TMDs) are fitted in the top floors of tall buildings to act like heavyweight pendulums counteracting building movement caused by winds and earthquakes. But they weigh up to 1,000 tons and span five storeys in 100-storey buildings - adding millions to building costs and using up premium space in tight city centres.
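The classic way to size such a pendulum damper is Den Hartog's tuning rules, and the inerter's trick is to supply "apparent mass" without physical steel. The sketch below is a textbook illustration under assumed numbers, not the optimization actually used in the City research:

```python
import math

# Textbook Den Hartog tuning for a tuned mass damper (an illustrative
# sketch, not the City team's optimization): for a damper of mass m_d on a
# structure of modal mass M (mass ratio mu = m_d / M), the classic optima are
#   f_opt    = 1 / (1 + mu)                      (frequency ratio)
#   zeta_opt = sqrt(3 * mu / (8 * (1 + mu)**3))  (damping ratio)
# An inerter of inertance b contributes apparent mass without weight, so the
# same effective mass ratio needs far less physical mass:
#   mu_eff = (m_d + b) / M

def den_hartog(mu):
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

M = 5.0e7          # modal mass of a tall building, kg (assumed)
mu_target = 0.01   # effective mass ratio the absorber should provide

m_conventional = mu_target * M        # all physical mass, conventional TMD
b = 0.7 * m_conventional              # inertance replacing 70% of the mass
m_with_inerter = m_conventional - b   # much lighter physical damper

f, z = den_hartog(mu_target)
print(f"f_opt = {f:.4f}, zeta_opt = {z:.4f}")
print(f"physical damper mass: {m_conventional/1e3:.0f} t -> "
      f"{m_with_inerter/1e3:.0f} t with inerter")
```

The 70% substitution shown here simply mirrors the weight reduction reported in the research; the paper's own designs tune mass, inertance, stiffness and damping jointly.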

Recent research by Dr Agathoklis Giaralis (an expert in structural dynamics at City, University of London) and his colleagues, published in the November 2019 edition of the journal Engineering Structures (Optimal tuned mass damper inerter design in wind-excited tall buildings for occupants' comfort serviceability, preferences and energy harvesting), found that lightweight and compact inerters, similar to those developed for the suspension systems of Formula 1 cars, can reduce the required weight of current TMDs by up to 70%.

Dr Giaralis said: "If we can achieve smaller, lighter TMDs, then we can build taller and thinner buildings without causing seasickness for occupants when it is windy. Such slender structures will require fewer materials and resources, and so will cost less and be more sustainable, while taking up less space and also being aesthetically more pleasing to the eye. In a city like London, where space is at a premium and land is expensive, the only real option is to go up, so this technology can be a game-changer."

Tests have shown that up to 30% less steel is needed in the beams and columns of a typical 20-storey steel building thanks to the new devices. Computer model analyses of an existing London building, the 48-storey Newington Butts in Elephant and Castle, Southwark, showed that "floor acceleration" - the measure of occupants' comfort against seasickness - can be reduced by 30% with the newly proposed technology.

"This reduction in floor acceleration is significant," added Dr Giaralis. "It means the devices are also more effective in ensuring that buildings can withstand high winds and earthquakes. Even moderate winds can cause seasickness or dizziness to occupants and climate change suggests that stronger winds will become more frequent. The inerter-based vibration control technology we are testing is demonstrating that it can significantly reduce this risk with low up-front cost in new, even very slender, buildings and with small structural modifications in existing buildings."

Dr Giaralis said there was a further advantage:

"As well as achieving reduced carbon emissions through requiring fewer materials, we can also harvest energy from wind-induced oscillations - I don't believe that we are able at the moment to have a building that is completely self-sustaining using this technology, but we can definitely harvest enough for powering wireless sensors used for inner building climate control."

Credit: 
City St George’s, University of London

Lighting up cardiovascular problems using nanoparticles

image: Assistant Professor Eun Ji Chung, USC's Dr. Karl Jacob Jr. and Karl Jacob III Early-Career Chair

Image: 
USC Viterbi School of Engineering

Heart disease and stroke are the world's two most deadly diseases, causing over 15 million deaths in 2016 according to the World Health Organization. A key underlying factor in both of these global health crises is the common condition, atherosclerosis, or the build-up of fatty deposits, inflammation and plaque on the walls of blood vessels. By the age of 40, around half of us will have this condition, many without symptoms.

A new nanoparticle innovation from researchers in USC Viterbi's Department of Biomedical Engineering may allow doctors to pinpoint when plaque becomes dangerous by detecting unstable calcifications that can trigger heart attacks and strokes.

The research ­­-- from Ph.D. student Deborah Chin under the supervision of Eun Ji Chung, the Dr. Karl Jacob Jr. and Karl Jacob III Early-Career Chair, in collaboration with Gregory Magee, assistant professor of clinical surgery from Keck School of Medicine of USC -- was published in the Royal Society of Chemistry's Journal of Materials Chemistry B.

When atherosclerosis occurs in coronary arteries, blockages due to plaque or calcification-induced ruptures can lead to a clot, cutting blood flow to the heart, which is the cause of most heart attacks. When the condition occurs in the vessels leading to the brain, it can cause a stroke.

"An artery doesn't need to be 80 percent blocked to be dangerous. An artery with 45% blockage by plaques could be more rupture-prone," Chung said. "Just because it's a big plaque doesn't necessarily mean it's an unstable plaque."

Chung said that when small calcium deposits, called microcalcifications, form within arterial plaques, the plaque can become rupture prone.

However, identifying whether blood vessel calcification is unstable and likely to rupture is particularly difficult using traditional CT and MRI scanning methods, or angiography, which has other risks.

"Angiography requires the use of catheters that are invasive and have inherent risks of tissue damage," said Chin, the lead author. "CT scans on the other hand, involve ionizing radiation which can cause other detrimental effects to tissue."

Chung said that the resolution limitations of traditional imaging offers doctors a "bird's eye view" of larger-sized calcification, which may not necessarily be dangerous. "If the calcification is on the micro scale, it can be harder to pick out," she said.

The research team developed a nanoparticle, known as a micelle, which attaches itself and lights up calcification to make it easier for smaller blockages that are prone to rupture to be seen during imaging.

Chin said the micelles are able to specifically target hydroxyapatite, a unique form of calcium present in arteries and atherosclerotic plaques.

"Our micelle nanoparticles demonstrate minimal toxicity to cells and tissue and are highly specific to hydroxyapatite calcifications," Chin said. "Thus, this minimizes the uncertainty in identifying harmful vascular calcifications."

The team has tested their nanoparticle on calcified cells in a dish, within a mouse model of atherosclerosis, as well as using patient-derived artery samples provided by vascular surgeon, Magee, which shows their applicability not only in small animals but in human tissues.

"In our case, we demonstrated that our nanoparticle binds to calcification in the most commonly used mouse model for atherosclerosis and also works in calcified vascular tissue derived from patients," Chin said.

Chung said that the next step for the team was to harness the micelle particles to be used in targeted drug therapy to treat calcification in arteries, rather than just as means of detecting the potential blockages.

"The idea behind nanoparticles and nanomedicine is that it can be a carrier like the Amazon carrier system, shuttling drugs right to a specific address or location in the body, and not to places that you don't want it to go to," Chung said.

"Hopefully that can allow for lower dosages, but high efficacy at the disease site without hurting normal cells and organ processes," she said.

Credit: 
University of Southern California

Reorganizing a computer chip: Transistors can now both process and store information

image: Researchers have created a more feasible way to combine transistors and memory on a chip, potentially bringing faster computing.

Image: 
Purdue University photo/Vincent Walter

WEST LAFAYETTE, Ind. -- A computer chip processes and stores information using two different devices. If engineers could combine these devices into one or put them next to each other, then there would be more space on a chip, making it faster and more powerful.

Purdue University engineers have developed a way that the millions of tiny switches used to process information - called transistors - could also store that information as one device.

The method, detailed in a paper published in Nature Electronics, accomplishes this by solving another problem: combining a transistor with higher-performing memory technology than is used in most computers, called ferroelectric RAM.

Researchers have been trying for decades to integrate the two, but issues happen at the interface between a ferroelectric material and silicon, the semiconductor material that makes up transistors. Instead, ferroelectric RAM operates as a separate unit on-chip, limiting its potential to make computing much more efficient.

A team led by Peide Ye, the Richard J. and Mary Jo Schwartz Professor of Electrical and Computer Engineering at Purdue, discovered how to overcome the mortal enemy relationship between silicon and a ferroelectric material.

"We used a semiconductor that has ferroelectric properties. This way two materials become one material, and you don't have to worry about the interface issues," Ye said.

The result is a so-called ferroelectric semiconductor field-effect transistor, built in the same way as transistors currently used on computer chips.

The material, alpha indium selenide, not only has ferroelectric properties, but also addresses the issue of a conventional ferroelectric material usually acting as an insulator rather than a semiconductor due to a so-called wide "band gap," which means that electricity cannot pass through and no computing happens.

Alpha indium selenide has a much smaller band gap, making it possible for the material to be a semiconductor without losing ferroelectric properties.

Mengwei Si, a Purdue postdoctoral researcher in electrical and computer engineering, built and tested the transistor, finding that its performance was comparable to existing ferroelectric field-effect transistors, and could exceed them with more optimization. Sumeet Gupta, a Purdue assistant professor of electrical and computer engineering, and Ph.D. candidate Atanu Saha provided modeling support.

Si and Ye's team also worked with researchers at the Georgia Institute of Technology to build alpha indium selenide into a space on a chip, called a ferroelectric tunneling junction, which engineers could use to enhance a chip's capabilities. The team presents this work on Dec. 9 at the 2019 IEEE International Electron Devices Meeting.

In the past, researchers hadn't been able to build a high-performance ferroelectric tunneling junction because the wide band gap of conventional ferroelectric materials made the layer effectively too thick for electrical current to tunnel through. Since alpha indium selenide has a much smaller band gap, a layer just 10 nanometers thick can allow more current to flow through it.
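The thickness argument follows from quantum tunneling: in the simple WKB picture, transmission through a barrier falls off exponentially with both the layer thickness and the square root of the barrier height. The numbers below are generic physics-illustration values, not alpha indium selenide band-structure data:

```python
import math

# WKB order-of-magnitude estimate of tunneling transmission through a
# rectangular barrier:
#   T ~ exp(-2 * kappa * t),  kappa = sqrt(2 * m_e * Phi) / hbar
# where Phi is the barrier height (set by the band gap) and t the layer
# thickness. Illustrative values only -- real devices also involve effective
# masses and band alignments not modeled here.

HBAR = 1.0546e-34   # reduced Planck constant, J*s
M_E = 9.109e-31     # free-electron mass, kg
EV = 1.602e-19      # joules per electronvolt

def transmission(phi_ev, t_m):
    """Rough WKB transmission through a barrier of height phi_ev (eV)."""
    kappa = math.sqrt(2.0 * M_E * phi_ev * EV) / HBAR
    return math.exp(-2.0 * kappa * t_m)

t = 10e-9  # a 10 nm layer
for phi in (0.5, 1.5, 3.0):   # assumed barrier heights, eV
    print(f"Phi = {phi:.1f} eV -> T ~ {transmission(phi, t):.3e}")
```

The absolute numbers from this crude model are far smaller than in real devices, but the trend is the point: lowering the barrier (a smaller band gap) or thinning the layer raises the tunneling current exponentially, which is why a narrow-gap ferroelectric semiconductor makes the junction workable.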

More current allows a device area to scale down to several nanometers, making chips more dense and energy efficient, Ye said. A thinner material - even down to an atomic layer thick - also means that the electrodes on either side of a tunneling junction can be much smaller, which would be useful for building circuits that mimic networks in the human brain.

Credit: 
Purdue University

Researchers identify top ways to stop projected 142% rise in Latino cancer

image: Leaders at the 2018 Advancing the Science in Cancer in Latinos conference are (left to right) Ruben Mesa, M.D., FACP, director of the UT Health San Antonio MD Anderson Cancer Center; Elena Rios, M.D., M.S.P.H., FACP, president and CEO of the National Hispanic Medical Association; Robert Croyle, Ph.D., director of the National Cancer Institute's Division of Cancer Control and Population Sciences; Amelie Ramirez, Dr.P.H., director of the Institute for Health Promotions Research; and Esteban López, M.D., M.B.A., chief medical officer of clinical strategy and innovation at Blue Cross and Blue Shield of Texas.

Image: 
UT Health San Antonio

SAN ANTONIO (Dec. 9, 2019) - As U.S. Latinos face a staggering 142% projected rise in cancer cases by 2030, UT Health San Antonio leaders gathered international cancer experts to publish a new book with innovative research and recommendations to reduce Latino cancer.

The book, Advancing the Science of Cancer in Latinos, published open access by Springer, showcases results of the same-name conference that brought 300 researchers to San Antonio in 2018.

A follow-up conference, set for Feb. 26-28, 2020, in San Antonio, is open for registration.

Included in the new book are promising research findings on Latino cancer and strategies for new research spanning the entire cancer continuum - from risk assessment, prevention, screening, detection, diagnosis and treatment to survivorship and policy.

"Our book, Advancing the Science of Cancer in Latinos, takes an unprecedented look at Latino cancer from many disciplines to encourage the kind of collaboration among diverse professionals that we need to move the field forward," said Amelie Ramirez, Dr.P.H., co-editor of the book. She is professor and chair of the Department of Population Health Sciences and director of the Institute for Health Promotion Research (IHPR) at UT Health San Antonio. The IHPR co-hosted the 2018 conference with the university's Mays Cancer Center, home to UT Health San Antonio MD Anderson Cancer Center.

"We believe the recommendations here can spark dialog and collaboration for new solutions to eliminate cancer health disparities among Latino populations," she said.

The book and conference are a call to action to address Latino cancer health disparities.

Cancer is the leading cause of death in Latinos

Latinos face a higher risk for certain cancers, such as stomach and liver cancer, compared to whites. This stems from cultural barriers to care, low screening rates, underrepresentation in clinical studies and data that fails to reflect the diversity within the U.S. Latino population.

The authors urge researchers, population health clinicians, communities and policymakers to see the Latino population as comprised of many subgroups. For example, a family's country of origin can affect genetics, environment, culture, food preferences and lifestyle.

The book recommends that researchers create studies based on subgroups to provide more meaningful results, as health care moves to a customized approach through precision medicine.

"This research approach is important because Latinos are projected to be one-third of the U.S. population by 2050," Dr. Ramirez said.

The book provides recommendations for action in these areas:

Genetics, environment and lifestyle of Latino subgroups

Latino cancer risk, prevention and screening

Biology of cancer health disparities

Advances in cancer therapy and clinical trials

Latino cancer in the era of precision medicine

Engaging Latinos in cancer research

Emerging policies in U.S. health care

"We hope readers will explore this important research to gain a fresh, comprehensive perspective on Latino cancer health disparities," Dr. Ramirez said. "We anticipate this will inspire critical and strategic thinking about how people can apply this research and practice to their work, leading to more collaboration, research and success in improving the health and lives of U.S. Latinos."

Credit: 
University of Texas Health Science Center at San Antonio