Tech

Study of virus attack rate in Manaus, Brazil, shows outcome of mostly unmitigated epidemic

Researchers studying data from blood donors in Manaus, Brazil, who experienced high mortality from SARS-CoV-2, estimate that more than 70% of the population was infected approximately seven months after the virus first arrived in the city. "Manaus represents a 'sentinel' population, giving us a data-based indication of what may happen if SARS-CoV-2 is allowed to spread largely unmitigated," write Lewis Buss and colleagues.

Brazil has experienced one of the world's most rapidly growing COVID-19 epidemics, with the Amazon being the worst-hit region. In Manaus, the capital and largest metropolis in the Amazon, the first SARS-CoV-2 case was reported in mid-March, after which non-pharmaceutical interventions (NPIs) were introduced. This was followed by an "explosive" epidemic associated with relatively high mortality, and then by a sustained drop in new cases despite relaxation of NPIs. To explore whether the epidemic was contained because infection reached the herd immunity threshold or because of other factors such as behavioral changes and NPIs, Buss and colleagues collected data from blood donors in Manaus - which they used to infer a virus attack rate - and compared it to data from blood donors in São Paulo, which was less severely affected.

Analyzing antibody positivity using SARS-CoV-2 IgG tests, the authors estimate a 76% attack rate in Manaus by October. This estimate includes adjustments for the waning of antibodies over time. By comparison, the attack rate in São Paulo by October was 29%, partly explained by the city's larger population, Buss and colleagues note. The authors say that, despite the tremendous toll the virus took in these two cities (where transmission is continuing today), the attack rates remain lower than predicted for a mixed population with no mitigation strategies. "It is likely that [NPIs] worked in tandem with growing population immunity to contain the epidemic," they note, also acknowledging the role of voluntary behavioral changes. Further studies in the region are "urgently" needed to determine the longevity of population immunity, they say. "Monitoring of new cases ... will also be vital to understand the extent to which population immunity might prevent future transmission, and the potential need for booster vaccinations to bolster protective immunity."
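As a rough, purely illustrative aside (not the study's own method or numbers), the Python sketch below shows the kind of adjustment involved: a raw seroprevalence estimate is inflated to account for donors whose antibodies have waned below the assay's detection threshold, and the result is compared with the textbook herd immunity threshold of 1 - 1/R0. The seroreversion rate, time window and R0 used here are hypothetical.

    # Hypothetical sketch: correct a raw seroprevalence for antibody waning
    # (seroreversion) and compare with a simple herd immunity threshold.
    # All numbers are illustrative and are not taken from the study.

    def adjusted_attack_rate(raw_seroprevalence, monthly_seroreversion, months_elapsed):
        # Assume a constant per-month chance of dropping below the assay threshold,
        # applied over the average time since infection.
        fraction_still_seropositive = (1.0 - monthly_seroreversion) ** months_elapsed
        return raw_seroprevalence / fraction_still_seropositive

    def herd_immunity_threshold(r0):
        # Classical threshold 1 - 1/R0 for a homogeneously mixing population.
        return 1.0 - 1.0 / r0

    # Illustrative inputs: 52% raw positivity, 5% of positives serorevert per month,
    # about four months since infection on average, and R0 = 3.
    print(f"adjusted attack rate: {adjusted_attack_rate(0.52, 0.05, 4):.0%}")
    print(f"herd immunity threshold: {herd_immunity_threshold(3.0):.0%}")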

Credit: 
American Association for the Advancement of Science (AAAS)

Visual short-term memory is more complex than previously assumed

Contrary to previous assumptions, visual short-term memory is not merely based on one kind of information about an object, such as only its colour or only its name. Rather, several types of information can be retained simultaneously in short-term memory. Using complex EEG analyses and deep neural networks, researchers at Beijing Normal University and Ruhr-Universität Bochum have discovered that short-term memory is more complex than previously assumed. The team describes their findings in the journal Proceedings of the National Academy of Sciences, PNAS for short, published online on 7 December 2020.

For the study, Dr. Hui Zhang, Rebekka Heinen and Professor Nikolai Axmacher from the Department of Neuropsychology collaborated with the team headed by Jing Liu and Professor Gui Xue from Beijing Normal University.

The banana in short-term memory

Visual short-term memory helps us remember objects for a short period of time when these objects are no longer visible. Until now, it has been assumed that short-term memory is based on only one type of brain activity. The German-Chinese research team has now disproved this assumption. The researchers recorded brain activity in epilepsy patients using electrodes that were inserted into the brain for the purpose of surgical planning. The patients saw pictures of objects like a banana and had to remember them for a short time.

Deep neural networks help interpret brain activity

Earlier studies by other groups had shown that deep neural networks process images in much the same steps as humans do. If a person or a deep neural network sees a banana, the first step is to process simple characteristics such as its yellow colour and smooth texture. Later on, the processed information becomes more and more complex. Eventually, the human and the network recognise the specific crescent shape and finally identify the banana.

The researchers compared the different processing steps of the neural network with the brain data of the patients. This enabled them to see which activity patterns belong to the processing of simple visual properties like the yellow colour of the banana and which belong to more complex properties like its name.
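For readers curious how such a comparison can be set up, the sketch below shows a generic representational similarity analysis in Python: pairwise dissimilarities between image-evoked patterns are computed separately for a network layer and for the recorded brain responses, and the two dissimilarity structures are then rank-correlated. The arrays are random placeholders; the study's actual data and analysis pipeline are not reproduced here.

    # Generic representational similarity analysis (RSA) sketch with placeholder data.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_images = 20

    # Hypothetical responses: (images x units) for one network layer,
    # (images x electrodes) for the recorded brain activity.
    layer_activations = rng.normal(size=(n_images, 128))
    brain_responses = rng.normal(size=(n_images, 64))

    # Representational dissimilarity: pairwise distances between image patterns.
    rdm_layer = pdist(layer_activations, metric="correlation")
    rdm_brain = pdist(brain_responses, metric="correlation")

    # A higher rank correlation means the layer and the brain organise the images
    # in a more similar way.
    rho, _ = spearmanr(rdm_layer, rdm_brain)
    print(f"layer-brain representational similarity: {rho:.2f}")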

First simple, then complex

Based on this result, the team then showed that objects are not only represented in one form in short-term memory, as previously assumed, but in several forms simultaneously. When looking at them, initially simple properties of the banana are processed, then complex properties are added. During the memorisation phase, simple and complex information is retained together. The visual short-term memory is thus more complex than has long been assumed.

Credit: 
Ruhr-University Bochum

Getting to the bottom of Arctic landslides

image: Slumping of ice-rich permafrost in Central Yakutiya, Siberia.

Image: 
A. Séjourné, GEOPS (CNRS / Université Paris-Saclay)

Erosion of the frozen soil of Arctic regions, known as permafrost, is creating large areas of subsidence, which has a catastrophic impact in these regions sensitive to climate change. As the mechanisms behind these geological events are poorly understood, researchers from the Géosciences Paris Sud (GEOPS) laboratory (CNRS / Université Paris-Saclay), in cooperation with the Melnikov Permafrost Institute in Yakutsk, Russia, conducted a cold-room simulation of landslides, or slumps, caused by accelerated breakdown of the permafrost. The scientists demonstrated that the ice content of permafrost greatly contributes to soil collapse. They noted that very heterogeneous frozen soils, characterized by the presence of vertical ice wedges, undergo major deformation during thaws. At those times, warm air circulates more freely, which furthers slumping. Such erosion during the warming phase, coupled with the input of excess water, accelerates melting and causes subsidence at the base of the ice layer. The rapid breakdown of these ice-rich soils modifies the chemistry of surface water and results in the release of greenhouse gases, which only reinforces the process by accentuating climate change. It is therefore especially useful to study and monitor slumping in order to understand and predict future climate trends. The team's findings are published in Geophysical Research Letters (December 7, 2020).

Credit: 
CNRS

Research develops new theoretical approach to manipulate light

The quest to discover pioneering new ways in which to manipulate how light travels through electromagnetic materials has taken a new, unusual twist.

An innovative research project, carried out by experts from the University of Exeter, has developed a new theoretical approach to force light to travel through electromagnetic materials without any reflection.

The discovery could pave the way for more efficient communications and wireless technology.

The project focused on finding new kinds of electromagnetic materials where light can travel in only one direction, without any reflection, using Maxwell's equations. These four pivotal equations, published in the 1860s by physicist James Clerk Maxwell, describe how electric and magnetic fields move through space and time. These equations underpin much of modern technology, from optical and radio technologies to wireless communication, radar and electric motors.

These new, unusual materials had previously been understood using ideas that won the 2016 Nobel Prize, ideas borrowed from an abstract area of mathematics known as topology, which studies the properties of shapes that stay the same when you squeeze and mould them.

The novelty of this work is that it has found these new electromagnetic materials using only a slight twist on the high-school concept of the refractive index.

This finding may simplify the design of materials where light can propagate in only one direction and might, for instance, be used to improve telecommunications, where information propagates as pulses and is lost when there is reflection.

The study is published in leading journal Nature Physics.

Mitchell Woolley, a co-author who carried out the research while studying Natural Sciences at the University of Exeter, said: "Our paper tests the limits of how light can behave by using Maxwell's equations and electromagnetic theory to engineer exotic optical materials. I think the novelty here was neither using topology nor traditional methods of numerical simulation and optimization to find these materials."

Dr Simon Horsley, lead author of the paper and also from the University of Exeter, added: "There is a lot of interesting physics and mathematics still to be found in understanding how light moves through matter. It's very satisfying that the simple concept of the refractive index can be used in such unusual materials."

Credit: 
University of Exeter

Pollution from cooking remains in atmosphere for longer - study

Particulate emissions from cooking stay in the atmosphere for longer than previously thought, making a prolonged contribution to poor air quality and human health, according to a new study.

Researchers at the University of Birmingham succeeded in demonstrating how cooking emissions - which account for up to 10 per cent of particulate pollution in the UK - are able to survive in the atmosphere over several days, rather than being broken up and dispersed.

The team collaborated with experts at the University of Bath, the Central Laser Facility and Diamond Light Source to show how these fatty acid molecules react with molecules found naturally in the earth's atmosphere. During the reaction process, a coating, or crust, is formed around the outside of the particle, protecting the fatty acid inside from gases such as ozone that would otherwise break up the particles.

This is the first time scientists have been able to recreate the process in a way that allows it to be studied under laboratory conditions: using the powerful X-ray beam at Diamond Light Source, the team followed in minute detail the degradation of thin layers of molecules representative of these cooking emissions. The results are published in the Royal Society of Chemistry's Faraday Discussions.

The ability of these particles to remain in the atmosphere has a number of implications for climate change and human health. Because the molecules are interacting so closely with water, this affects the ability of water droplets to form clouds. In turn this may alter the amount of rainfall, and also the amount of sunlight that is either reflected by cloud cover or absorbed by the earth - all of which could contribute to temperature changes.

In addition, as the cooking emission particles form their protective layer they can also incorporate other pollutant particles, including those known to be harmful to health such as carcinogens from diesel engine emissions. These particles can then be transported over much wider areas.

Lead author Dr Christian Pfrang, of the University of Birmingham's School of Geography, Earth and Environmental Sciences, said: "These emissions, which come particularly from cooking processes such as deep fat frying, make up a significant proportion of air pollution in cities, in particular of small particles that can be inhaled, known as PM2.5 particles. In London it accounts for around 10 per cent of those particles, but in some of the world's megacities, for example in China, it can be as much as 22 per cent, with recent measurements in Hong Kong indicating a proportion of up to 39 per cent.

"The implications of this should be taken into account in city planning, but we should also look at ways we can better regulate the ways air is filtered - particularly in fast food industries where regulations do not currently cover air quality impacts from cooking extractor emissions for example."

Credit: 
University of Birmingham

Army looks to improve quadrotor drone performance

image: The Common Research Configuration Research Quadrotor Biplane autonomously transitions between hover and forward flight to capitalize on the strengths of both flight modes.

Image: 
U.S. Army

ABERDEEN PROVING GROUND, Md. -- When an aircraft veers upwards too much, the decrease in lift and increase in drag may cause the vehicle to suddenly plummet. Known as a stall, this phenomenon has prompted many drone manufacturers to err on the side of extreme caution when they plan their vehicles' autonomous flight movements.

For vertical takeoff and landing (VTOL) tail-sitter drones, most manufacturers program the aircraft so that the vehicle body turns very slowly whenever it transitions from hover to forward flight and vice versa.

The Army Research Laboratory of the U.S. Army Combat Capabilities Development Command, now referred to as DEVCOM, collaborated with researchers at Rensselaer Polytechnic Institute to create a trajectory planner that significantly shortens the time it takes for VTOL tail-sitter drones to make this crucial transition.

The team designed the trajectory planner specifically for the Army's Common Research Configuration platform, a quadrotor biplane tail-sitter used to test new design features and study fundamental aerodynamics.

"The goal of this work was to use a model-based trajectory planner that could capture the quadrotor's dynamic characteristics sufficiently while executing quickly enough to provide trajectories in-flight," said Dr. Jean-Paul Reddinger, Army aerospace engineer at the laboratory's Vehicle Technology Directorate. "We're essentially building in a kinesthetic model of the aircraft's own dynamics that it can reference."

According to Reddinger, VTOL tail-sitters typically rely on a heuristic-based approach whenever they transition between hover and forward flight, following a very slow but very safe predetermined set of actions. In contrast, the trajectory planner can find the optimum sequence of flight movements for each transition, tailored to the situation at hand.
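As a loose illustration of the difference between a fixed heuristic and a model-based search (and emphatically not the Army/RPI planner itself), the toy Python sketch below sweeps candidate pitch-over durations through a crude point-mass model of a tail-sitter and keeps the shortest transition that still reaches a target forward speed. The dynamics, limits and numbers are all invented for illustration.

    # Toy model-based search over transition durations; everything here is invented.
    import numpy as np

    def simulate_transition(duration, dt=0.05, max_accel=6.0):
        # Pitch the nose from 90 deg (hover) to 0 deg (forward flight) over
        # `duration` seconds; forward acceleration grows as the thrust tilts forward.
        t, speed = 0.0, 0.0
        while t < duration:
            pitch = np.deg2rad(90.0 * (1.0 - t / duration))  # linear pitch-over
            speed += max_accel * np.cos(pitch) * dt
            t += dt
        return speed

    target_speed = 15.0  # m/s, hypothetical forward-flight entry speed
    candidates = np.arange(1.0, 8.0, 0.25)
    feasible = [d for d in candidates if simulate_transition(d) >= target_speed]
    print(f"shortest feasible transition in the toy model: {min(feasible):.2f} s")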

Researchers discovered the availability of these more agile maneuvers when they modeled the unique interaction between the wake of the vehicle's rotors and the aerodynamics of its wings.

"If this vehicle is hovering, the wings are pointed upwards and the rotors are spinning above it constantly; if you wanted to start moving it forward, you would be dragging this wing effectively flat against the air," Reddinger said. "You would think that this causes a lot of drag, but in reality, because of the air being blown down onto the wing, it's actually not seeing a whole lot of drag."

As a result of this extra downwash from the rotors, VTOL tail-sitters can handle a more aggressive transition between hover and forward flight than one would have assumed, Reddinger said.

Through simulation, the researchers found that incorporating rotor-on-wing wake interference into the trajectory planner enabled the CRC to transition into hover and land in half the time required by the conventional approach.

The team believes that the trajectory planner may eventually allow the CRC to intelligently switch between hover and forward flight as it navigates across dense or urban areas.

"Right now, it's at a state where you give it the initial state that you want--maybe you have a specific altitude or velocity that you're starting at--and it will plot a path that gets you from that initial state to the desired final state as efficiently as possible," Reddinger said. "The direction we're trying to take this in is to incorporate obstacles and additional kinds of constraints on its maneuverability."

Reddinger compared the autonomous behavior of the CRC to that of humans and how the knowledge of our own capabilities allows us to move efficiently from one location to another.

Similarly, the incorporation of more sophisticated flight models in the trajectory planner will provide the CRC with a better understanding of the complex aerodynamic environment as it moves.

"For instance, if there was a building in the way, does it make more sense to fly over the building or around the building?" Reddinger asked. "Do you want to transition to build up speed and then transition back or do you just stay in hover mode? There's a range of possibilities, and the idea is to always be picking the best one."

Once the trajectory planner undergoes more simulation trials, the researchers plan to hook the software up to hardware models to ensure a high level of robustness before they commence flight tests.

Reddinger believes that a faster, more efficient transition between hover and forward flight will eventually help the Army develop new vehicles for intelligence, surveillance and reconnaissance missions as well as aerial resupply operations.

"In order to capitalize on the flight capabilities of the emergence of novel configurations, we need autonomous pilots that are capable of making the most of the agility and performance that these aircraft are designed to allow," Reddinger said. "This method of model-based trajectory planning is a step in the direction of integrating high level autonomy with platform-specific dynamics."

Credit: 
U.S. Army Research Laboratory

In new step toward quantum tech, scientists synthesize 'bright' quantum bits

With their ability to harness the strange powers of quantum mechanics, qubits are the basis for potentially world-changing technologies--like powerful new types of computers or ultra-precise sensors.

Qubits (short for quantum bits) are often made of the same semiconducting materials as our everyday electronics. But an interdisciplinary team of chemists and physicists at Northwestern University and the University of Chicago has developed a new method to create tailor-made qubits: by chemically synthesizing molecules that encode quantum information into their magnetic, or "spin," states.

This new bottom-up approach could ultimately lead to quantum systems that have extraordinary flexibility and control, helping pave the way for next-generation quantum technology.

"Chemical synthesis enables atomistic control over qubit structure," said Danna Freedman, professor of chemistry at Northwestern's Weinberg College of Arts and Sciences. "Molecular chemistry creates a new paradigm for quantum information science." She led the research along with her colleague David Awschalom at the University of Chicago's Pritzker School of Molecular Engineering.

The results were published in the journal Science in November.

"This is a proof-of-concept of a powerful and scalable quantum technology," said Awschalom, the Liew Family Professor in Molecular Engineering. "We can harness the techniques of molecular design to create new atomic-scale systems for quantum information science. Bringing these two communities together will broaden interest and has the potential to enhance quantum sensing and computation."

Awschalom also is director of Q-NEXT, a Department of Energy National Quantum Information Science Research Center established in August and led by Argonne National Laboratory. Freedman, along with two other Northwestern faculty, is a member of the new center.

Qubits work by harnessing a phenomenon called superposition. While the classical bits used by conventional computers take a value of either 1 or 0, a qubit can be both 1 and 0 at the same time.
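In standard notation (a textbook statement, not something specific to the molecules in this study), a qubit in superposition is written as a weighted combination of the two basis states, with the weights setting the measurement probabilities:

    % Textbook superposition of a single qubit; measuring yields 0 with
    % probability |alpha|^2 and 1 with probability |beta|^2.
    \[
      \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
      \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
    \]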

The team wanted to find a new bottom-up approach to develop molecules whose spin states can be used as qubits and can be readily interfaced with the outside world. To do so, they used organometallic chromium molecules to create a spin state that they could control with light and microwaves.

By exciting the molecules with precisely controlled laser pulses and measuring the light emitted, they could "read" the molecules' spin state after it had been placed in a superposition--a key requirement for using these molecules in quantum technologies.

By varying just a few different atoms on these molecules through synthetic chemistry, they were also able to modify both their optical and magnetic properties, highlighting the promise for tailor-made molecular qubits.

"Over the last few decades, optically addressable spins in semiconductors have been shown to be extremely powerful for applications including quantum-enhanced sensing," Awschalom said. "Translating the physics of these systems into a molecular architecture opens a powerful toolbox of synthetic chemistry to enable novel functionality that we are only just beginning to explore."

"Our results open up a new area of synthetic chemistry," Freedman said. "We demonstrated that synthetic control of symmetry and bonding creates qubits that can be addressed in the same way as defects in semiconductors. Our bottom-up approach enables both functionalization of individual units as 'designer qubits' for target applications and the creation of arrays of readily controllable quantum states, offering the possibility of scalable quantum systems."

One potential application for these molecules could be quantum sensors that are designed to target specific molecules. Such sensors could find specific cells within the body, detect when food spoils or even spot dangerous chemicals.

This bottom-up approach could also help integrate quantum technologies with existing classical technologies.

"Some of the challenges facing quantum technologies might be able to be overcome with this very different bottom-up approach," said Sam Bayliss, a postdoctoral scholar in the Awschalom group and co-first author on the paper. "Using molecular systems in light-emitting diodes was a transformative shift; perhaps something similar could happen with molecular qubits."

Daniel Laorenza, a graduate student in Freedman's lab and co-first author, sees tremendous potential for chemical innovation in this space. "This chemically specific control over the environment around the qubit provides a valuable feature to integrate optically addressable molecular qubits into a wide range of environments," he said.

Credit: 
Northwestern University

A standout superalloy

In recent years, it has become possible to use laser beams and electron beams to "print" engineering objects with complex shapes that could not be achieved by conventional manufacturing. The additive manufacturing (AM) process, or 3D printing, for metallic materials involves melting and fusing fine-scale powder particles -- each about 10 times finer than a grain of beach sand -- in sub-millimeter-scale "pools" created by focusing a laser or electron beam on the material.

"The highly focused beams provide exquisite control, enabling 'tuning' of properties in critical locations of the printed object," said Tresa Pollock, a professor of materials and associate dean of the College of Engineering at UC Santa Barbara. "Unfortunately, many advanced metallic alloys used in extreme heat-intensive and chemically corrosive environments encountered in energy, space and nuclear applications are not compatible with the AM process."

The challenge of discovering new AM-compatible materials was irresistible for Pollock, a world-renowned scientist who conducts research on advanced metallic materials and coatings. "This was interesting," she said, "because a suite of highly compatible alloys could transform the production of metallic materials having high economic value -- i.e. materials that are expensive because their constituents are relatively rare within the earth's crust -- by enabling the manufacture of geometrically complex designs with minimal material waste.

"Most very-high-strength alloys that function in extreme environments cannot be printed, because they crack," continued Pollock, the ALCOA Distinguished Professor of Materials. "They can crack in their liquid state, when an object is still being printed, or in the solid state, after the material is taken out and given some thermal treatments. This has prevented people from employing alloys that we use currently in applications such as aircraft engines to print new designs that could, for example, drastically increase performance or energy efficiency."

Now, in an article in the journal Nature Communications, Pollock, in collaboration with Carpenter Technologies, Oak Ridge National Laboratory, UCSB staff scientists Chris Torbet and Gareth Seward, and UCSB Ph.D. students Sean Murray, Kira Pusch, and Andrew Polonsky, describes a new class of superalloys that overcome this cracking problem and, therefore, hold tremendous promise for advancing the use of AM to produce complex one-off components for use in high-stress, high-performance environments.

The research was supported by a $3 million Vannevar Bush Faculty Fellowship (VBFF) that Pollock was awarded from the U.S. Department of Defense in 2017. The VBFF is the Department of Defense's most prestigious single-investigator award, supporting basic research that could have a transformative impact.

In the paper, the authors describe a new class of high-strength, defect-resistant, 3D-printable superalloys, defined as typically nickel-based alloys that maintain their material integrity at temperatures up to 90% of their melting point. Most alloys fall apart at 50% of their melting temperatures. These new superalloys contain approximately equal parts cobalt (Co) and nickel (Ni), plus smaller amounts of other elements. These materials are amenable to crack-free 3D printing via electron beam melting (EBM) as well as the more challenging laser-powder-bed approaches, making them broadly useful for the plethora of printing machines that are entering the market.

Because of their excellent mechanical properties at elevated temperatures, nickel-based superalloys are the material of choice for structural components such as single-crystal (SX) turbine blades and vanes used in the hot sections of aircraft engines. In one variation of a superalloy that the team developed, Pollock said, "The high percentage of cobalt allowed us to design features into the liquid and solid states of the alloy that make it compatible with a wide range of printing conditions."

The development of the new alloy was facilitated by previous work done as part of NSF-funded projects aligned with the national Materials Genome Initiative, which has the underlying goal of supporting research to address grand challenges confronting society by developing advanced materials "twice as fast at half the cost."

Pollock's NSF work in this area was conducted in collaboration with fellow UCSB materials professors Carlos G. Levi and Anton Van der Ven. Their efforts involved developing and integrating a suite of computational and high-throughput alloy design tools needed to explore the large multicomponent composition space required to discover new alloys. In discussing the new paper, Pollock also acknowledged the important role of the collaborative research environment in the College of Engineering that made this work possible.

Credit: 
University of California - Santa Barbara

Wildfire risk rising as scientists determine which conditions beget blazes

RICHLAND, Wash.--As wildfires burn more often across the Western United States, researchers at the U.S. Department of Energy's Pacific Northwest National Laboratory are working to understand how extensively blazes burn. Their investigation, aided by machine learning techniques that sort fires by the conditions that precede them, not only reveals that the risk of wildfire is rising, but also spells out the role moisture plays in estimating fire risk.

In findings shared virtually at the American Geophysical Union's 2020 fall meeting on Tuesday, Dec. 1, atmospheric scientists Ruby Leung and Xiaodong Chen detailed their study of decades-long wildfire records and new simulations of past climate conditions, which they used to identify variables that lead to wildfires. The two will answer questions virtually on Tuesday, Dec. 8.

Surprisingly, just enough humidity in the air--not enough to lead to precipitation--can boost the likelihood of lightning, which can ignite dry grasslands or water-starved trees. The CZU Lightning Complex fires in Santa Cruz, Calif., for example, were triggered by lightning on Sunday, Aug. 16, 2020, and burned nearly 1,500 structures.

While scientists have long known the importance of such hydro-meteorological conditions, generating enough data to tease out long-term soil moisture or humidity trends, and representing their influence thoroughly, has only recently become possible thanks to computational advances in modeling, according to Leung.

Wildfire by type

The researchers employed machine learning to classify wildfires into "types," producing categories like fires that strike when soil is damp or during cloudy days, and the most quickly rising type--fires that spark on warm, dry, sunny days.

These "compound case" wildfires, named for their multiple contributing factors, strike more frequently than any other. A warming climate, said Leung, is likely to exacerbate the trend.

"Based on the historical trends we see over the past 35 years," said Leung, "it is very likely that trend will continue. That is partly driven by rising temperature and partly driven by reduced soil moisture as snowmelt starts earlier in spring, reducing soil moisture in summer and fall."

This study marks progress toward building a more comprehensive, data-rich picture of the hydroclimatic priming of wildfires. Detailed simulations like the one Leung and Chen incorporated in their study offer a more fine-grained glimpse into how wildfires evolve.

"This allows us to draw a very complete picture of how wildfire is triggered across the whole Western United States," said Chen.

Nearly all types of wildfire, including cloudy day fires, are happening more often. "Wet case" fires, which occur when soil moisture levels are higher, are the exception, and their decline coincides with an overall drying trend in the Western United States. California's wet season window is also narrowing, said Leung, adding another challenge to an already fire-ravaged state.

Capturing wildfire risk in the past, present and future

The team plans to project wildfire risk into 2070, demonstrating how that risk shifts under different climate scenarios, and to investigate the role snowpack and precipitation seasonality play in wildfire. This work was carried out under the DOE's HyperFACETS project. This and similar work will inform many research and applications communities and lead to better prediction and preparations for future wildfire seasons.

One aspect of that new work will focus on a single catastrophic event, for example the 2017 wildfire season in the Western United States, and on tweaking conditions to create analogs of likely future events. Whether in fundamental research in landscape evolution and disturbances, or in land, water and wildfire management and resource planning, said Leung, this approach allows for the generation of an assortment of relevant scenarios with accompanying details.

Credit: 
DOE/Pacific Northwest National Laboratory

Stretchable micro-supercapacitors to self-power wearable devices

image: A team of international researchers, led by Huanyu "Larry" Cheng, Dorothy Quiggle Career Development Professor in Penn State's Department of Engineering Science and Mechanics, has developed a self-powered, stretchable system that will be used in wearable health-monitoring and diagnostic devices.

Image: 
Penn State College of Engineering

A stretchable system that can harvest energy from human breathing and motion for use in wearable health-monitoring devices may be possible, according to an international team of researchers, led by Huanyu "Larry" Cheng, Dorothy Quiggle Career Development Professor in Penn State's Department of Engineering Science and Mechanics.

The research team, with members from Penn State as well as Minjiang University and Nanjing University, both in China, recently published its results in Nano Energy.

According to Cheng, current versions of batteries and supercapacitors powering wearable and stretchable health-monitoring and diagnostic devices have many shortcomings, including low energy density and limited stretchability.

"This is something quite different than what we have worked on before, but it is a vital part of the equation," Cheng said, noting that his research group and collaborators tend to focus on developing the sensors in wearable devices. "While working on gas sensors and other wearable devices, we always need to combine these devices with a battery for powering. Using micro-supercapacitors gives us the ability to self-power the sensor without the need for a battery."

An alternative to batteries, micro-supercapacitors are energy storage devices that can complement or replace lithium-ion batteries in wearable devices. Micro-supercapacitors have a small footprint, high power density, and the ability to charge and discharge quickly. However, according to Cheng, when fabricated for wearable devices, conventional micro-supercapacitors have a "sandwich-like" stacked geometry that displays poor flexibility, long ion diffusion distances and a complex integration process when combined with wearable electronics.

This led Cheng and his team to explore alternative device architectures and integration processes to advance the use of micro-supercapacitors in wearable devices. They found that arranging micro-supercapacitor cells in a serpentine, island-bridge layout allows the configuration to stretch and bend at the bridges, while reducing deformation of the micro-supercapacitors -- the islands. When combined, the structure becomes what the researchers refer to as "micro-supercapacitors arrays."

"By using an island-bridge design when connecting cells, the micro-supercapacitor arrays displayed increased stretchability and allowed for adjustable voltage outputs," Cheng said. "This allows the system to be reversibly stretched up to 100%."

By using non-layered, ultrathin zinc-phosphorus nanosheets and 3D laser-induced graphene foam -- a highly porous, self-heating nanomaterial -- to construct the island-bridge design of the cells, Cheng and his team saw drastic improvements in electric conductivity and the number of absorbed charged ions. This proved that these micro-supercapacitor arrays can charge and discharge efficiently and store the energy needed to power a wearable device.

The researchers also integrated the system with a triboelectric nanogenerator, an emerging technology that converts mechanical movement to electrical energy. This combination created a self-powered system.

"When we have this wireless charging module that's based on the triboelectric nanogenerator, we can harvest energy based on motion, such as bending your elbow or breathing and speaking," Cheng said. "We are able to use these everyday human motions to charge the micro-supercapacitors."

By combining this integrated system with a graphene-based strain sensor, the energy-storing micro-supercapacitor arrays -- charged by the triboelectric nanogenerators -- are able to power the sensor, Cheng said, showing the potential for this system to power wearable, stretchable devices.

Credit: 
Penn State

Breakthrough optical sensor mimics human eye, a key step toward better AI

CORVALLIS, Ore. - Researchers at Oregon State University are making key advances with a new type of optical sensor that more closely mimics the human eye's ability to perceive changes in its visual field.

The sensor is a major breakthrough for fields such as image recognition, robotics and artificial intelligence. Findings by OSU College of Engineering researcher John Labram and graduate student Cinthya Trujillo Herrera were published today in Applied Physics Letters.

Previous attempts to build a human-eye type of device, called a retinomorphic sensor, have relied on software or complex hardware, said Labram, assistant professor of electrical engineering and computer science. But the new sensor's operation is part of its fundamental design, using ultrathin layers of perovskite semiconductors - widely studied in recent years for their solar energy potential - that change from strong electrical insulators to strong conductors when placed in light.

"You can think of it as a single pixel doing something that would currently require a microprocessor," said Labram, who is leading the research effort with support from the National Science Foundation.

The new sensor could be a perfect match for the neuromorphic computers that will power the next generation of artificial intelligence in applications like self-driving cars, robotics and advanced image recognition, Labram said. Unlike traditional computers, which process information sequentially as a series of instructions, neuromorphic computers are designed to emulate the human brain's massively parallel networks.

"People have tried to replicate this in hardware and have been reasonably successful," Labram said. "However, even though the algorithms and architecture designed to process information are becoming more and more like a human brain, the information these systems receive is still decidedly designed for traditional computers."

In other words: To reach its full potential, a computer that "thinks" more like a human brain needs an image sensor that "sees" more like a human eye.

A spectacularly complex organ, the eye contains around 100 million photoreceptors. However, the optic nerve only has 1 million connections to the brain. This means that a significant amount of preprocessing and dynamic compression must take place in the retina before the image can be transmitted.

As it turns out, our sense of vision is particularly well adapted to detect moving objects and is comparatively "less interested" in static images, Labram said. Thus, our optical circuitry gives priority to signals from photoreceptors detecting a change in light intensity - you can demonstrate this yourself by staring at a fixed point until objects in your peripheral vision start to disappear, a phenomenon known as the Troxler effect.

Conventional sensing technologies, like the chips found in digital cameras and smartphones, are better suited to sequential processing, Labram said. Images are scanned across a two-dimensional array of sensors, pixel by pixel, at a set frequency. Each sensor generates a signal with an amplitude that varies directly with the intensity of the light it receives, meaning a static image will result in a more or less constant output voltage from the sensor.

By contrast, the retinomorphic sensor stays relatively quiet under static conditions. It registers a short, sharp signal when it senses a change in illumination, then quickly reverts to its baseline state. This behavior is owed to the unique photoelectric properties of a class of semiconductors known as perovskites, which have shown great promise as next-generation, low-cost solar cell materials.

In Labram's retinomorphic sensor, the perovskite is applied in ultrathin layers, just a few hundred nanometers thick, and functions essentially as a capacitor that varies its capacitance under illumination. A capacitor stores energy in an electrical field.

"The way we test it is, basically, we leave it in the dark for a second, then we turn the lights on and just leave them on," he said. "As soon as the light goes on, you get this big voltage spike, then the voltage quickly decays, even though the intensity of the light is constant. And that's what we want."

Although Labram's lab currently can test only one sensor at a time, his team measured a number of devices and developed a numerical model to replicate their behavior, arriving at what Labram deems "a good match" between theory and experiment.

This enabled the team to simulate an array of retinomorphic sensors to predict how a retinomorphic video camera would respond to input stimulus.

"We can convert video to a set of light intensities and then put that into our simulation," Labram said. "Regions where a higher-voltage output is predicted from the sensor light up, while the lower-voltage regions remain dark. If the camera is relatively static, you can clearly see all the things that are moving respond strongly. This stays reasonably true to the paradigm of optical sensing in mammals."

A simulation using footage of a baseball practice demonstrates the expected results: Players in the infield show up as clearly visible, bright moving objects. Relatively static objects -- the baseball diamond, the bleachers, even the outfielders -- fade into darkness.

An even more striking simulation shows a bird flying into view, then all but disappearing as it stops at an invisible bird feeder. The bird reappears as it takes off. The feeder, set swaying, becomes visible only as it starts to move.

"The good thing is that, with this simulation, we can input any video into one of these arrays and process that information in essentially the same way the human eye would," Labram said. "For example, you can imagine these sensors being used by a robot tracking the motion of objects. Anything static in its field of view would not elicit a response, however a moving object would be registering a high voltage. This would tell the robot immediately where the object was, without any complex image processing."

Credit: 
Oregon State University

Researchers share database for studying individual differences in language skills

image: Researchers share database for studying individual differences in language skills

Image: 
Depositphoto

Although most people learn to speak their mother tongue fluently, native speakers differ in their ability to use language. Adult language users not only differ in the number of words they know, they also differ in how quickly they produce and understand words and sentences. How do individuals differ across language tasks? Are individual differences in language ability related to general cognitive abilities?

Such questions can only be answered by testing large numbers of individuals on a large number of language and cognitive tests. Lead author Florian Hintz and his team designed such a test battery, with the aim of using it in a larger study. In the larger 'IndividuLa' study (funded by the Language in Interaction consortium), the team will be combining test performance data with DNA from a thousand participants. In addition, the brains of about 300 of the 1000 participants will be scanned. However, the authors first needed to pilot the test battery with a smaller number of participants.

"Previous individual-differences studies have often focused on a limited set of skills", says Hintz. "The present dataset goes one step further and provides a comprehensive overview of language users' linguistic and non-linguistic skills, with multiple tests per skill".

The researchers invited 112 participants with ages ranging from 18 to 29 and mixed educational backgrounds to the lab in Nijmegen. Participants completed the battery of 33 tests twice, to establish the reliability of the new measurements, with one month in between test sessions. Testing took about eight hours per participant.
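Test-retest reliability of this kind is typically quantified by correlating each participant's score in the first session with their score in the second, a month later. The minimal Python sketch below does this on simulated scores, not the study's data.

    # Test-retest reliability on simulated scores for one hypothetical test.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(2)
    n_participants = 112

    true_ability = rng.normal(size=n_participants)
    session_1 = true_ability + rng.normal(scale=0.4, size=n_participants)
    session_2 = true_ability + rng.normal(scale=0.4, size=n_participants)

    r, _ = pearsonr(session_1, session_2)
    print(f"test-retest reliability (Pearson r): {r:.2f}")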

The battery included three types of tests: (1) tests of linguistic experience such as vocabulary size, (2) tests of general cognitive skills such as processing speed or working memory capacity, and (3) tests of linguistic processing skills, measuring production and comprehension of words and sentences. Apart from well-known standardised tests (such as Raven's matrices), the battery included newly developed tests (such as a test on idiomatic expressions and a test on normative rules of Dutch grammar).

The majority of the tests proved to be reliable and suitable for the IndividuLa main study, which is currently ongoing. The team is still recruiting participants for the main study, so native Dutch speakers (between 18 and 30) are invited to take part.

The authors decided to share the data from their pilot study, which is freely available at the UK Data Service data archive (UKDA). The team encourages other researchers to use the database for new analyses. "Individual-differences studies are rarely conducted, as these studies are time-consuming and expensive", says Hintz. "Especially in the current situation, where in-person testing isn't always possible, this resource may provide alternative routes for conducting research".

"The database is valuable to any researcher, clinician, or teacher interested in investigating the relationships between linguistic skills, non-linguistic skills and linguistic knowledge", Hintz concludes. "Ultimately, using this dataset, one may take a first step towards addressing the question 'What makes someone a good language user'?"

Credit: 
Max Planck Institute for Psycholinguistics

Silver linings: Adding silver to the nanoclusters can do wonders for their luminescence

image: Crystals of the silver doped complex showed bright red photoluminescence under UV light, whereas crystals of the undoped structure did not emit any light. This pointed to the role of silver in modifying the structure of the complex to cause the photoluminescence.

Image: 
Takane Imaoka, Kimihisa Yamamoto of Tokyo Institute of Technology

Scientists at Tokyo Institute of Technology have discovered that a silver-doped platinum thiolate nanometal complex shows 18-fold greater photoluminescence than the original platinum complex. In their recent paper, they provide insights into the causes of this, crowning a new approach to creating efficient non-toxic and biocompatible compounds for bioimaging.

Most of us have encountered luminescence in one form or another, be it fireflies in the night or planktons in the ocean, or even a glow stick at the fair. While a wonderful phenomenon in itself, luminescence has a greater appeal to scientists for more specific reasons, such as its ability to make light-sensitive biological samples glow in the dark under the microscope.

Recently, metal nanoclusters--very small particles in the size range of a few nanometers--have garnered a fair bit of attention from biochemists as promising photoluminescent materials for bioimaging, given their convenient size for permeability into various organs, their non-toxicity, and their biocompatibility, in contrast with existing organic dyes or semiconductor nanoparticles. There is, however, a fundamental issue preventing their widespread use: the photoluminescence is extremely low and short-lived.

A team of scientists from Tokyo Institute of Technology (Tokyo Tech), Japan, think this could be because the mechanisms underlying the photoluminescent behavior of these particles are still poorly understood. In their latest paper published in Angewandte Chemie, the team, led by Prof. Takane Imaoka, report that doping a platinum thiolate complex with silver increases its photoluminescence 18-fold. They also dig into why, by getting down to the atoms in a silver-doped platinum thiolate complex.

Their X-ray crystallographic observation of the structure showed that the silver ion sits at the center of a tiara-shaped platinum complex ring. Further observation revealed that the photoluminescence under UV irradiation is high when this structure is in its crystal form (Figure 1) or when its solution in an organic solvent is ultra-cooled to 77 K, or -196.15 °C. Prof. Imaoka lays out the questions these observations raised: "One reason for these photoluminescence increases is that the thermal movement of the components of the ring portion is suppressed under these conditions. But what role does the structure play, and do frontier molecular orbitals have anything to do with this increase?"

To find out, the team conducted density functional theory calculations. These calculations gave them an idea of the structures of the complex based on the energy states and geometry of molecular orbitals--the range of electron movement within the structure. They found that when energized, such as with UV irradiation, the structure is kept stable by the silver ion, leading to good photoluminescence; this is unlike the ring structure alone which becomes highly distorted upon excitation (Figure 2). "This could be because the size of the silver ion and the cavity of the platinum thiolate ring are a good match and the orbitals are in good alignment," Prof. Imaoka explains. "Any distortion would cause an energetically unfavorable repulsion. The silver ion acts as a template to maintain the highly ordered structure of the tiara-like complex, thereby enormously enhancing its phosphorescence."

The scientists also performed photophysical studies that yielded promising results. The silver doped structure underwent much less non-radiative decay than the non-doped structure.

These findings corroborate those of another study, on a rod-shaped, silver-ion-doped gold complex. "If there is a discernible correlation between this study and previous such studies, then the silver ion's ability to stabilize the lower-energy unoccupied molecular orbitals in these structures could be the new key to designing photoluminescent metal nanoclusters. The details of frontier molecular orbitals that are unique to each cluster could be useful in predicting ideal structures of metallic clusters, and perhaps, shining a light on the path to developing novel and efficient ones in the future," Prof. Imaoka comments, excited about his work.

After all, who wouldn't be if one atom is all it takes to make a difference?

Credit: 
Tokyo Institute of Technology

Tension between awareness and fatigue shapes Covid-19 spread

image: In the midst of the coronavirus pandemic, two human factors are battling it out: awareness of the virus's severe consequences and fatigue from nine months of pandemic precautions. The results of that battle can be seen in the oddly shaped case, hospitalization, and fatality-count graphs, a new study suggests.

Image: 
Georgia Tech

In the midst of the coronavirus pandemic, two human factors are battling it out: awareness of the virus's severe consequences and fatigue from nine months of pandemic precautions. The results of that battle can be seen in the oddly shaped case, hospitalization, and fatality-count graphs, a new study suggests.

The tension between awareness and fatigue can lead to case-count plateaus, shoulder-like dynamics, and oscillations as rising numbers of deaths cause people to become more cautious before they let down their guard to engage once again in behaviors that increase risk for transmission, which, in turn, leads to rising death counts -- and renewed awareness.

"Epidemics don't necessarily have a single peak after which the risk subsides," said Joshua Weitz, Patton Distinguished Professor of Biological Sciences and founding director of the Interdisciplinary Ph.D. in Quantitative Biosciences at the Georgia Institute of Technology. "People's behaviors are both influenced by and influence epidemic dynamics, potentially driving plateaus, and oscillations in incidence."

A paper describing the connection between human behavior and viral spread was published this month in the journal Proceedings of the National Academy of Sciences. It was authored by researchers at Georgia Tech, McMaster University, Princeton University, and Texas A&M.

In the early days of the pandemic, many scientists turned to traditional epidemiological studies, which showed epidemic cases could rise to a peak and then fall smoothly as immunity to the infection reached high levels in a population in the absence of large-scale interventions. Public health messages urged the population to "flatten the curve" to prevent disease from overwhelming hospitals.

"We were concerned that a focus on 'the peak' was potentially misguided because it implied that the shape was a feature of the disease alone without considering the consequence of behavior," Weitz said. "In reality, there does not have to be a single peak during an epidemic."

"If people are aware of the severity of the epidemic, they may change their behavior, and if they change their behavior, there will be fewer severe outcomes," Weitz said. "But if awareness is short-term, individuals may tire of public health regulations and the virus will come roaring back. Instead of a single peak in cases, there can be plateaus or oscillations balanced between cautious behavior and relaxation."

The research team analyzed data from the early phase of the epidemic and found evidence that the decrease in fatalities after a peak was slower than the rise toward it. However, in contrast to simple models of awareness-driven behavior, the research team also found evidence that individuals tended to increase their activity -- as measured by mobility indicators -- before epidemic severity waned. This means that individuals may have grown fatigued, worsening the epidemic severity. The study also found that other preventive measures, like mask wearing, have the potential to avert worst-case outcomes in disease transmission even as mobility increases in light of fatigue.

"This study underlines the importance of human behavior in driving epidemic outcomes," said Jonathan Dushoff from the Department of Biology at McMaster University. "To make good predictions beyond the short term, we need to understand all of the factors driving human responses to the virus -- fear, fatigue, information, misinformation, and so forth. We have a long way to go."

Weitz and Dushoff share both optimism and concern about how anticipation of imminent vaccine distribution could affect behaviors associated with transmission.

"It's hard to be sure what impacts vaccine distribution will have on behavior," Dushoff said. "There is concern in public health circles that people who think the vaccine is just around the corner could relax their guard. Human behavior is complicated."

Lessons for future public health responses include focusing on the role of human behavior, as well as on communications that make disease impacts personal, fostering long-term awareness and behavior changes that can reduce collective transmission.

Credit: 
Georgia Institute of Technology

New study allows regional prediction of uranium in groundwater

image: Searsville Lake, a water reservoir in northern California, was completely dry in November 2020. New research identifies the trigger that causes naturally occurring uranium to dislodge from sediments and seep into groundwater, important information as water managers plan for a future with more people and less water in a warming world.

Image: 
Image Nona Chiariello / Jasper Ridge Biological Preserve

Lurking in sediments and surrounding the precious groundwater beneath our feet is a dangerous toxin: uranium. Scientists have long known this and tested for it. But now Stanford researchers have identified the trigger that causes naturally occurring uranium to dislodge from sediments and seep into groundwater, pointing to a solution for managing the toxin before it becomes a problem.

In a new regional model that combines aquifer information with soil properties for predicting groundwater quality, the researchers pinpointed the factors associated with uranium contamination. The research, published in Environmental Science & Technology Dec. 8, indicates that calcium concentrations and soil alkalinity are key determining factors of uranium groundwater contamination in California's Central Valley. The findings will be especially important as water managers plan for a future with more people and less water available from snowpack in a warming world.

Uranium is among the top three harmful, naturally occurring groundwater contaminants in the Central Valley, along with arsenic and chromium. The radioactive, metallic element becomes dangerous when consumed in high quantities, causing kidney damage and increased risk of cancer. It is prevalent within the Central Valley's San Joaquin Valley, and also occurs naturally in semi-arid and arid environments worldwide.

Researchers focused on locations in the Central Valley aquifers where groundwater uranium concentrations have been observed to exceed the drinking water standard of 30 micrograms of uranium per liter.

"Every aquifer has one or more of these natural contaminants. The question is whether they sit benignly in the sediments or really cause problems by getting into the groundwater," said co-author Scott Fendorf, the Huffington Family Professor in Earth system science at the School of Earth, Energy & Environmental Sciences (Stanford Earth). "Water managers can use our findings to forecast solutions before the problems are manifested."

The study focuses on the chemical impacts of groundwater recharge, which is the process of rainfall seeping into soils and moving down into underlying aquifers. As rainwater seeps downward, its chemistry changes as it interacts with the ground environment. Pumping the water back out also influences the dynamics of the aquifer, which can change the chemistry of the system and how elements such as uranium are partitioned between the solids (sediments) and water. If the water picks up more calcium during its travels and also becomes more alkaline, it can attract uranium and contaminate aquifers, the researchers found.

"Our work shows that it's not just properties of the aquifer that are impacting uranium, but factors such as clay content and pH of the soil that served as important predictors of groundwater uranium concentrations," said lead study author Alandra Lopez, a PhD student in Earth system science. "It highlights the importance of including data about soil properties when generating aquifer vulnerability maps for a naturally occurring contaminant like uranium."

The good news: the researchers estimate that the conditions that loosen uranium from sediments into groundwater arise mainly within the top six feet of the soil, suggesting an easy fix could involve bypassing that zone.

"If you're going to manage aquifer recharge, which will be increasingly needed with climate change, be careful about having the water infiltrate through the soil where calcium and alkalinity are often highest. These management scenarios are being considered right now," said Fendorf, who is also a senior fellow at the Stanford Woods Institute for the Environment.

The team says their methodology offers water managers an easy way to predict major influences on groundwater uranium concentrations at scale.

"We're trying to tell everybody that you need to think about this ahead of time, because that's when you can manage around the problem," Fendorf said. "It's a kind of forward prediction versus hindsight reaction - once you measure uranium in the water, your problem is already at hand and it's much more expensive to fix."

Credit: 
Stanford University