Tech

As wildfires flare up across West, research highlights risk of ecological change

image: Professor Jonathan Coop said he's seen an incredible amount of forest lost in the Jemez Mountains where he grew up.

Image: 
Jonathan Coop, Western Colorado University

One of Jonathan Coop's first vivid memories as a child was watching the flames of the 1977 La Mesa Fire in north-central New Mexico. The human-caused fire burned more than 15,000 acres of pine forests in the Bandelier National Monument and areas surrounding the Los Alamos National Laboratory.

Now a forest ecologist and professor at Western Colorado University, Coop studies the ecological effects of fire on forests in the Southwest United States. He's also the lead author of a new scientific synthesis about how wildfires drive changes in forest vegetation across the United States. Sean Parks -- research ecologist with the USDA Forest Service, Rocky Mountain Research Station -- and Camille Stevens-Rumann, assistant professor in the Department of Forest and Rangeland Stewardship at Colorado State University, are co-authors of the synthesis.

"Wildfire-driven forest conversion in Western North American landscapes," was published July 1 in BioScience.

The new paper, with contributions from more than 20 researchers, uncovers common themes that scientists are reporting, including increasing impacts of wildfires amid climate change from the borderlands of Mexico and Arizona to the boreal forests of Canada.

Following high-severity fire, scientists have found forest recovery may increasingly be compromised by lack of tree seed sources, warmer and drier post-fire climate and more frequent reburning.

"In an era of climate change and increasing wildfire activity, we really can't count on forests to come back the way they were before the fire," said Coop. "Under normal circumstances, forest systems have built-in resilience to disturbance - they can take a hit and bounce back. But circumstances aren't normal anymore."

The loss of resilience means that fire can catalyze major, lasting changes. As examples, boreal conifer forests can be converted to deciduous species, and ponderosa pine forests in the southwest may give way to oak scrub. These changes, in turn, lead to consequences for wildlife, watersheds and local economies.

'Assisted migration' an option in some cases, places

Researchers said that in places where the most apparent vegetation changes are occurring, such as the Southwest U.S. and in Colorado, land managers are already exploring ways to help forests adapt by planting tree species that are better suited to the emerging climatic conditions following severe fire.

"In places where changes are not quite so visible, including Montana and Idaho, those conversations are still happening," said Stevens-Rumann. "In these large landscapes where trees are not coming back, you have to start getting creative."

Parks, who often uses data collected in protected areas to study wildfire patterns, causes and consequences, said some fires can be good, creating openings for wildlife, helping forests rejuvenate and reducing fuel loads.

"However, some fires can result in major changes to the types of vegetation," he said, adding that this is particularly true for high-severity wildfires when combined with the changing climate. "Giving managers information about where and how climate change and wildfires are most likely to affect forest resilience will help them develop adaptation strategies to maintain healthy ecosystems."

Stevens-Rumann said that land managers have largely continued to operate in the way they've done in the past, replacing fire-killed trees with the same species. "Given the effects of climate change, we need to start being much more creative," she said. "Let's try something different and come up with solutions that allow natural processes to happen and interact with landscapes in different ways."

Coop said that ecologists and managers are beginning to develop a suite of approaches to increase forest resilience in an era of accelerating change.

One approach that he said he's partial to is allowing fires to burn under benign or moderate fire weather conditions - similar to what happens in a prescribed burn - which results in forests that are less prone to high-severity fire because of reduced fuel loads and patchy landscapes. This is also known as managing wildfire for resource objectives, an approach that researchers said is cost-efficient, allowing managers to treat more acres.

"Increasingly, we're realizing you either have the fires you want and can influence or you're stuck with these giant fires where, like hurricanes, there's no shaping their path," said Coop.

Loss of forests is personal

For many of the researchers involved in this synthesis, the issues being analyzed are personal.

Before becoming a scientist, Stevens-Rumann spent three years on a USDA Forest Service "Hotshot" crew, specializing in fighting fires in hard-to-access and dangerous terrain. Parks grew up in Colorado and California and acknowledges seeing changes in the forests and landscapes he grew up with.

Coop said he's seen an incredible amount of forest lost in the Jemez Mountains where he grew up. The La Mesa fire was only the first in a series of increasingly large and severe fires, culminating with the 140,000-acre Las Conchas fire in 2011. Within the footprint of Las Conchas, less than a quarter of the landscape is still forested.

"Seeing these things unfold over my lifetime, I don't know if I ever really could have imagined it," he said. "I've borne witness to these very dramatic changes unfolding in the one place that I really know best on Earth."

Credit: 
Colorado State University

Optoelectronic parametric oscillator

image: a, Schematic of a typical OPO and the proposed OEPO. b, Energy transitions in a typical OPO and in the proposed OEPO. Oscillation in the OPO is based on optical parametric amplification. The energy flows from the pump to the signal and idler through an optical nonlinear medium. There is no phase jump for the oscillating signals in the optical nonlinear medium. In the proposed OEPO, oscillation is based on electrical parametric frequency conversion. A pair of oscillations is converted into each other in the electrical nonlinear medium by the local oscillator (LO). There is a phase jump for the oscillating signals in the nonlinear medium, which leads to the unique mode properties of the proposed OEPO. c, Cavity modes of the OPO and the proposed OEPO. The cavity modes of the OPO are discrete, and the minimum mode spacing is the cavity free spectral range (FSR), which is 2π/τ, where τ is the cavity delay. Due to the phase jump in the parametric frequency conversion process, the cavity modes of the OEPO can be continuously tuned by tuning the LO. The minimum mode spacing is π/τ. PD: photodetector; LNA: low noise amplifier; BPF: bandpass filter.

Image: 
by Tengfei Hao, Qizhuang Cen, Shanhong Guan, Wei Li, Yitang Dai, Ninghua Zhu, and Ming Li

Parametric oscillators are driven harmonic oscillators based on a nonlinear process in a resonant cavity, and they are widely used in many areas of physics. Parametric oscillators have previously been realized in the purely optical and purely electrical domains, i.e., as the optical parametric oscillator (OPO) and the varactor-diode-based electrical parametric oscillator, respectively. The OPO in particular has been widely investigated in recent years because it greatly extends the operating frequency of lasers by exploiting second-order or third-order nonlinearity, whereas the operating frequency range of an ordinary laser is limited by the available atomic energy levels. On the other hand, oscillation in both the OPO and the electrical parametric oscillator is a delay-controlled operation: a steady oscillation is constrained by the cavity delay, since the signal must repeat itself after each round trip. This makes frequency tuning difficult, and the tuning is discrete, with a minimum step determined by the cavity delay.
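
As a rough illustration of that delay-controlled constraint (a standard textbook relation rather than a result from the paper), the steady-state round-trip phase condition fixes the allowed frequencies and their spacing:

```latex
% Round-trip condition of a conventional, delay-controlled oscillator
\omega \tau = 2\pi k,\quad k\in\mathbb{Z}
\;\;\Rightarrow\;\;
\omega_k = \frac{2\pi k}{\tau},\qquad
\Delta\omega_{\min} = \frac{2\pi}{\tau}\ \ \text{(the cavity FSR of the figure caption)}.
```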

In a new paper published in Light: Science & Applications, a team of scientists led by Professor Ming Li from the State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, China, together with co-workers, has developed a new parametric oscillator in the optoelectronic domain: an optoelectronic parametric oscillator (OEPO) based on second-order nonlinearity in an optoelectronic cavity. Thanks to the unique energy-transition process in the optoelectronic cavity, oscillation in the OEPO is a phase-controlled operation, leading to steady single-mode or multi-mode oscillation that is not bound by the cavity delay. Furthermore, multi-mode oscillation in the OEPO is stable and easy to realize thanks to the phase control of the parametric frequency-conversion process, whereas stable multi-mode oscillation is difficult to achieve in conventional oscillators such as an optoelectronic oscillator (OEO) or an OPO because of mode hopping and mode competition. The proposed OEPO has great potential in applications such as microwave signal generation, oscillator-based computation, and radio-frequency phase-stable transfer.

In the proposed OEPO, a pair of oscillation modes is converted into each other in the nonlinear medium by a local oscillator. The sum phase of each mode pair is locked by the local oscillator, which ensures stable multimode oscillation that is difficult to realize in conventional oscillators. Moreover, owing to the unique energy transition process in the optoelectronic cavity, the frequency of the single-mode oscillation can be independent of the cavity delay, so continuous frequency tuning is achieved without modifying the cavity delay. The scientists summarize the difference between the OEPO and traditional oscillators:

"In conventional oscillators, cavity modes are integer multiples of the fundamental one and are restricted by the cavity delay. The phase evolution of the cavity modes is linear or quasi-linear. In our proposed OEPO, the parametric frequency conversion provides a phase jump for the cavity modes. The phase evolution is not linear owing to this phase jump. We can call such oscillation phase-controlled operation, while the conventional oscillation is delay-controlled operation. As a result, the cavity modes of OEPO are not restricted by the cavity delay."

"Owing to the unique phase-controlled oscillation in the proposed OEPO, stable and tuneable multimode oscillation is easy to realize. The OEPO can therefore be applied in scenarios requiring stable, wideband and complex microwave waveforms. By increasing the cavity length of the OEPO, the mode spacing of the optoelectronic cavity can be as small as several kHz, which would allow many densely distributed modes to oscillate in the cavity simultaneously. Oscillator-based computation can benefit from such a large mode number. Furthermore, when the OEPO operates in single-mode, the parametric frequency conversion actually works as a phase conjugate operation. Therefore, the OEPO can be used for phase-stable RF transfer since the phase error resulting from cavity turbulence can be auto-aligned." the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Wearable-tech glove translates sign language into speech in real time

video: Short video of a wearable glove to translate sign language.

Image: 
Jun Chen Lab/UCLA

UCLA bioengineers have designed a glove-like device that can translate American Sign Language into English speech in real time through a smartphone app. Their research is published in the journal Nature Electronics.

"Our hope is that this opens up an easy way for people who use sign language to communicate directly with non-signers without needing someone else to translate for them," said Jun Chen, an assistant professor of bioengineering at the UCLA Samueli School of Engineering and the principal investigator on the research. "In addition, we hope it can help more people learn sign language themselves."

The system includes a pair of gloves with thin, stretchable sensors that run the length of each of the five fingers. These sensors, made from electrically conducting yarns, pick up hand motions and finger placements that stand for individual letters, numbers, words and phrases.

The device then turns the finger movements into electrical signals, which are sent to a dollar-coin-sized circuit board worn on the wrist. The board transmits those signals wirelessly to a smartphone that translates them into spoken words at a rate of about one word per second.

The researchers also added adhesive sensors to testers' faces -- in between their eyebrows and on one side of their mouths -- to capture facial expressions that are a part of American Sign Language.

Previous wearable systems that offered translation from American Sign Language were limited by bulky and heavy device designs or were uncomfortable to wear, Chen said.

The device developed by the UCLA team is made from lightweight and inexpensive but long-lasting, stretchable polymers. The electronic sensors are also very flexible and inexpensive.

In testing the device, the researchers worked with four people who are deaf and use American Sign Language. The wearers repeated each hand gesture 15 times. A custom machine-learning algorithm turned these gestures into the letters, numbers and words they represented. The system recognized 660 signs, including each letter of the alphabet and numbers 0 through 9.
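
For readers curious how repeated gesture recordings can be mapped to labels, here is a minimal nearest-centroid sketch on synthetic data (an illustration only, not the UCLA team's custom algorithm; the five-feature layout and the sign names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each gesture sample is a feature vector from the five
# finger sensors (e.g., mean stretch per finger). A real system would use
# richer time-series features; this toy keeps one number per finger.
signs = ["A", "B", "hello"]                    # stand-ins for a few of the 660 signs
true_centers = rng.normal(size=(len(signs), 5))

def synth_samples(center, n=15, noise=0.1):
    """Simulate n repetitions of one gesture (the study used 15 per sign)."""
    return center + noise * rng.normal(size=(n, center.size))

# "Training": average the repetitions of each sign into a template.
templates = {s: synth_samples(c).mean(axis=0) for s, c in zip(signs, true_centers)}

def classify(sample):
    """Assign a new sensor reading to the closest gesture template."""
    return min(templates, key=lambda s: np.linalg.norm(sample - templates[s]))

new_reading = true_centers[2] + 0.1 * rng.normal(size=5)
print(classify(new_reading))                   # most likely prints "hello"
```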

Credit: 
University of California - Los Angeles

Engineers use 'DNA origami' to identify vaccine design rules

CAMBRIDGE, MA -- By folding DNA into a virus-like structure, MIT researchers have designed HIV-like particles that provoke a strong immune response from human immune cells grown in a lab dish. Such particles might eventually be used as an HIV vaccine.

The DNA particles, which closely mimic the size and shape of viruses, are coated with HIV proteins, or antigens, arranged in precise patterns designed to provoke a strong immune response. The researchers are now working on adapting this approach to develop a potential vaccine for SARS-CoV-2, and they anticipate it could work for a wide variety of viral diseases.

"The rough design rules that are starting to come out of this work should be generically applicable across disease antigens and diseases," says Darrell Irvine, who is the Underwood-Prescott Professor with appointments in the departments of Biological Engineering and Materials Science and Engineering; an associate director of MIT's Koch Institute for Integrative Cancer Research; and a member of the Ragon Institute of MGH, MIT, and Harvard.

Irvine and Mark Bathe, an MIT professor of biological engineering and an associate member of the Broad Institute of MIT and Harvard, are the senior authors of the study, which appears today in Nature Nanotechnology. The paper's lead authors are former MIT postdocs Rémi Veneziano and Tyson Moyer.

DNA design

Because DNA molecules are highly programmable, scientists have been working since the 1980s on methods to design DNA molecules that could be used for drug delivery and many other applications, most recently using a technique called DNA origami that was invented in 2006 by Paul Rothemund of Caltech.

In 2016, Bathe's lab developed an algorithm that can automatically design and build arbitrary three-dimensional virus-like shapes using DNA origami. This method offers precise control over the structure of synthetic DNA, allowing researchers to attach a variety of molecules, such as viral antigens, at specific locations.

"The DNA structure is like a pegboard where the antigens can be attached at any position," Bathe says. "These virus-like particles have now enabled us to reveal fundamental molecular principles of immune cell recognition for the first time."

Natural viruses are nanoparticles with antigens arrayed on the particle surface, and it is thought that the immune system (especially B cells) has evolved to efficiently recognize such particulate antigens. Vaccines are now being developed to mimic natural viral structures, and such nanoparticle vaccines are believed to be very effective at producing a B cell immune response because they are the right size to be carried to the lymphatic vessels, which send them directly to B cells waiting in the lymph nodes. The particles are also the right size to interact with B cells and can present a dense array of viral antigens.

However, determining the right particle size, spacing between antigens, and number of antigens per particle to optimally stimulate B cells (which bind to target antigens through their B cell receptors) has been a challenge. Bathe and Irvine set out to use these DNA scaffolds to mimic such viral and vaccine particle structures, in hopes of discovering the best particle designs for B cell activation.

"There is a lot of interest in the use of virus-like particle structures, where you take a vaccine antigen and array it on the surface of a particle, to drive optimal B-cell responses," Irvine says. "However, the rules for how to design that display are really not well-understood."

Other researchers have tried to create subunit vaccines using other kinds of synthetic particles, such as polymers, liposomes, or self-assembling proteins, but with those materials, it is not possible to control the placement of viral proteins as precisely as with DNA origami.

For this study, the researchers designed icosahedral particles similar in size and shape to a typical virus. They attached an engineered HIV antigen related to the gp120 protein to the scaffold at a variety of distances and densities. To their surprise, they found that the vaccines that produced the strongest B cell responses were not necessarily those that packed the antigens as closely as possible on the scaffold surface.

"It is often assumed that the higher the antigen density, the better, with the idea that bringing B cell receptors as close together as possible is what drives signaling. However, the experimental result, which was very clear, was that actually the closest possible spacing we could make was not the best. And, and as you widen the distance between two antigens, signaling increased," Irvine says.

The findings from this study have the potential to guide HIV vaccine development, as the HIV antigen used in these studies is currently being tested in a clinical trial in humans, using a protein nanoparticle scaffold.

Based on their data, the MIT researchers worked with Jayajit Das, a professor of immunology and microbiology at Ohio State University, to develop a model to explain why greater distances between antigens produce better results. When antigens bind to receptors on the surface of B cells, the activated receptors crosslink with each other inside the cell, enhancing their response. However, the model suggests that if the antigens are too close together, this response is diminished.

Beyond HIV

In recent months, Bathe's lab has created a variant of this vaccine with the Aaron Schmidt and Daniel Lingwood labs at the Ragon Institute, in which they swapped out the HIV antigens for a protein found on the surface of the SARS-CoV-2 virus. They are now testing whether this vaccine will produce an effective response against the coronavirus SARS-CoV-2 in isolated B cells, and in mice.

"Our platform technology allows you to easily swap out different subunit antigens and peptides from different types of viruses to test whether they may potentially be functional as vaccines," Bathe says.

Because this approach allows for antigens from different viruses to be carried on the same DNA scaffold, it could be possible to design variants that target multiple types of coronaviruses, including past and potentially future variants that may emerge, the researchers say.

Credit: 
Massachusetts Institute of Technology

Power outage: Research offers hint about heart weakness in Barth syndrome

Barth syndrome is a rare condition that occurs almost exclusively in males. Symptoms include an enlarged and weakened heart. The condition is present at birth or becomes evident early in life. Life expectancy is shortened and there is no treatment.

The laboratory of Madesh Muniswamy, PhD, in the Long School of Medicine at The University of Texas Health Science Center at San Antonio, found a clue about the processes that underlie this devastating condition.

The body runs on energy made by cell structures called mitochondria. The Muniswamy lab studied the interaction of a protein called MCU (mitochondrial calcium uniporter) with a phospholipid (fat) called cardiolipin. This fat is a component of the mitochondrial membranes.

The team found that cardiolipin's binding with MCU acts as an on switch for energy production. When the two bind together, calcium ions rush into the mitochondria to produce energy needed during physiologic processes such as fasting and feeding.

The connection to Barth syndrome? Loss of energy.

"We observe reduced abundance and activity of MCU in cells of mammals that model Barth syndrome, and we also see a partial loss of cardiolipin in the cells," Dr. Muniswamy said.

"Consistently, MCU is also decreased in the cardiac tissue of human Barth syndrome patients, raising the possibility that impaired MCU function contribute to Barth syndrome pathology," he said.

The study, coauthored with colleagues at Texas A&M University, Harvard and MIT, is in Proceedings of the National Academy of Sciences.

Credit: 
University of Texas Health Science Center at San Antonio

A new theory about political polarization

video: The simulation demonstrates the emergence of hyperpolarization, showing the link between social emotions and opinion divergence.

The three dimensions correspond to three political issues (e.g., marijuana legalization, gay marriage, or income taxation). The space spanned by these dimensions is the "opinion space." Each blue dot represents an individual. From random positions, individuals rapidly converge in the middle of the opinion space (where all three issues are irrelevant).
At the same time, they align themselves along a diagonal, corresponding to the political spectrum, i.e. from far left to center to far right. In the next step, they split into two groups, corresponding to political parties. Members of the same party now have identical opinions on all three issues, and opposing opinions to the other party. This conflict creates positive feelings towards one's party colleagues and animosity towards the other party, driving the individuals ever further apart, until they are in opposite corners of the opinion space.

Image: 
© CSH Vienna

[Vienna, 29 June 2020] The ever-deepening rift between the political left and right has long puzzled theorists in political science and opinion dynamics. An international team led by researchers of the Complexity Science Hub Vienna (CSH) now offers an explanation: their newly developed "Weighted Balance Theory" (WBT) model sees social emotions as a driving force of political opinion dynamics. The theory is published in the Journal of Artificial Societies and Social Simulation (JASSS).

A certain degree of polarization of political opinions is considered normal--and even beneficial--to the health of democracy. In the last few decades, however, conservative and liberal views have been drifting farther apart than ever, and at the same time have become more consistent. When too much polarization hampers a nation's ability to combat threats such as the coronavirus pandemic, it can even be deadly.

How do extreme positions evolve?

"We feel high balance when dealing with someone we like and with whom we agree in all political issues," explains first author Simon Schweighofer, who was working at the CSH when the paper was written. "We also feel high balance towards those we hate and with whom we disagree," adds the expert in quantitative social science. The human tendency to maintain emotional balance was first described 1946 by Fritz Heider's "cognitive balance theory."

But what happens when opinions and interpersonal attitudes are in conflict with each other, i.e., when individuals disagree with others they like, or agree with others they dislike? "People will try to overcome this imbalance by adapting their opinions, in order to increase balance with their emotions," says Schweighofer.

A vicious circle of increasingly intense emotions and opinions gradually replaces moderate positions until most issues are seen in the same--often extremely polarized--way as one's political allies, the scientists found.

"It ultimately ends in total polarization," illustrates co-author David Garcia (CSH and MedUni Vienna). Not only do people categorically favor or oppose single issues like abortion, same-sex marriage and nuclear energy. "If they are pro-choice, they are at the same time highly likely to be for gay marriage, against the use of nuclear energy, for the legalization of marijuana, and so on," says Garcia. The possible variety of combinations of different opinions is reduced to the traditional left-right split.

A mathematical model of hyperpolarization

The researchers developed a so-called agent-based model to simulate this process. Their mathematical model was able to reproduce the same dynamics that can be observed in real-life political processes (see videos).
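
To give a feel for how an agent-based opinion model of this general kind is built, here is a deliberately minimal toy sketch (an illustration of the idea, not the authors' Weighted Balance Theory model; the update rule, parameters and the "attitude" function are assumptions): agents hold opinion vectors on three issues, judge a partner by opinion agreement, and then shift their opinions to stay emotionally balanced with that judgment.

```python
import numpy as np

rng = np.random.default_rng(1)

N_AGENTS, N_ISSUES, STEPS = 50, 3, 20_000
RATE = 0.05                      # how strongly one interaction moves an opinion

# Opinions live in [-1, 1] on each issue (the "opinion space" of the video).
opinions = rng.uniform(-0.5, 0.5, size=(N_AGENTS, N_ISSUES))

def attitude(a, b):
    """Toy interpersonal evaluation: normalized opinion agreement in [-1, 1]."""
    return float(np.dot(a, b)) / N_ISSUES

for _ in range(STEPS):
    i, j = rng.choice(N_AGENTS, size=2, replace=False)
    liking = attitude(opinions[i], opinions[j])
    # Balance-seeking update: move toward a liked partner's views and away
    # from a disliked partner's views, proportionally to the (dis)liking.
    opinions[i] = np.clip(opinions[i] + RATE * liking * opinions[j], -1.0, 1.0)

# Hyperpolarization would show up as opinions pushed toward opposite corners,
# i.e. large average |opinion| together with strong issue-to-issue correlation.
print("mean |opinion|:", np.abs(opinions).mean().round(2))
print("issue correlations:\n", np.corrcoef(opinions.T).round(2))
```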

"We call the combination of extremeness and correlation between policy issues hyperpolarization," says Simon Schweighofer. "Hyperpolarization has so far been overlooked in social theories on opinion formation. Our Weighted Balance Model--which is a truly interdisciplinary effort that integrates research strains from psychology, political science and opinion dynamics into an overarching theoretical framework--offers a new perspective on the emergence of political conflict," he concludes.

Credit: 
Complexity Science Hub

Reducing the weight of a fusion reactor component

image: (IMAGE 1) The superconducting coil system consists of two pairs of helical coils and two sets of circular vertical magnetic field coils. To prevent the coils from moving or deforming under the strong electromagnetic force acting on them, they are firmly held by a 20-cm-thick support structure made of high-strength stainless steel. The superconducting coils and the support structure are cooled to cryogenic temperatures together.

Image: 
National Institute for Fusion Science

A group led by Associate Professor Hitoshi Tamura of the National Institutes of Natural Sciences (NINS) National Institute for Fusion Science (NIFS) has, for the first time, applied a topology optimization technique to the conceptual design of a helical fusion reactor intended to demonstrate power generation. The group reduced the weight of the support structure surrounding the helically twisted coils by about 2,000 tons while maintaining the structure's strength.

Superconducting coils are essential to a magnetic fusion power reactor, in which the plasma must be confined by a strong magnetic field. Each superconducting coil is made of a superconducting conductor wound several hundred times, and it generates a strong magnetic field by carrying a large current of about 100 kiloamperes. An electromagnetic force arises when a magnetic field acts on a current-carrying coil. This force is so large that the superconducting coil cannot withstand it on its own. To keep the coil from moving or deforming excessively, it must be firmly surrounded and supported by a structure made of a strong material. This structure is called the coil support structure (IMAGE 1).

So far, the weight of the coil support structure of the helical fusion reactor has been estimated to be 20 times that of the Large Helical Device (LHD) and 1.6 times that of the International Thermonuclear Experimental Reactor (ITER). In addition, since the superconducting coils operate at cryogenic temperatures (below minus 260 degrees Celsius), the heavy, solid coil support structure must be cooled to the same temperature as the coils to keep them in the superconducting state. Reducing the amount of material is therefore extremely important from the viewpoints of cost and power consumption, and it is strongly desirable to make the coil support structure as light as possible while it continues to support the coils. To address this, the research group applied the "topology optimization method" to the design of the coil support structure. Topology optimization is an analytical method that reduces the volume of a structure by removing the parts that do not affect its strength; it is equivalent to searching for the optimum shape among many candidates, including changes of topology. The method can produce shapes that would be hard to imagine with conventional design, and it has developed rapidly in recent years because it is extremely effective in reducing the weight and cost of components such as automobile parts. However, it had not previously been applied to the overall design of a component in a fusion reactor.

The research group applied the topology optimization method for the first time to the overall design of a structure in a huge and complicated fusion reactor in order to reduce its weight. The stress acting in the structure determines whether it holds: if the stress exceeds the acceptable level of the component material, the structure starts to break. The optimization therefore has to reduce weight while keeping the stress below acceptable levels. The group first analyzed in detail the stress and deformation produced in the coil support structure by the electromagnetic force acting on the coils. Topology optimization was then applied to the model: the model is divided into many small regions, the influence on the overall strength of removing each region is calculated, and finally a set of regions that can be removed without harmful effect is determined. In this way, an optimized shape was found that preserves the overall strength at reduced weight. As a result, the weight of the coil support structure was successfully reduced by about 25% from 7,800 tons.
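
The remove-what-does-not-matter loop described above can be sketched in a few lines. The sketch below is purely schematic: the "influence" scores and the stress proxy are random stand-ins for the detailed finite-element analysis the NIFS group performed, and the limits are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

N_REGIONS = 1000                 # small regions the support-structure model is divided into
TOTAL_MASS_TONS = 7800.0         # starting mass of the coil support structure
STRESS_LIMIT = 1.0               # normalized acceptable stress level (illustrative)

# Toy stand-in: how much each region contributes to overall strength. In the
# real workflow this comes from finite-element analysis of the electromagnetic
# loads, not from a random generator.
influence = rng.random(N_REGIONS)

def peak_stress(active):
    """Crude proxy: stress rises as load-bearing (high-influence) material is removed."""
    return 0.75 * influence.sum() / influence[active].sum()

active = np.ones(N_REGIONS, dtype=bool)
for idx in np.argsort(influence):            # try removing the least influential regions first
    trial = active.copy()
    trial[idx] = False
    if peak_stress(trial) <= STRESS_LIMIT:    # accept the removal only if stress stays acceptable
        active = trial

final_mass = TOTAL_MASS_TONS * active.sum() / N_REGIONS
print(f"kept {active.sum()} of {N_REGIONS} regions, roughly {final_mass:.0f} tons (toy numbers)")
```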

In the future, fusion reactor design research using the topology optimization method is expected to make further progress, bringing the demonstration of a fusion reactor considerably closer.

Credit: 
National Institutes of Natural Sciences

Tennis: Losers move their heads more often than winners

image: Those sudden tantrums displayed on court by former US tennis player John McEnroe are legendary - but so too are those of Nick Kyrgios, Alexander Zverev, Serena Williams and Co. And their tennis rackets certainly bear witness to that! Emotions and nonverbal movement behaviour are closely linked processes. Until now, however, little has been known about spontaneous nonverbal expressions in response to positive and negative emotions, i.e. when winning or losing in a sports competition.

Image: 
German Sport University

A new study by the Section Neurology, Psychosomatic Medicine and Psychiatry of the Institute of Movement Therapy and Movement-oriented Prevention and Rehabilitation at the German Sport University Cologne, has now found that, contrary to previous assumptions, losers express themselves nonverbally more strongly than winners. "Losers make more spontaneous nonverbal head movements after losing points in tennis than after winning points. They carry out nonverbal head-shaking movements upwards as well as side-to-side," explains scientist Dr. Ingo Helmich.

Seventeen professional male tennis players (average age: 28.1 years) were analysed on five official match days of the first 2018 season of the German Tennis Bundesliga. The players' entire spontaneous nonverbal head movement behaviour between points was videotaped during competition and analysed, in relation to points won or lost, by two trained and certified evaluators using the NEUROpsychological GESture (NEUROGES) system, a standardised analysis system for nonverbal behaviour.

"For the first time, these results present a clear picture of nonverbal head movements of winners and losers in sport", says Dr. Helmich. The analysis of nonverbal movement behaviour relating to emotions is relevant to better understand and possibly improve an athlete's performance during a competition.

Credit: 
German Sport University

Researchers control elusive spin fluctuations in 2D magnets

ITHACA, N.Y. - Like Bigfoot and the Loch Ness monster, critical spin fluctuations in a magnetic system haven't been captured on film. Unlike the fabled creatures, these fluctuations - which are highly correlated electron spin patterns - do actually exist, but they are too random and turbulent to be seen in real time.

A Cornell team developed a new imaging technique that is fast and sensitive enough to observe these elusive critical fluctuations in two-dimensional magnets. This real-time imaging allows researchers to control the fluctuations and switch magnetism via a "passive" mechanism that could eventually lead to more energy-efficient magnetic storage devices.

Radical Collaboration

The team's paper, "Imaging and Control of Critical Fluctuations in Two-Dimensional Magnets," published June 8 in Nature Materials.

The paper's co-senior authors are Kin Fai Mak, associate professor of physics in the College of Arts and Sciences, and Jie Shan, professor of applied and engineering physics in the College of Engineering. Both researchers are members of the Kavli Institute at Cornell for Nanoscale Science, and they came to Cornell through the provost's Nanoscale Science and Microsystems Engineering (NEXT Nano) initiative. Their shared lab specializes in the physics of atomically thin quantum materials.

Magnetization fluctuations are considered "critical" when they occur near the thermodynamic critical point, which is the moment when a form of matter transitions into a new phase, giving rise to all sorts of unusual phenomena. A typical example is iron, which loses its magnetic properties when heated to extreme temperatures.

In this critical region, or regime, the fluctuations stop behaving randomly and instead become highly correlated.

"If you imagine all air molecules correlated, they're moving together on a very large length scale as wind," said Chenhao Jin, a postdoctoral fellow with the Kavli Institute and the paper's lead author. "That's what happens when the fluctuation becomes correlated. It can lead to dramatic effects in a system and at any scale because the correlation, in principal, can go to infinity. The fluctuation we are looking at here is the spin, or magnetic moment, fluctuations."

These critical magnetization fluctuations are difficult to see because they are constantly changing and occur in a very narrow temperature range.

"Physicists have studied the magnetic phase transition for many decades, and we know this phenomena is more easily observed in a two-dimensional system," Mak said. "What is more two-dimensional than a magnet that has only a single layer of atoms?"

Observing a signal from a single atomic layer still presents plenty of challenges. The researchers used a single-layer ferromagnetic insulator, chromium bromide, which as a two-dimensional system features a wider critical regime and stronger fluctuations. In order to see these fluctuations in real time, the researchers needed a method that was equally fast, with a high spatial resolution and wide field-imaging capability.

The team was able to meet those criteria by using light with a very pure polarization state to probe the monolayer and record a clean signal of the magnetic moment - which is the strength and orientation of the magnet - as it makes its spontaneous fluctuations.

The ability to capture this phenomenon in real time means the researchers can control the critical fluctuations in the magnet simply by applying a small voltage and letting the fluctuations toggle back and forth between states. Once the targeted state or value has been achieved, the voltage can be turned off. No magnetic field is needed to control the fluctuations because they essentially drive themselves. This could potentially lead to the creation of magnetic storage devices that consume much less energy.
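
As a conceptual illustration of this measure-and-wait switching idea, here is a simulated toy (not the Cornell experiment: the fluctuation statistics, the effect of the bias and the readout are all invented for illustration):

```python
import random

random.seed(3)

def run_passive_switch(target=+1, bias=0.1, max_steps=10_000):
    """Let simulated critical fluctuations toggle a two-state moment; a small
    bias tilts the odds toward `target`, and we stop (i.e., would remove the
    voltage) as soon as the target state is observed."""
    state = -target                          # start in the "wrong" state
    for step in range(max_steps):
        # Near the critical point the moment flips back and forth on its own;
        # the applied bias makes flips toward the target slightly more likely.
        p_flip = 0.5 + bias * (1 if state != target else -1)
        if random.random() < p_flip:
            state = -state
        if state == target:
            return step                      # target reached: bias switched off
    return None

print("target state reached after", run_passive_switch(), "fluctuation steps")
```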

"It's a fundamentally different concept from active magnetic state switching, because it's totally passive," Mak said. "It's a switching based on the information gained from measurements, rather than actively driving the system. So it's a new concept that could potentially save lots of energy."

Credit: 
Cornell University

Nanotechnology applied to medicine: The first liquid retina prosthesis

video: The new artificial liquid retina is biomimetic and consists of an aqueous component in which photoactive polymeric nanoparticles (whose size is 350 nanometres, about 1/100 of the diameter of a hair) are suspended, replacing the damaged photoreceptors.

Image: 
IIT-Istituto Italiano di Tecnologia

Genoa (Italy), 29 June 2020 - Research at IIT-Istituto Italiano di Tecnologia (Italian Institute of Technology) has led to the revolutionary development of an artificial liquid retinal prosthesis to counteract the effects of diseases such as retinitis pigmentosa and age-related macular degeneration, which cause the progressive degeneration of the photoreceptors of the retina, resulting in blindness. The study has been published in Nature Nanotechnology: http://www.nature.com/articles/s41565-020-0696-3

The multidisciplinary team is composed of researchers from IIT's Center for Synaptic Neuroscience and Technology in Genoa, coordinated by Fabio Benfenati, and a team from IIT's Center for Nano Science and Technology in Milan, coordinated by Guglielmo Lanzani; it also involves the IRCCS Ospedale Sacrocuore Don Calabria in Negrar (Verona), with a team led by Grazia Pertile, the IRCCS Ospedale Policlinico San Martino in Genoa and the CNR in Bologna. The research has been supported by Fondazione 13 Marzo Onlus, Fondazione Ra.Mo., Rare Partners srl and Fondazione Cariplo.

The study represents the state of the art in retinal prosthetics and is an evolution of the planar artificial retinal model developed by the same team in 2017 and based on organic semiconductor materials (Nature Materials 2017, 16: 681-689).

The "second generation" artificial retina is biomimetic, offers high spatial resolution and consists of an aqueous component in which photoactive polymeric nanoparticles (whose size is of 350 nanometres, thus about 1/100 of the diameter of a hair) are suspended, going to replace the damaged photoreceptors.

The experimental results show that the natural light stimulation of nanoparticles, in fact, causes the activation of retinal neurons spared from degeneration, thus mimicking the functioning of photoreceptors in healthy subjects.

Compared to other existing approaches, the liquid nature of the new prosthesis ensures fast and less traumatic surgery, consisting of microinjections of nanoparticles directly under the retina, where they remain trapped and replace the degenerated photoreceptors; this method also ensures increased effectiveness.

The data collected also show that the innovative experimental technique represents a valid alternative to the methods used to date to restore the photoreceptive capacity of retinal neurons while preserving their spatial resolution, laying a solid foundation for future clinical trials in humans. Moreover, the development of these photosensitive nanomaterials opens the way to new future applications in neuroscience and medicine.

"Our experimental results highlight the potential relevance of nanomaterials in the development of second-generation retinal prostheses to treat degenerative retinal blindness, and represents a major step forward" Fabio Benfenati commented. "The creation of a liquid artificial retinal implant has great potential to ensure a wide-field vision and high-resolution vision. Enclosing the photoactive polymers in particles that are smaller than the photoreceptors, increases the active surface of interaction with the retinal neurons, allows to easily cover the entire retinal surface and to scale the photoactivation at the level of a single photoreceptor."

"In this research we have applied nanotechnology to medicine" concludes Guglielmo Lanzani. "In particular in our labs we have realized polymer nanoparticles that behave like tiny photovoltaic cells, based on carbon and hydrogen, fundamental components of the biochemistry of life. Once injected into the retina, these nanoparticles form small aggregates the size of which is comparable to that of neurons, that effectively behave like photoreceptors."

"The surgical procedure for the subretinal injection of photoactive nanoparticles is minimally invasive and potentially replicable over time, unlike planar retinal prostheses" adds Grazia Pertile, Director at Operating Unit of Ophthalmology at IRCCS Ospedale Sacro Cuore Don Calabria. "At the same time maintaining the advantages of polymeric prosthesis, which is naturally sensitive to the light entering the eye and does not require glasses, cameras or external energy sources."

The research study is based on preclinical models and further experimentations will be fundamental to make the technique a clinical treatment for diseases such as retinitis pigmentosa and age-related macular degeneration.

Credit: 
Istituto Italiano di Tecnologia - IIT

Rice lab's bright idea is pure gold

image: Rice University physicists discover that plasmonic metals can be prompted to produce "hot carriers" that in turn emit unexpectedly bright light in nanoscale gaps between electrodes. The phenomenon could be useful for photocatalysis, quantum optics and optoelectronics.

Image: 
Illustration by Longji Cui and Yunxuan Zhu/Rice University

HOUSTON - (June 29, 2020) - Seeing light emerge from a nanoscale experiment didn't come as a big surprise to Rice University physicists. But it got their attention when that light was 10,000 times brighter than they expected.

Condensed matter physicist Doug Natelson and his colleagues at Rice and the University of Colorado Boulder discovered this massive emission from a nanoscale gap between two electrodes made of plasmonic materials, particularly gold.

The lab had found a few years ago that excited electrons leaping the gap, a phenomenon known as tunneling, created a larger voltage than if there were no gap in the metallic platforms.

In the new study, the researchers found that when hot electrons were created by electrons driven to tunnel between gold electrodes, their recombination with holes emitted bright light, and the greater the input voltage, the brighter the light.

The study led by Natelson and lead authors Longji Cui and Yunxuan Zhu appears in the American Chemical Society journal Nano Letters and should be of interest to those who research optoelectronics, quantum optics and photocatalysis.

The effect depends upon the metal's plasmons, ripples of energy that flow across its surface. "People have explored the idea that the plasmons are important for the electrically driven light emission spectrum, but not generating these hot carriers in the first place," Natelson said. "Now we know plasmons are playing multiple roles in this process."

The researchers formed several metals into microscopic, bow tie-shaped electrodes with nanogaps, a test bed developed by the lab that lets them perform simultaneous electron transport and optical spectroscopy. Gold was the best performer among electrodes they tried, including compounds with plasmon-damping chromium and palladium chosen to help define the plasmons' part in the phenomenon.

"If the plasmons' only role is to help couple the light out, then the difference between working with gold and something like palladium might be a factor of 20 or 50," Natelson said. "The fact that it's a factor of 10,000 tells you that something different is going on."

The reason appears to be that plasmons decay "almost immediately" into hot electrons and holes, he said. "That continuous churning, using current to kick the material into generating more electrons and holes, gives us this steady-state hot distribution of carriers, and we've been able to maintain it for minutes at a time," Natelson said.

Through the spectrum of the emitted light, the researchers' measurements revealed those hot carriers are really hot, reaching temperatures above 3,000 degrees Fahrenheit while the electrodes stay relatively cool, even with a modest input of about 1 volt.
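
For context, one common way such effective temperatures are read off an emission spectrum (not necessarily the exact fitting procedure used in this paper) is to fit the high-energy tail to a Boltzmann-like factor; 3,000 degrees Fahrenheit corresponds to roughly 1,900 kelvin:

```latex
I(\hbar\omega)\;\propto\;\exp\!\left(-\frac{\hbar\omega}{k_{\mathrm B}T_{\mathrm{eff}}}\right)
\quad\Rightarrow\quad
T_{\mathrm{eff}}\;=\;-\,\frac{\hbar}{k_{\mathrm B}}\left(\frac{d\,\ln I}{d\omega}\right)^{-1},
\qquad 3000\,^{\circ}\mathrm{F}\;\approx\;1920\ \mathrm{K}.
```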

Natelson said the discovery could be useful in the advance of optoelectronics and quantum optics, the study of light-matter interactions at vanishingly small scales. "And on the chemistry side, this idea that you can have very hot carriers is exciting," he said. "It implies that you may get certain chemical processes to run faster than usual.

"There are a lot of researchers interested in plasmonic photocatalysis, where you shine light in, excite plasmons and the hot carriers from those plasmons do interesting chemistry," he said. "This complements that. In principle, you could electrically excite plasmons and the hot carriers they produce can do interesting chemistry."

Credit: 
Rice University

Hackensack Meridian CDI scientists uncover signposts in DNA for cancer, disease risk

June 29, 2020 - Nutley, NJ - By sequencing entire genomes for DNA modifications, and analyzing both cancer tissues and healthy ones, Hackensack Meridian Health researchers and doctors have found what could be a key to the risk of cancer and other diseases: specific locations in the DNA where these modifications (methylation) are imbalanced, according to a new publication.

The authors, from the Center for Discovery and Innovation (CDI), Hackensack University Medical Center and its John Theurer Cancer Center (JTCC), and the National Cancer Institute-recognized Georgetown Lombardi Comprehensive Cancer Center Consortium, published their findings in the major journal Genome Biology on June 29.

The most strongly disease-relevant genetic variants can be hard to localize in widespread scanning of the genome - but by zooming in on key genetic locations associated with these DNA methylation imbalances in multiple normal and cancer tissues, the scientists report they have uncovered promising new leads beneath the broader statistical signals.

"Our dense map of allele-specific methylation (i.e., DNA methylation imbalances dictated by genetic variation) will help other scientists prioritize and focus their work on the most relevant genetic variants associated with diseases," said Catherine Do, M.D., Ph.D., assistant member of the CDI, and the first author. "Because we also dug into the mechanisms of this phenomenon to understand how it can result in disease susceptibility, our study will help identify new interesting biological pathways for personalized medicine and drug development".

"Cancer cells are gangsters, but in our approach we make them work for us in a useful way", said Benjamin Tycko, M.D., Ph.D., the CDI lab director who oversaw the study. "These 'footprints' of allele-specific methylation are more abundant in cancers than in normal tissues, but since Catherine's work has shown they are produced by the same biological pathways, we can use our dense maps to understand the beginnings of both cancers and non-cancerous diseases - such as autoimmune, neuropsychiatric, and cardiometabolic disorders."

The current study generated one of the largest high-quality datasets used in this kind of approach. Among the DNA samples studied were various tissue types from 89 healthy controls, plus 16 cancer samples from three types of tumors treated by oncologists and surgeons at the JTCC: B-cell lymphomas, multiple myeloma, and glioblastoma multiforme (a common and difficult-to-treat brain tumor).

The scientists identified a total of 15,112 allele-specific methylation sites including 1,838 sites located near statistical signals of disease susceptibility from genome-wide association studies (GWAS). These data have been made publicly available so that other scientists can test new hypotheses in "post-GWAS" studies.
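
To give a flavor of how allele-specific methylation can be flagged in sequencing data, here is a generic toy test on made-up read counts (an illustration of the general idea, not the CDI team's actual pipeline): reads covering a heterozygous variant are split by allele, and the methylated/unmethylated counts at a nearby CpG are compared between the two alleles.

```python
from scipy.stats import fisher_exact

# Hypothetical read counts at one CpG, split by which allele each read carries.
#                  methylated  unmethylated
allele_a_counts = [38,          4]
allele_b_counts = [ 6,         41]

odds_ratio, p_value = fisher_exact([allele_a_counts, allele_b_counts])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")

if p_value < 1e-3:
    print("methylation is imbalanced between alleles -> candidate ASM site")
```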

Also among its novel findings, the paper reports evidence that non-coding mutations (that do not change protein sequences) "might play roles in cancer through long range regulatory effects on gene expression." One specific example cited in the paper is a mutation that causes allele-specific methylation in the TEAD1 gene, which evidence has shown becomes over-expressed in aggressive and treatment-refractory forms of multiple myeloma.

Another discovery is that some disease-relevant genetic variants can reside in "chromatin deserts," places in the DNA that show few or no specific biological signals in the available tissue types - but which are revealed by the footprints of allele-specific methylation and may have been active at earlier times in the cell's history or at different developmental stages.

"This could be a key breakthrough in determining how cancer starts - and give us a better chance to treat it," said David Siegel, M.D., founding director of the Institute for Multiple Myeloma and Lymphoma at the CDI, and also the chief of the Multiple Myeloma Division at John Theurer Cancer Center at Hackensack University Medical Center, and one of the authors. "Finding non-coding mutations that cause epigenetic activation of genes such as TEAD1 in multiple myeloma can potentially help us narrow in on the most promising targets for developing new treatments."

"Advanced epigenetic research like this will drive clinical decisions in the near future," said Andre Goy, M.D., M.S., physician-in-chief for oncology at Hackensack Meridian Health and director of John Theurer Cancer Center, also an author of the paper. "Studies like this, looking at the most detailed changes of DNA methylation and gene expression, could help us solve the riddle of how cancer starts - and perhaps how to conquer it."

"I applaud this important contribution to our understanding of cancer and its underlying biology that can potentially be applied to other disease," said Louis M. Weiner, M.D., director of Georgetown Lombardi Comprehensive Cancer Center and the MedStar Georgetown Cancer Institute. "This research finding is yet more evidence of what can be achieved by bringing together investigators from different institutions and specialties."

Credit: 
Hackensack Meridian Health

NASA-NOAA satellite animation shows the end of Tropical Cyclone Boris

image: NASA-NOAA's Suomi NPP satellite provided this image of former Tropical Cyclone Boris as it weakened to a remnant low-pressure area on June 29, 2020 in the Central Pacific Ocean. The circulation center can be seen surrounded by wispy clouds with the exception of a small area of thunderstorms on the western side of the circulation.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

NASA-NOAA's Suomi NPP satellite imagery provided a look at the end of the second named tropical cyclone of the Eastern Pacific Ocean's 2020 Hurricane Season.

Tropical Cyclone Boris formed in the Eastern Pacific Ocean on Wednesday, June 24 and by early on Sunday, June 29, the storm had become a remnant low-pressure area.

At NASA's Goddard Space Flight Center in Greenbelt, Md., an animation of satellite imagery was created from NASA Worldview, Earth Observing System Data and Information System (EOSDIS). The animation used imagery from the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard NASA-NOAA's Suomi NPP satellite and ran from June 26 to June 29, 2020.  The animation showed the progression and weakening of Tropical Cyclone Boris as it crossed from the Eastern Pacific Ocean into the Central Pacific Ocean.

The final advisory issued by NOAA's National Hurricane Center came on Saturday, June 27 at 11 p.m. EDT. At that time, the center of Post-Tropical Cyclone Boris was located near latitude 12.1 degrees north and longitude 142.0 degrees west, about 1,015 miles (1,635 km) east-southeast of Hilo, Hawaii. The post-tropical cyclone was moving toward the west-southwest near 8 mph (13 kph). Maximum sustained winds were near 30 mph (45 kph) and weakening.

NASA-NOAA's Suomi NPP satellite provided an image of former Tropical Cyclone Boris as it weakened to a remnant low-pressure area on June 29. In the image, the circulation center can be seen surrounded by wispy clouds with the exception of a small area of thunderstorms on the western side of the circulation.

The remnant low is expected to dissipate by June 30.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

The state of coral reefs in the Solomon Islands

image: Scientists on the Global Reef Expedition research mission to the Solomon Islands conducted over 1,000 standardized surveys of coral reefs and reef fish and created maps for over 3,000 square kilometers of coastal marine habitats in the Solomon Islands.

Image: 
© Khaled bin Sultan Living Oceans Foundation/Ken Marks

Scientists at the Khaled bin Sultan Living Oceans Foundation (KSLOF) have published a report on the status of coral reefs in the Solomon Islands. Released today, the Global Reef Expedition: Solomon Islands Final Report summarizes the foundation's findings from a monumental research mission to study corals and reef fish in the Solomon Islands and provides recommendations on how to preserve these precious ecosystems into the future.

Over the course of five years, KSLOF's Global Reef Expedition circumnavigated the globe collecting valuable baseline data on coral reefs to address the coral reef crisis. In 2014, the Global Reef Expedition arrived in the Solomon Islands, where an international team of scientists, local experts, and government officials spent more than a month at sea surveying the reefs and creating detailed habitat and bathymetric maps of the seafloor. Together, they conducted over 1,000 standardized surveys of coral reefs and reef fish in the Western, Isabel, and Temotu Provinces, and created maps for over 3,000 km2 of coastal marine habitats in the Solomon Islands.

What they found were impressive reefs covered with abundant and diverse coral communities, but few fish. Most of the big fish were gone, and many of the nearshore reefs appeared to be overfished. There was also evidence of damage to reefs from a prior tsunami, and scars on the reef from predatory crown-of-thorns starfish, which had eaten away large patches of living coral.

"Our most alarming finding was the overall lack of fish, particularly on reefs near coastal communities," said Renée Carlton, a Marine Ecologist at the Living Oceans Foundation and lead author on the report. "Overfishing not only impacts on the amount of fish on the reef, but it also impacts the coral community as well as people who rely on the fish for food and income. By prioritizing local management and taking steps now to protect the reefs and reduce fishing pressure, the long-term sustainability of the reefs in the Solomon Islands can be improved to be used well into the future."

Although several years have passed since the expedition, the data from this research mission will be critical for monitoring changes to the reefs over time. Data from the research mission can also inform management plans to conserve critical marine habitats in the Solomon Islands and help marine managers identify which areas may require additional protection.

Many of the sites visited on the expedition were remote and under-studied, so not much was known about the state of these reefs before this research mission. Because Prince Khaled bin Sultan donated the use of his yacht--the M/Y Golden Shadow--for the Global Reef Expedition, the research team was able to access remote and otherwise inaccessible research sites far from port. These remote reefs in the Solomon Islands were generally healthier and in better shape than those near coastal communities and had some of the highest coral diversity observed anywhere on the Global Reef Expedition.

"The coral communities on reefs surrounding the Solomon Islands were simply stunning. It was a privilege for the Living Oceans Foundation to visit them and collect a broad portfolio of data, in the field and from satellite, which can be used to set a baseline condition for the country's reefs against which change can be tracked," said Dr. Sam Purkis, KSLOF's Chief Scientist as well as Professor and Chair of the Department of Marine Geosciences at the University of Miami's Rosenstiel School of Marine and Atmospheric Science.

Dr. Purkis and his team used a combination of satellite data, depth soundings, and field observations to make detailed maps of the reef down to a 2-square-meter scale. These are some of the highest-resolution bathymetric and habitat maps ever created of the Solomon Islands. They identify the location and extent of reefs surveyed by KSLOF in the Solomon Islands, as well as other important coastal marine habitats such as mangrove forests and seagrass beds. These maps can be explored on the foundation's website at LOF.org, and can also be used by marine managers, scientists, and conservation organizations to track changes to the reefs over time.

"This report provides the people of the Solomon Islands with relevant information and recommendations they can use to effectively manage their reefs and coastal marine resources," said Alexandra Dempsey, the Director of Science Management at KSLOF and one of the report's authors. "Our goal at the Khaled bin Sultan Living Oceans Foundation is to provide people with data and scientific knowledge to protect, conserve, and restore their marine ecosystems. We hope this research will encourage the Solomon Islands to consider robust marine conservation and management efforts to protect their coral reefs and nearshore fisheries before it is too late."

Credit: 
Khaled bin Sultan Living Oceans Foundation

SwRI scientists demonstrate speed, precision of in situ planetary dating device

image: SwRI is designing the CODEX instrument to use radioisotope dating techniques in situ to determine the age of rocks on other planets or moons. With five lasers and a mass spectrometer, the 20-inch-cube instrument is designed to vaporize tiny bits of rock and measure the elements present to pin down the rock's age with previously unmet precision.

Image: 
Tom Whitaker, SwRI

SAN ANTONIO -- June 29, 2020 -- Southwest Research Institute scientists have increased the speed and accuracy of a laboratory-scale instrument for determining the age of planetary specimens onsite. The team is progressively miniaturizing the Chemistry, Organics and Dating Experiment (CODEX) instrument to reach a size suitable for spaceflight and lander missions.

"In situ aging is an important scientific goal identified by the National Research Council's Decadal Survey for Mars and the Moon as well as the Lunar and Mars Exploration Program Analysis Groups, entities responsible for providing the science input needed to plan and prioritize exploration activities," said SwRI Staff Scientist Dr. F. Scott Anderson, who is leading CODEX development. "Doing this onsite rather than trying to return samples back to Earth for evaluation can resolve major dilemmas in planetary science, offers tremendous cost savings and enhances the opportunities for eventual sample return."

CODEX will be a little larger than a microwave and include seven lasers and a mass spectrometer. In situ measurements will address fundamental questions of solar system history, such as when Mars was potentially habitable. CODEX has a precision of ±20-80 million years, significantly more accurate than dating methods currently in use on Mars, which have a precision of ±350 million years.

"CODEX uses an ablation laser to vaporize a series of tiny bits off of rock samples, such as those on the surface of the Moon or Mars," said Anderson, who is the lead author of a CODEX paper published in 2020. "We recognize some elements directly from that vapor plume, so we know what a rock is made of. Then the other CODEX lasers selectively pick out and quantify the abundance of trace amounts of radioactive rubidium (Rb) and strontium (Sr). An isotope of Rb decays into Sr over known amounts of time, so by measuring both Rb and Sr, we can determine how much time has passed since the rock formed."

While radioactivity is a standard technique for dating samples on Earth, few other places in the solar system have been dated this way. Instead, scientists have largely constrained the chronology of the inner solar system by counting impact craters on planetary surfaces.

"The idea behind crater dating is simple; the more craters, the older the surface," says Dr. Jonathan Levine, a physicist at Colgate University, who is part of the SwRI-led team. "It's a little like saying that a person gets wetter the longer they have been standing out in the rain. It's undoubtedly true. But as with the falling rain, we don't really know the rate at which meteorites have fallen from the sky. That's why radioisotope dating is so important. Radioactive decay is a clock that ticks at a known rate. These techniques accurately determine the ages of rocks and minerals, allowing scientists to date events such as crystallization, metamorphism and impacts."

The latest iteration of CODEX is five times more sensitive than its previous incarnation. This gain was largely achieved by adjusting the sample's distance from the instrument to improve data quality. The instrument also includes an ultrafast pulsed laser and improved signal-to-noise ratios to better constrain the timing of events in solar system history.

"We are miniaturizing the CODEX components for field use on a lander mission to the Moon or Mars," Anderson said. "Developing compact lasers with pulse energies comparable with what we currently require is a considerable challenge, though five out of the seven have been successfully miniaturized. These lasers have a repetition rate of 10 kHz, which will allow the instrument to acquire data 500 times faster than the current engineering design."

The CODEX mass spectrometer, power supplies and timing electronics are already small enough for spaceflight. Instrument components are being enhanced to improve ruggedness, thermal stability, radiation resistance and power efficiency to endure launch and extended autonomous operations in alien environments.

Targeting several future missions, SwRI is developing two versions of the instrument, CODEX, which is designed for Mars and can measure organics, and CDEX, which is designed for the Moon, and does not need to measure organics. NASA's Planetary Instrument Concepts for the Advancement of Solar System Observations (PICASSO) and the Maturation of Instruments for Solar System Exploration (MatISSE) programs are funding the instrument development, with previous support for CODEX/CDEX from the Planetary Instrument Definition and Development Program (PIDDP).

Credit: 
Southwest Research Institute