Tech

Researchers demonstrate a platform for future optical transistors

image: Photons do not interact with each other well, which creates a big problem for microelectronics engineers. A group of researchers from ITMO University, together with colleagues, have come up with a new solution to this problem by creating a planar system where photons couple to other particles, which enables them to interact with each other.

Image: 
Department of Physics, ITMO University

Leading research groups in the field of nanophotonics are working toward developing optical transistors - key components for future optical computers. These devices will process information with photons instead of electrons, thus reducing the heat and increasing the operation speed. However, photons do not interact with each other well, which creates a big problem for microelectronics engineers. A group of researchers from ITMO University, together with colleagues, have come up with a new solution to this problem by creating a planar system where photons couple to other particles, which enables them to interact with each other. The principle demonstrated in their experiment can provide a platform for developing future optical transistors. The results of their work are published in Light: Science & Applications.

It is widely known that transistors, key elements of the modern digital world, function thanks to the controlled motion of electrons. This approach has been used for decades, but it has several drawbacks. First, electronic devices tend to heat up when they perform a task, which means that part of the energy is wasted as heat and not used for actual work. To fight this heating, we need to equip our devices with coolers, thus wasting even more energy. Second, electronic devices have a limited processing speed. Some of these issues can be solved by using photons, light particles, instead of electrons. Devices that use photons for information encoding would produce less heat, require less energy, and work faster.

That is exactly why scientists all over the world conduct research in the field of optical computers. However, the main problem is that photons, unlike electrons, do not interact with each other. Researchers around the world have suggested different methods to "train" photons to interact. One such method is to couple photons with other particles. A group of researchers from ITMO's Department of Physics and Engineering, together with colleagues, has demonstrated a new, efficient implementation in which photons couple to excitons in single-layer semiconductors. Excitons form in semiconductors when electrons are excited, leaving behind vacancies in the valence band (electron holes, as physicists call them). The electron and its hole attract each other, forming a new particle - an exciton - which in turn can interact with other excitons.

"If we strongly couple excitons to light particles, we will get polaritons," explains Vasily Kravtsov, a leading research fellow at ITMO University and one of the paper's co-authors. "These are partly light, meaning that they can be used to transfer information very fast; but at the same time they can interact with each other very well."

It seems like polaritons are a straightforward solution, and all that remains is to build a polariton-based transistor. However, it is not that easy: we need to design a system in which these particles can exist long enough while still maintaining their high interaction strength. In the labs of ITMO's Department of Physics and Engineering, polaritons are created with the help of a laser, a waveguide, and an extremely thin molybdenum diselenide semiconductor layer.

A three-atom-thick semiconductor layer is placed on a nanophotonic waveguide with a precise grid of very fine grooves engraved on its surface. The sample is then illuminated with a red laser to create excitons in the semiconductor. These excitons couple with light particles, creating polaritons that are "trapped" in the system.

Polaritons obtained in this way not only exist for relatively long periods of time, but also have extra high nonlinearity, meaning that they actively interact with each other.

"It brings us closer to creating an optical transistor, as we now have a planar platform less than 100 nanometers thick, which could be integrated on a chip. As the nonlinearity is rather high, we would not need a powerful laser - a small red light source will suffice, which could also be integrated onto the chip," elaborates Vasily Kravtsov.

At the moment, the study continues: the researchers still have to demonstrate the efficiency of their system at room temperature.

Credit: 
ITMO University

Sweet as: The science of how diet can change the way sugar tastes

Researchers at the University of Sydney have discovered the basic science of how sweet taste perception is fine-tuned in response to different diets. While it has long been known that food can taste different based on previous experience, until now we didn't know the molecular pathways that controlled this effect.

Professor Greg Neely at the Charles Perkins Centre and School of Life and Environmental Sciences with Professor Qiaoping Wang (formerly at the Charles Perkins Centre and now based at Sun Yat-Sen University, China) used fruit flies to study sweet taste. They learned that taste is highly subjective based on previous experience.

Professor Neely said they learned four important things:

1. The food animals eat can change how they perceive future food.

2. This response uses the same machinery that the brain uses to learn.

3. Pathways that extend lifespan are also involved in enhancing taste perception: diets that promote long life in fruit flies were found to enhance taste perception as well.

4. Lifespan, learning and sensory perception are linked in ways we are just starting to understand.

"We found that the fruit fly 'tongue' - taste sensors on its proboscis and front feet - can learn things using the same molecular pathways that the fly brain uses to learn things. Central to this is the neurotransmitter dopamine.

"It turns out these are also the same chemical pathways that humans use to learn and remember all sorts of things," Professor Neely said. "This really highlights how learning is a whole-body phenomenon; and was a complete surprise to us."

Professor Wang, who led the study, said: "We were surprised to find that a protein-restricted diet that makes an animal live much longer also turns up the intensity of sucrose perception for that animal, and that is dependent on the same learning and longevity pathways.

"The response was also really specific. For example, when we fed flies food that had no sweetness, the animals' sweet taste perception was enhanced, but only for glucose, not for fructose. We have no idea why they specifically focus just on one kind of sugar when they perceive them both as sweet."

"We also found that eating high amounts of sugar suppressed sweet taste perception, making sugar seem less sweet," Professor Neely said. "This finding, which occurs through a different mechanism, matched nicely with recent results from our colleague Monica Dus at the University of Michigan, who is the world expert in this area."

Taste study

The researchers found that changing the diet of the fruit fly (increasing sugar, removing the taste of sugar, increasing protein, or swapping sugar for complex carbohydrate) drastically altered how well the fly could taste subsequent sugar after a few days. Flies normally live about 80 days in optimal circumstances.

"We found that when flies ate unsweetened food, this made sugary food taste much more intense," Professor Wang said.

"Then we looked at all the proteins that changed in the fruit fly 'tongue' in response to diet, and we investigated what was happening," Professor Neely said.

They found the sensation of taste is controlled by dopamine (the "reward" neuromodulator). The researchers then mapped the pathway and found the same pathways that are well established as controlling learning and memory or promoting long life also enhance taste sensation.

"While this work was conducted in fruit flies, the molecules involved are conserved through to humans. We know humans also experience changes in taste perception in response to diet, so it's possible the whole process is conserved; we will have to see," Professor Wang said.

The research, published in Cell Reports, is a follow-up study to Professor Neely's work testing the effects of artificial sweeteners in humans. That research found that artificial sweeteners activate a neuronal starvation pathway and end up promoting increased food intake, especially when combined with a low-carbohydrate diet.

"Our first studies were focused on how different food additives impact the brain, and from this we found taste changed in response to diet, so here we followed up that observation and describe how that works," Professor Neely said. "Turns out the fly 'tongue' itself is remembering what has come before, which is kind of neat."

Credit: 
University of Sydney

Science snapshots from Berkeley Lab: 3D nanoparticles and magnetic spin

image: 3D images of platinum particles between 2-3 nm in diameter shown rotating in liquid under an electron microscope. Each nanoparticle has approximately 600 atoms. White spheres indicate the position of each atom in a nanoparticle.

Image: 
Courtesy of IBS

Since their invention in the 1930s, electron microscopes have helped scientists peer into the atomic structure of ordinary materials like steel, and even exotic graphene. But despite these advances, such imaging techniques cannot precisely map out the 3D atomic structure of materials in a liquid solution, such as a catalyst in a hydrogen fuel cell, or the electrolytes in your car's battery.

Now, researchers at Berkeley Lab, in collaboration with the Institute for Basic Science in South Korea, Monash University in Australia, and UC Berkeley, have developed a technique that produces atomic-scale 3D images of nanoparticles tumbling in liquid between sheets of graphene, the thinnest material possible. Their findings were reported April 2 in the journal Science.

"This is an exciting result. We can now measure atomic positions in three dimensions down to a precision six times smaller than hydrogen, the smallest atom," said study co-author Peter Ercius, a staff scientist at Berkeley Lab's Molecular Foundry.

The technique, called 3D SINGLE (Structure Identification of Nanoparticles by Graphene Liquid cell Electron microscopy), employs one of the world's most powerful microscopes at Berkeley Lab's Molecular Foundry. The researchers captured thousands of images of eight platinum nanoparticles "trapped" in liquid between two graphene sheets - called a "graphene window."

These graphene sheets - each one just an atom thick - are "strong enough to contain tiny pockets of liquid necessary to acquire high-quality images of the nanoparticles' atomic arrangement," Ercius explained.

The researchers then adapted computer algorithms originally designed for biological studies to combine many 2D images into atomic-resolution 3D images.

The achievement, which improves upon a technique first reported in 2015, marks a significant milestone for the researchers. "With 3D SINGLE, we can determine why such small nanoparticles are more efficient catalysts than larger ones in fuel cells and hydrogen vehicles," Ercius said.

Fine-Tuning Magnetic Spin for Faster, Smaller Memory Devices

Unlike the magnetic materials used to make a typical memory device, antiferromagnets won't stick to your fridge. That's because the magnetic spins in antiferromagnets are oppositely aligned and cancel each other out.

Scientists have long theorized that antiferromagnets have potential as materials for ultrafast stable memories. But no one could figure out how to manipulate their magnetization to read and write information in a memory device.

Now, a team of researchers at Berkeley Lab and UC Berkeley working in the Center for Novel Pathways to Quantum Coherence in Materials, an Energy Frontier Research Center funded by the U.S. Department of Energy, have developed an antiferromagnetic switch for computer memory and processing applications. Their findings, published in the journal Nature Materials, have implications for further miniaturizing computing devices and personal electronics without loss of performance.

Using a focused ion beam instrument at Berkeley Lab's Molecular Foundry, the scientists - led by James Analytis, a faculty scientist in Berkeley Lab's Materials Sciences Division and associate professor and Kittel Chair of Condensed Matter Physics at UC Berkeley - fabricated the device from atomically thin sheets of niobium disulfide, a transition metal dichalcogenide (TMD). To make the TMD antiferromagnetic, they intercalated layers of iron atoms between the niobium disulfide sheets.

Study co-authors Nityan Nair and Eran Maniv discovered that applying small pulses of electrical current rotates the spins of the antiferromagnet, which in turn switches the material's resistance from high to low.

To their surprise, they also found that "these magnetic spins can be flipped or manipulated with small applied currents, around 100 times smaller than those used in any other materials with a similar response," said Analytis.

The researchers next plan to test different antiferromagnetic TMDs in the hope of identifying a system that operates at room temperature, and thus to further develop the field of spin-based electronics, or spintronics, in which information is transported by the electrons' magnetic spin.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Stuttering DNA orchestrates the start of the mosquito's life

image: This is an Aedes aegypti mosquito.

Image: 
Hans Smid

All organisms have DNA, the genetic material that provides a blueprint for life. The long double-helix-shaped DNA molecules in the body's cells are first transcribed into RNA molecules, which are then translated into proteins that ensure the functioning of the cell and the entire organism. But there are large parts of the DNA that are not used for making proteins. This is called 'junk DNA', because its function remained unclear for a long time. However, a certain type of junk DNA that is found in mosquitoes and which repeats itself dozens of times, known as 'satellite DNA', has now been shown to play an essential role in the early development of mosquito embryos.

Researchers at Radboud university medical center published their findings in the scientific journal Nature.

Early development

All animals--but in this study specifically the yellow fever mosquito Aedes aegypti--are composed of a variety of cell types that form different tissues and organs, all originating from a single fertilized egg. This is a complex process; in order to ensure that the fertilized egg successfully develops into an embryo, the mother not only provides half of the hereditary DNA, but she also provides the egg with extra proteins and RNA.

These RNAs and proteins are essential additions, because these molecules direct the first cell divisions of the fertilized egg. Only after a number of cell divisions have taken place can the mosquito-to-be produce the proteins and RNAs that will prompt further development. At the same time, the mother's added proteins and RNAs must be broken down in time, so that they do not disturb the subsequent development of the mosquito.

Orchestrating RNAs

It turns out that a certain piece of satellite DNA in the nascent mosquito embryo is responsible for the breakdown of the maternal RNA. But how does this work? Researchers from the laboratory of Professor Ronald van Rij, chief among them PhD candidate Rebecca Halbach, discovered that the stuttering DNA produces two small RNA molecules during the earliest stages of the embryonic development of the mosquito. These small RNA molecules do not produce any protein; instead, they regulate the activity of other pieces of RNA that do encode proteins. In this case, they bind to the mother's RNA molecules, which are then broken down. This step is so essential that the absence of these 'regulatory RNAs' results in the continued presence of the mother's RNA molecules, which disrupts the subsequent development of the embryo.

Satellites and dinosaurs

Van Rij emphasizes how extraordinary this research has been: "During this study, we made some completely unexpected discoveries. Even though satellite DNA was first discovered sixty years ago, little is known about its function. In this study, we revealed that it actually has a very important function during a critical phase of development."

It also became apparent that the mechanism is quite old in evolutionary terms. Van Rij: "Together with researchers from Wageningen University, we examined a large group of mosquito species. This showed us that this satellite DNA and the specific regulatory RNAs originated around 200 million years ago, which was during the late Triassic period, an era that coincided with the rise of the dinosaurs. I think it's great that these small RNAs have remained unchanged for so long. This also specifically suggests that they have an important function."

From mosquito to human?

There are clear similarities between the first steps in the embryonic development of different animal species, but there are large differences at the molecular level. This particular piece of stuttering DNA is not found in humans, but it is possible that other satellite DNA plays a role in the embryonic development of humans or other animals.

Background: an accidental discovery

This research builds upon an accidental discovery. Aedes aegypti is a mosquito that transmits important pathogens, such as dengue virus and Zika virus. Van Rij's research group examines how these types of viruses are transmitted by mosquitoes. "We do a lot of research on the mosquito's immune system against viruses," says Van Rij. "This immune system recognizes the viruses' RNA and breaks it down into small fragments. During an analysis of the RNA fragments in virus-infected cells, researchers Pascal Miesen and Rebecca Halbach accidentally stumbled upon small RNA molecules that did not derive from the virus. The small RNAs turned out to be produced from satellite DNA from the cell itself. We found this to be so remarkable that we decided to investigate further. After five years of research--which was partly made possible by an NWO Vici grant--it has now led to this Nature publication."

Credit: 
Radboud University Medical Center

Achieving strong structures with carbon fiber reinforced plastics

image: (a) The unstrengthened control specimen fails by classical global buckling of the both-end-pinned column. The deformed shape of all CFRP-strengthened specimens differs markedly from that of the control specimens, enabling an effective increase in the maximum compressive load.

Image: 
COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.

Researchers at the Structural Engineering Laboratory, Department of Architecture and Civil Engineering, Toyohashi University of Technology have developed a new concept for strengthening steel in critical building structures using bond-free carbon fiber reinforced plastic (CFRP) laminates to enhance the buckling performance of structural steel elements. This method does not require steel surface treatment prior to CFRP application because the CFRP is not bonded onto the surface; instead, it contributes to structural strength through its flexural rigidity. The research findings were published in Construction and Building Materials in early 2020.

Following its success in reinforcing concrete in the civil engineering field, CFRP has been developed as a means of strengthening steel members in place of conventional steel plates. CFRP is preferred because it offers several advantages, such as light weight, a high strength-to-weight ratio, and excellent fatigue and corrosion resistance. To date, however, research and development on strengthening steel with CFRP has focused primarily on bonding techniques, in which the CFRP is attached to the steel surface with an adhesive. Bonded strengthening has disadvantages, as complex and time-consuming surface treatments are required before CFRP installation. Moreover, the bonding strength between steel and CFRP, the key aspect of this strengthening technique, can decrease significantly owing to environmental exposure over the service life. Simply replacing one bonded-CFRP technique with another is not an appropriate solution, because it is unlikely to be cost-effective.

The research team instead developed a method for strengthening steel with CFRP that does not bond it to the steel surface. This method has been shown to delay buckling and increase the compression capacity of steel bars, with the capacity gain depending on the number of carbon fiber layers used.

"As an alternative to the steel strengthening method using CFRP which is bonded to the steel surface, we developed this unbonded-CFRP method," explained the primary author, Fengky Satria Yoresta. "The major advantages of this method are that it is easier and less time-consuming to implement, especially when it is applied to the existing elements of building structures. No more troublesome steel surface treatments are needed, such as sand blasting, grit blasting, or hand grinding, and this leads to significant cost savings," he said.

Associate Professor Yukihiro Matsumoto, the leader of the research team, added, "Nearly all previous studies used adhesively-bonded joints to strengthen steel members with CFRP. This method is quite complex because appropriate steel surface treatments are required before the application of the CFRP to obtain an acceptable bond between the CFRP and the steel surface. The surface treatment conditions also affect bonding strengths."

"Moreover, we cannot perfectly estimate the effects of environmental exposure during the service life on the bonding performance between the CFRP and the steel. As such, we sought to improve on conventional methods by developing our own bond-free strengthening method."

"The unbonded strengthening method is useful, easy to apply, and manageable," he said. "However, our method does not transfer stress smoothly, so an appropriate mechanical model needed to be established. Consequently, we performed mechanical simulations and experiments to demonstrate this," he added.

These findings lead the research team to believe that the bond-free CFRP method can be applied not only in civil engineering but also in other fields, such as the aerospace, automotive, and marine industries. This promising new method is expected to be adopted to rapidly produce innovative, high-quality products.

Credit: 
Toyohashi University of Technology (TUT)

New isomer separation method a boon for research on protein oxidation

image: Chemists at UC Santa Cruz reported a new method for separation of methionine sulfoxide diastereomers that opens up new opportunities for studying their roles in biological processes.

Image: 
Cover art by J. Raskatov

Proteins are made up of long chains of amino acids, and most proteins have one or more of the sulfur-containing amino acid methionine. Oxidation of the sulfur atom in methionine is an important biomolecular reaction that can have a wide range of biological consequences depending on the context and the protein involved.

Interestingly, the sulfur oxidation of methionine can give rise to two different versions of the oxidized molecule (methionine sulfoxide) differing only in the 3-dimensional spatial arrangement of its atoms. The two versions are called stereoisomers (or, more precisely, diastereomers, which are stereoisomers that are not mirror images). Researchers studying the biological effects of sulfur oxidation would like to be able to separate the two stereoisomers, but this has been extremely difficult to do.

Now, chemists at UC Santa Cruz have reported a new method for separation of methionine sulfoxide diastereomers that opens up new opportunities for studying their roles in biological processes. In a paper published April 6 in Chemistry - A European Journal, they reported obtaining both stereoisomers in purities exceeding 99%.

"The field of methionine oxidation has been hampered for decades by a lack of robust access to these reagents, and we believe this will be a tremendous boost," said first author Jevgenij Raskatov, assistant professor of chemistry and biochemistry at UC Santa Cruz.

Raskatov's lab collaborated with researchers at the California Institute of Technology on the paper, which was featured on the cover of the journal. Raskatov said he has already heard from other researchers in the field who are interested in the new method.

Raskatov's team used an advanced chromatography technique called supercritical fluid chromatography to purify the methionine sulfoxide stereoisomers. Supercritical fluids have the properties of both liquids and gases, which can be very useful in chromatography. The researchers also analyzed the structures of the purified stereoisomers using x-ray crystallography and confirmed their remarkable stereochemical stability.

In addition to Raskatov, the coauthors of the paper include Scott Virgil and Lawrence Henling at Caltech and Hsiau-Wei Lee, Ka Chan, Ariel Kuhn, and Alejandro Foley at UC Santa Cruz.

Credit: 
University of California - Santa Cruz

Machine learning reveals new candidate materials for biocompatible electronics

image: Machine learning tools developed by Assoc. Prof. Andrew Ferguson and his collaborators are able to screen self-assembling peptides to find the best candidates for electronic, biocompatible materials.

Image: 
Image courtesy of Kirill Shmilovich et al.

Scientists and engineers are on a quest to develop electronic devices that are compatible with our bodies: think of materials that can help wire neurons back together after brain injuries, or diagnostic tools that can easily be absorbed within the body.

A family of self-assembling peptides, called π-conjugated oligopeptides, has shown promise as the basis of the next generation of these electronic, biocompatible materials. But identifying the right molecular sequences to create the optimal self-assembled nanostructures would require testing thousands of possibilities, each taking approximately one month to test in the lab.

Assoc. Prof. Andrew Ferguson and his collaborators have sped up that process by developing machine learning tools that can screen for the best candidates. By screening 8,000 candidate self-assembling peptides, the team was able to rank each design, paving the way for experimentalists to test the most promising ones.

The results were published in the journal J. Phys. Chem. B. The paper was selected as an ACS Editors' Choice, which offers free public access to new research of importance to the global scientific community, and was also featured on the journal cover.

"By understanding data science, materials science, and molecular science, we were able to find an innovative way to screen for new possible candidates," Ferguson said. "The fact that this paper was chosen as an ACS Editors' Choice shows that there is a lot of interest in coupling artificial intelligence to domain science. It's an important problem that is of broad interest to the physical chemistry community."

Ranking peptides for experimentalists

To help find the best candidates, Ferguson and graduate student Kirill Shmilovich screened a family of π-conjugated oligopeptides using machine learning and molecular simulation. The set included 8,000 potential peptides, obtained by keeping the same core and varying the three amino acids on each side of the molecule. (The amino acids on the sides are symmetrical -- changing one on one side changes it on the other side as well.)

Using a form of machine learning known as active learning or Bayesian optimization to guide molecular simulations, they were able to construct reliable data-driven models of how the sequence of the peptide influenced its properties after considering only 186 peptides.

The model predictions could then be reliably extrapolated to predict the properties of the rest of the peptide family. The process also removed human bias from the equation, letting artificial intelligence find features of peptide designs that researchers hadn't considered before that actually made them better candidates.
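The loop described above can be sketched in a few lines. This is a scaled-down, hypothetical illustration of active-learning-guided screening, not the authors' code: a simple nearest-neighbor surrogate stands in for their Gaussian-process model, a cheap function stands in for the month-long simulations, and the pool and budget are shrunk so the sketch runs instantly (the real study ranked 8,000 peptides after simulating only 186).

```python
import math
import random

random.seed(0)

def simulate(x):
    # Stand-in for an expensive molecular simulation (hypothetical).
    return math.sin(3 * x[0]) + x[1] ** 2

def surrogate(x, labeled):
    # 1-nearest-neighbor surrogate: returns a prediction plus a
    # distance-based "uncertainty" (far from any labeled point = uncertain).
    nearest = min(labeled, key=lambda p: math.dist(x, p[0]))
    return nearest[1], math.dist(x, nearest[0])

# Candidate library analogue: random 2-D feature vectors.
pool = [(random.random(), random.random()) for _ in range(500)]

# Seed with a few simulated candidates, then run active-learning rounds:
# each round simulates the candidate the surrogate is least certain about.
labeled = [(x, simulate(x)) for x in pool[:5]]
for _ in range(15):
    x_next = max(pool, key=lambda x: surrogate(x, labeled)[1])
    labeled.append((x_next, simulate(x_next)))

# Rank the entire pool by surrogate prediction, no further simulation needed.
ranking = sorted(pool, key=lambda x: surrogate(x, labeled)[0], reverse=True)
print(f"{len(labeled)} simulated, {len(ranking)} ranked")
```

The key design point mirrors the study: the expensive evaluator is called only a handful of times, while the cheap surrogate extrapolates to the full candidate set.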

They then ranked each peptide and handed off their results to their experimental collaborators, who will then test the top candidates in the lab. Next, they hope to expand their system to include trying out different π-conjugated cores, while feeding new experimental data back into the loop to further strengthen their models.

They also hope to use this machine learning system for designing proteins, optimizing self-assembling colloids to make atomic crystals, and even to one day incorporate these tools into a self-driving laboratory, where artificial intelligence would take data, create predictions, run experiments, then feed that data back to the model -- all without human intervention.

"This is a method that could be useful in many different domains," Ferguson said.

Credit: 
University of Chicago

NASA continues tracking Tropical Cyclone Harold's excessive rainfall

video: This animation shows the heavy precipitation associated with Tropical Cyclone Harold as it progresses from the Solomon Islands (upper left) on April 2, 2020 to its movement beyond the island of Tonga on April 8 (lower right). Harold's core region produced precipitation rates in excess of 30 millimeters per hour, which is equivalent to a 7-inch-deep rain accumulation if the core region were to remain over a given location for 6 hours. The precipitation estimates in this animation come from the IMERG multi-satellite algorithm developed by NASA and run in near real-time.

Image: 
 NASA/JAXA, B. Jason West and Owen Kelley

NASA satellites tracked powerful Tropical Cyclone Harold from the Solomon Islands to the island of Tonga in the South Pacific. Satellite data was used to calculate the rainfall generated as Harold moved through the Southern Pacific Ocean. NASA also provided infrared imagery on Harold.


On April 8 at 0141 UTC (April 7 at 9:41 p.m. EDT), NASA's Aqua satellite analyzed the storm using the Atmospheric Infrared Sounder, or AIRS, instrument when it was a major hurricane. AIRS found cloud top temperatures as cold as or colder than minus 80 degrees Fahrenheit (minus 62.2 degrees Celsius). NASA research has shown that cloud top temperatures that cold indicate strong storms with the capability to create heavy rain.
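The temperature conversion quoted above can be checked directly with the standard Fahrenheit-to-Celsius formula (a standalone sketch, not part of NASA's processing):

```python
def f_to_c(deg_f):
    # Standard Fahrenheit-to-Celsius conversion.
    return (deg_f - 32) * 5 / 9

# The minus 80 F cloud-top threshold corresponds to minus 62.2 C.
print(round(f_to_c(-80), 1))
```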

Visualizing Harold's Heavy Rainfall

Harold's path of heavy rain from April 2 to 9 was calculated and mapped in an animation created at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

The animation shows the heavy precipitation associated with Tropical Cyclone Harold as it progresses from the Solomon Islands on April 2, 2020 to its movement beyond the island of Tonga on April 8. Harold's core region produced precipitation rates in excess of 30 millimeters per hour, which is equivalent to a 7-inch-deep rain accumulation if the core region were to remain over a given location for 6 hours. The precipitation estimates in this animation come from the IMERG multi-satellite algorithm developed by NASA and run in near real-time.
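The accumulation figure above is simple arithmetic and can be verified (a standalone sketch): a 30 mm/hour rate sustained for 6 hours, converted from millimeters to inches.

```python
rate_mm_per_hr = 30                  # core-region precipitation rate
hours = 6
total_mm = rate_mm_per_hr * hours    # 180 mm of accumulated rain
total_in = total_mm / 25.4           # millimeters to inches
print(round(total_in, 1))            # roughly 7 inches, as stated
```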

What is NASA's IMERG?

NASA's Integrated Multi-satellitE Retrievals for GPM, or IMERG, is a satellite rainfall product that combines observations from a fleet of satellites in near-real time to provide near-global estimates of precipitation every 30 minutes. By combining NASA precipitation estimates with other data sources, we can gain a greater understanding of major storms that affect our planet.

No single satellite passes over every location at all times. Instead, IMERG "morphs" high-quality satellite observations along the direction of the steering winds to deliver information about rain at times and places where such satellite overflights did not occur. This morphing is particularly important over the majority of the world's surface that lacks ground-radar coverage. Basically, IMERG fills in the blanks between weather observation stations.
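The morphing idea can be illustrated with a toy one-dimensional sketch. This is an illustration only, not the actual IMERG scheme; the grid, the two-cell drift, and the rain values are all invented for the example:

```python
# Toy 1-D "morphing": advect two observed rain profiles toward each
# other along a steering wind and blend them by time weight.
def morph(obs_t0, obs_t1, shift_cells, frac):
    """Estimate rain at fraction `frac` of the way from t0 to t1,
    assuming the cell drifts `shift_cells` grid cells over the interval."""
    n = len(obs_t0)
    # Advect the earlier snapshot forward and the later one backward.
    fwd = [obs_t0[(i - round(shift_cells * frac)) % n] for i in range(n)]
    bwd = [obs_t1[(i + round(shift_cells * (1 - frac))) % n] for i in range(n)]
    # Blend the two advected fields by their time weights.
    return [(1 - frac) * f + frac * b for f, b in zip(fwd, bwd)]

rain_t0 = [0, 0, 30, 10, 0, 0, 0, 0]   # mm/hr at the first overpass
rain_t1 = [0, 0, 0, 0, 30, 10, 0, 0]   # same cell, two cells downwind
print(morph(rain_t0, rain_t1, shift_cells=2, frac=0.5))
```

Halfway between the overpasses, the estimated rain cell sits one grid cell downwind of its first observed position, which is the essence of filling in the gaps between observations.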

Harold's Status on April 9, 2020

At 11 a.m. EDT (1500 UTC), the Joint Typhoon Warning Center's (JTWC) final warning on Harold noted that the system had weakened to the strength of a Category 1 hurricane on the Saffir-Simpson Hurricane Wind Scale, with maximum sustained winds near 80 knots (92 mph/148 kph). Harold was located near latitude 26.7 degrees south and longitude 166.1 degrees west, approximately 426 nautical miles south-southeast of Niue, and had tracked east-southeastward at 30 knots (35 mph/56 kph).

NASA Finds Harold Hit by Wind Shear

On April 9, infrared imagery, from the AIRS instrument aboard NASA's Aqua satellite, revealed that strong, persistent northwesterly vertical wind shear continues to displace the bulk of central convection and thunderstorms to the south and southeast of Harold's center.

NASA's Global Precipitation Measurement (GPM) core satellite's Microwave Imager (GMI) instrument still showed a well-defined microwave eye feature and tightly curved shallow banding of thunderstorms. However, the JTWC noted that the imagery indicated only limited deep convective banding over the southern semicircle.

What is Wind Shear?

In general, wind shear is a measure of how the speed and direction of winds change with altitude. Tropical cyclones are like rotating cylinders of winds. Each level needs to be stacked on top of the others vertically in order for the storm to maintain strength or intensify. Wind shear occurs when winds at different levels of the atmosphere push against the rotating cylinder of winds, weakening the rotation by pushing it apart at different levels.

JTWC forecasters said, "Additionally, Harold will undergo extra-tropical transition as it accelerates east-southeastward within the mid-latitude westerlies and is forecast to complete extra-tropical transition by 12 hours as the system gains frontal characteristics."

What does Extra-tropical Mean?

When a storm becomes extra-tropical, it means that a tropical cyclone has lost its "tropical" characteristics. The National Hurricane Center defines "extra-tropical" as a transition that implies both poleward displacement (meaning it moves toward the north or south pole) of the cyclone and the conversion of the cyclone's primary energy source from the release of latent heat of condensation to baroclinic (the temperature contrast between warm and cold air masses) processes. It is important to note that cyclones can become extratropical and retain winds of hurricane or tropical storm force.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration using a fleet of satellites contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

For more information about NASA's IMERG, visit: https://pmm.nasa.gov/gpm/imerg-global-image

By Rob Gutro
NASA's Goddard Space Flight Center

Credit: 
NASA/Goddard Space Flight Center

Identical mice, different gut bacteria, different levels of cancer

image: A researcher examines a mouse in the University of Michigan Germ-Free Mouse Facility.

Image: 
Photo courtesy of ULAM Germ-Free Mouse Facility.

Researchers at the University of Michigan Rogel Cancer Center are shedding new light on the way microorganisms that live in the gastrointestinal tract can affect the development of colorectal cancer.

Some types of gut bacteria are better than others at stimulating certain immune cells, specifically CD8+ T cells, in the body, they found. And while these CD8+ T cells normally help protect the body against cancer, overstimulating them may promote inflammation and exhaust the T cells -- which can actually increase susceptibility to cancer, according to a new mouse model study published in Cell Reports.

The work will help scientists pinpoint which populations of bacteria are tumor-suppressive or tumor-promoting and how, says study first author Amy Yu, a doctoral candidate in immunology at U-M.

"There has also been a lot of excitement about the role bacteria may play in improving the effectiveness of immunotherapy," says senior study author Grace Chen, M.D., Ph.D., an associate professor of hematology/oncology at Michigan Medicine and member of the Rogel Cancer Center. "This work suggests it may be a double-edged sword -- and that promoting T cell exhaustion is something researchers need to watch out for."

In the U.S., colorectal cancer is the third-leading cause of cancer-related death in both men and women, according to the American Cancer Society.

Different mice, different outcomes

The current study builds on previous work from Chen's group, which found that disturbances of the gut microbiome can directly contribute to the development of cancer.

The group found that mice from two different research colonies had vastly different susceptibility to colorectal cancer when they were exposed to a carcinogen as well as an agent that promotes gastrointestinal inflammation.

The mice from the first colony grew an average of five tumors, while the mice from the second colony developed 15 tumors and had a more significant inflammatory response.

When the researchers sequenced fecal bacteria from the two different colonies, they found they had distinct microbiomes composed of different types of bacteria.

"This was exciting because my lab is very interested in which bacteria have the biggest impact on colorectal cancer risk, and by what mechanisms," Chen says.

Same mice, different outcomes

To better understand what was causing the differences the researchers were seeing in the two different colonies of mice, they transplanted gut bacteria from each of the two colonies into genetically identical mice that had been bred in a bacteria-free environment.

Once again, mice with bacteria from the second colony fared far worse.

"This showed that the different gut microbiota directly contributed to tumor development," Yu notes. "Our data ultimately revealed nine different bacterial populations that may have tumor-suppressive or tumor-promoting activity."

Investigating mechanisms

The team next conducted experiments to better understand what was driving the increased inflammation and tumor growth associated with the bacteria from the second mouse colony.

Through immune cell profiling, they found that there were more T cells in the colon tissue of mice with bacteria from the second colony, and many more of a type of cell called CD8+.

"It's a little counterintuitive, since T cells and CD8+ cells are usually associated with better outcomes in colorectal cancer patients," Chen says. "We hypothesized that these cells get over-activated in the presence of certain bacteria and then exhausted, leaving them less capable of killing tumor cells."

When the bacteria from the second mouse colony were transplanted into mice that were engineered to lack CD8+ T cells, fewer tumors developed, supporting T cells' role in promoting the growth of the cancer in the presence of certain bacteria, Chen notes.

Meanwhile, the lab continues to build on the research as it investigates the mechanisms by which different bacteria may contribute to promoting or protecting against the development of colorectal cancer.

Credit: 
Michigan Medicine - University of Michigan

Drug shows promise in reversing kidney damage caused by lupus

A drug used for cancer therapy has shown promise in reversing kidney damage caused by systemic lupus erythematosus (SLE, or lupus), according to a Yale-led study published April 8 in the journal Science Translational Medicine.

"Kidney damage affects about half of the patients with lupus, sometimes leading to renal failure with a requirement for dialysis or transplantation," said Joe Craft, the Paul B. Beeson Professor of Medicine (rheumatology) and professor of immunobiology. "Finding what causes that damage is extremely important."

Lupus is an autoimmune disease in which immune cells attack tissues in the body of the host, with kidneys being particularly susceptible in a condition called lupus nephritis. In lupus nephritis, the patient's own T cells infiltrate kidney tissue and trigger a decrease in oxygen, leading to tissue damage and potentially end-stage kidney disease.

Craft and lead author Ping-Min Chen, a former graduate student at Yale now a postdoctoral researcher at Harvard Medical School, investigated the effects these invasive T cells had in lupus nephritis. They found that the T cells implicated in lupus express a factor called hypoxia-inducible factor-1 (HIF-1), which is regulated by oxygen levels in the kidney. When activated, HIF-1 instructs the T cells to attack tissues, further lowering tissue oxygen levels and causing more kidney damage.

The researchers theorized that blocking HIF-1 might help prevent oxygen depletion and damage of kidney tissue. They used a drug to inhibit HIF-1 -- which has been used in clinical trials as a treatment for cancer in humans -- to treat mice with models of lupus. They found the drug slowed infiltration of T cells into kidney tissue and reversed damage. They also found that the same HIF-1 regulated damage is present in biopsy samples of SLE patients with lupus nephritis.

"The findings suggest this therapy might be beneficial in lupus nephritis," Craft said. "Since this drug and others that block HIF-1 function have been used in humans with cancer, they could be used for treatment of patients with lupus."

Credit: 
Yale University

Origins of Earth's magnetic field remain a mystery

Microscopic minerals excavated from an ancient outcrop of Jack Hills, in Western Australia, have been the subject of intense geological study, as they seem to bear traces of the Earth's magnetic field reaching as far back as 4.2 billion years ago. That's almost 1 billion years earlier than when the magnetic field was previously thought to originate, and nearly back to the time when the planet itself was formed.

But as intriguing as this origin story may be, an MIT-led team has now found evidence to the contrary. In a paper published in Science Advances, the team examined the same type of crystals, called zircons, excavated from the same outcrop, and concluded that the zircons they collected are unreliable as recorders of ancient magnetic fields.

In other words, the jury is still out on whether the Earth's magnetic field existed earlier than 3.5 billion years ago.

"There is no robust evidence of a magnetic field prior to 3.5 billion years ago, and even if there was a field, it will be very difficult to find evidence for it in Jack Hills zircons," says Caue Borlina, a graduate student in MIT's Department of Earth, Atmospheric, and Planetary Sciences (EAPS). "It's an important result in the sense that we know what not to look for anymore."

Borlina is the paper's first author; his co-authors include EAPS Professor Benjamin Weiss, Principal Research Scientist Eduardo Lima, and Research Scientist Jahandar Ramezani of MIT, along with others from Cambridge University, Harvard University, the University of California at Los Angeles, the University of Alabama, and Princeton University.

A field, stirred up

Earth's magnetic field is thought to play an important role in making the planet habitable. Not only does a magnetic field set the direction of our compass needles, it also acts as a shield of sorts, deflecting away solar wind that might otherwise eat away at the atmosphere.

Scientists know that today the Earth's magnetic field is powered by the solidification of the planet's liquid iron core. The cooling and crystallization of the core stirs up the surrounding liquid iron, creating powerful electric currents that generate a magnetic field stretching far out into space. This magnetic field is known as the geodynamo.

Multiple lines of evidence have shown that the Earth's magnetic field existed at least 3.5 billion years ago. However, the planet's core is thought to have started solidifying just 1 billion years ago, meaning that the magnetic field must have been driven by some other mechanism prior to 1 billion years ago. Pinning down exactly when the magnetic field formed could help scientists figure out what generated it to begin with.

Borlina says the origin of Earth's magnetic field could also illuminate the early conditions in which Earth's first life forms took hold.

"In the Earth's first billion years, between 4.4 billion and 3.5 billion years, that's when life was emerging," Borlina says. "Whether you have a magnetic field at that time has different implications for the environment in which life emerged on Earth. That's the motivation for our work."

"Can't trust zircon"

Scientists have traditionally used minerals in ancient rocks to determine the orientation and intensity of Earth's magnetic field back through time. As rocks form and cool, the magnetic moments of electrons within individual grains can align with the surrounding magnetic field. Once the rock cools past a certain temperature, known as the Curie temperature, those orientations are set in stone, so to speak. Scientists can determine the age of the rocks and use standard magnetometers to measure the orientation of their magnetization, to estimate the strength and orientation of the Earth's magnetic field at a given point in time.

Since 2001, Weiss and his group have been studying the magnetization of the Jack Hills rocks and zircon grains, with the challenging goal of establishing whether they contain ancient records of the Earth's magnetic field.

"The Jack Hills zircons are some of the most weakly magnetic objects studied in the history of paleomagnetism," Weiss says. "Furthermore, these zircons include the oldest known Earth materials, meaning that there are many geological events that could have reset their magnetic records."

In 2015, a separate research group that had also started studying the Jack Hills zircons argued that they found evidence of magnetic material in zircons that they dated to be 4.2 billion years old -- the first evidence that Earth's magnetic field may have existed prior to 3.5 billion years ago.

But Borlina notes that the team did not confirm whether the magnetic material they detected actually formed during or after the zircon crystal formed 4.2 billion years ago -- a goal that he and his team took on for their new paper.

Borlina, Weiss, and their colleagues had collected rocks from the same Jack Hills outcrop, and from those samples, extracted 3,754 zircon grains, each around 150 micrometers long -- about the width of a human hair. Using standard dating techniques, they determined the age of each zircon grain, which ranged from 1 billion to 4.2 billion years old.

Around 250 crystals were older than 3.5 billion years. The team isolated and imaged those samples, looking for signs of cracks or secondary materials, such as minerals that may have been deposited on or within the crystal after it had fully formed, and searched for evidence that they were significantly heated over the last few billion years since they formed. Of these 250, they identified just three zircons that were relatively free of such impurities and therefore could contain suitable magnetic records.

The team then carried out detailed experiments on these three zircons to determine what kinds of magnetic materials they might contain. They eventually determined that a magnetic mineral called magnetite was present in two of the three zircons. Using a high-resolution quantum diamond magnetometer, the team looked at cross-sections of each of the two zircons to map the location of the magnetite in each crystal.

They discovered magnetite lying along cracks or damaged zones within the zircons. Such cracks, Borlina says, are pathways that allow water and other elements inside the rock. Such cracks could have let in secondary magnetite that settled into the crystal much later than when the zircon originally formed. Either way, Borlina says the evidence is clear: These zircons cannot be used as a reliable recorder for Earth's magnetic field.

"This is evidence we can't trust these zircon measurements for the record of the Earth's magnetic field," Borlina says. "We've shown that, before 3.5 billion years ago, we still have no idea when Earth's magnetic field started."

Despite these new results, Weiss stresses that previous magnetic analyses of these zircons are still highly valuable.

"The team that reported the original zircon magnetic study deserves a lot of credit for trying to tackle this enormously challenging problem," Weiss says. "As a result of all the work from both groups, we now understand much better how to study the magnetism of ancient geological materials. We now can begin to apply this knowledge to other mineral grains and to grains from other planetary bodies."

Credit: 
Massachusetts Institute of Technology

National online education platforms could make STEM degrees more affordable, Russia-based study shows

An online education model in Russia in which national platforms license STEM (science, technology, engineering, and mathematics) courses from top universities to institutions with instructor shortages could significantly lower instruction costs, allowing resource-constrained universities to enroll more STEM students, according to a new study. "Investments in online education programs could also strengthen instructional resilience of colleges when in-person delivery is not an option, such as right now, when most of the US colleges are closed to mitigate the COVID-19 outbreak," says Igor Chirikov, the lead researcher on the study. Chirikov and colleagues found that exam scores remained relatively unchanged based on the form of instruction, although students who took fully online classes reported being slightly less satisfied with their courses than in-person students or those who took courses in a blended format. Universities around the world (and particularly in China, India, Russia, and the U.S.) are working to improve access to STEM degrees - a challenging goal when STEM programs are costlier than programs in other fields. To test an affordable approach to this challenge, Chirikov et al. randomly assigned 325 second-year college students from three Russian universities to take Engineering Mechanics (EM) and Construction Materials Technology (CMT) courses either fully online, fully in-person, or with online lectures and in-person discussion sections. The researchers then estimated how the online and blended formats could reduce per-student instruction costs for 129 Russian universities, finding blended instruction lowers the per-student cost by 19.2% for EM and 15.4% for CMT, while fully online instruction lowers costs by 80.9% for EM and 79.1% for CMT. These savings would allow universities to teach 3.4% more EM students and 2.5% more CMT students through blended instruction or 18.2% more EM and 15% more CMT students through online instruction.

Credit: 
American Association for the Advancement of Science (AAAS)

Babies in popular low-riding pushchairs are exposed to alarming levels of toxic air pollutants

Parents who are using popular low-riding pushchairs could be exposing their babies to alarming levels of air pollution, finds a new study from the University of Surrey.

In a paper published by Environment International, experts from Surrey's Global Centre for Clean Air Research (GCARE) investigated the amount of harmful air pollutants babies potentially inhale while out in a pram with their parents or carers.

The study looked at three different pushchair types - single pushchairs facing the road, single pushchair facing the adult and double pushchairs facing the road - and assessed the difference in concentration of pollutants compared to those experienced by adults. The GCARE team also investigated whether pushchair covers altered exposure levels.

The team simulated 89 school drop-off and pick-up trips in Guildford, Surrey, walking just over 2km per trip, between 8am and 10am and between 3pm and 5pm.

Significantly, the team found that on average, regardless of the type of pushchair, babies could be breathing 44 per cent more harmful pollutants than their parents during both morning and afternoon school runs.

The GCARE team also found that a child at the bottom of a double pushchair faced up to 72 per cent higher exposure to pollutants than a child on the top seat.

However, the team from Surrey found a ray of hope in the form of pushchair covers, discovering that they reduced concentration of small-sized particles by as much as 39 per cent.

Professor Prashant Kumar, Founding Director of GCARE at the University of Surrey, said: "For parents, nothing is more important than the health of our children and this is why we at the University of Surrey are continuing to build on this research to understand the impact air pollution has on babies travelling in pushchairs.

"Our research shows that choices such as the type of pushchair you use, can impact on the amount of pollution your child faces when you are running a typical errand. But there is cause for some optimism, as our study confirms that pushchair covers and upping the buggy heights appears to have shielded children from an appreciable amount of pollution under certain conditions."

Credit: 
University of Surrey

What do soap bubbles and butterflies have in common?

image: The wing of a typical Common Buckeye (right) and a butterfly bred to enhance the iridescent blue (left). The breeding thickened the chitin lamina of wing scales, changing the structural color of the scales from gold to blue.

Image: 
Aaron Pomerantz

Edith Smith bred a bluer and shinier Common Buckeye at her butterfly farm in Florida, but it took University of California, Berkeley, graduate student Rachel Thayer to explain the physical and genetic changes underlying the butterfly's newly acquired iridescence.

In the process, Thayer discovered how relatively easy it is for butterflies to change their wing colors over just a few generations and found the first gene proven to influence the so-called "structural color" that underlies the iridescent purple, blue, green and golden hues of many butterflies.

Her findings are a starting point for new genetic approaches to investigate how butterflies produce intricate nanostructures with optical properties, which ultimately could help engineers develop new ways to produce photonic nanostructures for solar panels or iridescent colors for paints, clothing and cosmetics.

Structural color is different from pigment color, like that in your skin or on a canvas, which absorbs or reflects different colors of light. Instead, it comes from light's interaction with a solid material in the same way that a transparent bubble develops a colorful sheen. The light penetrates it and bounces back out, interfering with light reflected from the surface in a way that cancels out all but one color.

At the Shady Oak Butterfly Farm in Brooker, Florida, Smith's breeding experiments with the Common Buckeye (Junonia coenia) -- a mostly brown butterfly with showy, colorful spots, found throughout the United States and often raised by butterfly farmers for butterfly gardens or wedding ceremonies -- were ideal for Thayer's study of structural color.

"Edith noticed that sometimes these butterflies have just a few blue scales on the very front part of the forewing and started breeding the blue animals together," said Thayer, who is in UC Berkeley's Department of Integrative Biology. "So, effectively, she was doing an artificial selection experiment, guided by her own curiosity and intuition about what would be interesting."

In a paper appearing online today in the journal eLife, Thayer and Nipam Patel, a UC Berkeley professor of molecular and cell biology who is on leave as director of the Marine Biological Laboratory in Woods Hole, Massachusetts, describe the physical changes in wing scales associated with Smith's experiment on the Common Buckeye, and report one genetic regulator of blue iridescence.

"I especially loved the clear evolutionary context: being able to directly compare the 'before' and 'after' and piece together the whole story," Thayer said. "We know that blueness in J. coenia is a recent change, we know explicitly what the force of selection was, we know the time frame of the change. That doesn't happen every day for evolutionary biologists."

Structural color produces showy butterflies

According to Thayer, hundreds of butterflies have been studied because of the showy structural color in their wing scales. The showiest is the blue morpho, with 5-inch wings of iridescent blue edged with black. Her study, however, focused on a less showy genus, Junonia, and found that iridescent color is common throughout the 10 species, even the drab ones. One unremarkable light gray butterfly, the pansy J. atlites, proved under a microscope to have iridescent rainbow-colored scales whose colors blend together into gray when viewed with the naked eye.

One major lesson from the study, she said, is that "most butterfly patterns probably have a mix of pigment color and structural color, and which one has the strongest impact on wing color depends on how much pigment is there."

Thayer raised both the wild, brownish Common Buckeye and the cross-bred, bluer variety obtained from Smith. Using a state-of-the-art helium ion microscope, she imaged scales from the wings to see which scale structures are responsible for the color and to determine whether the color change was due to a change in structural color, or just a loss of brown pigment that allowed the blue color to stand out.

She found no difference in the amount of brown pigment on the scales, but a significant difference in the thickness of chitin, the strong polymer from which the scale is built and that also generates the structural color. In the wild buckeye, the thickness of the chitin layer was about 100 nanometers, yielding a golden hue that blended with the brown pigment. The bluer buckeye had chitin about 190 nanometers thick -- about the thickness of a soap bubble -- that produced a blue iridescence that outshined the brown pigment.

"They are actually creating the color the same way a soap bubble iridescence works; it's the same phenomenon physically," Thayer said.
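The soap-film analogy can be made quantitative with the standard thin-film interference condition for a film in air, 2nd = (m + 1/2)λ. A sketch, assuming a refractive index of about 1.56 for chitin (an assumed value; the 100 nm and 190 nm lamina thicknesses are the measurements from the article):

```python
# Peak reflected wavelengths for a free-standing thin film in air,
# using the constructive-interference condition 2*n*d = (m + 1/2)*lambda.
# n = 1.56 is an assumed refractive index for chitin.
def reflection_peaks_nm(thickness_nm, n=1.56, orders=range(0, 3)):
    """Return candidate peak wavelengths (nm) for the first few orders m."""
    return [2 * n * thickness_nm / (m + 0.5) for m in orders]

for d in (100, 190):  # wild vs. selectively bred lamina thickness
    visible = [round(lam) for lam in reflection_peaks_nm(d)
               if 380 <= lam <= 750]  # keep only visible-light peaks
    print(f"{d} nm lamina -> visible peaks at {visible} nm")
```

Under these assumptions the 100 nm lamina reflects most strongly near 624 nm (a reddish-gold), while the 190 nm lamina's only visible peak falls near 395 nm (violet-blue), consistent with the color shift the breeding produced.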

She also found that, though the scales from the Junonia butterflies have an elaborate microscopic structure, structural color comes from the bottom, or base, of the scale.

"That is not intuitive, because the top part of the scale has all of these curves and grooves and details that really catch your eye, and the most famous structural colors are elaborate structures, often in the top part of the scale," she said. "But the simple, flat layer at the bottom of the scale controls structural coloration in each species we checked."

"The color comes down to a relatively simple change in the scale: the thickness of the lamina," said Patel. "We believe that this will be a genetically tractable system that can allow us to identify the genes and developmental mechanisms that can control structural coloration."

Thayer also investigated the scales of mutant buckeyes created by Cornell University researchers that lacked a key gene, called optix, that controls color. The micrograph images demonstrated that lack of the gene also increased the thickness of the thin film of chitin in the scales, creating a blue color. Optix is a regulatory gene that controls many other butterfly genes, which Thayer will be looking at next.

"One thing that I thought was cool about our findings was seeing that the same mechanism that has recurred over millions of years of butterfly evolution could be reproduced really rapidly in (Smith's) artificial selection experiment," she said. "That says that color evolving by changes in lamina thickness is a repeatable, important phenomenon."

Credit: 
University of California - Berkeley

Researchers uncover importance of aligning biological clock with day-night cycles

image: New research provides a striking example of the importance of keeping the internal biological clock aligned with the external environment so that processes occur at the right time of day.

Image: 
Golden Lab, UC San Diego

Timing is everything. A fresh example supporting the old saying has been found in connection with the systems regulated by biological clocks.

Research on circadian rhythms, our internal 24-hour patterns that affect sleep-wake and metabolic cycles, has shown that timing is key for human health. When our activities and internal circadian clocks are out of step with the natural day-night cycle--for example, in cases of irregular shift work, jet lag and poor sleep-wake habits--we increase our risk of disease because of the mistiming of important biological processes. But the genetics behind these mechanisms haven't been well established.

Now scientists at the University of California San Diego studying photosynthetic bacteria called cyanobacteria, or "blue-green algae," have identified the roots of a behavior that is regulated by the circadian clock.

The findings, led by Division of Biological Sciences Associate Project Scientist Arnaud Taton and senior authors James and Susan Golden, are published in Nature Communications.

"I think this paper demonstrates the importance of having internal biological time coincide with environmental time," said UC San Diego Distinguished Professor Susan Golden, director of the Center for Circadian Biology and senior author of the paper. "There are lots of human illnesses in which people are poorly aligned to their environment. This can result from habits such as getting too much light at night, eating at odd times of the day and not sleeping regularly. In the cyanobacterium it makes a very big difference for biological time and external environment time to be aligned."

Scientists know that in the hunt for new genes, bacteria incorporate DNA from the environment. Such processes ensure that there is raw material to generate genetic variation, which is how species evolve. Yet the details of this puzzling process are understood in only a few organisms. The ability to take up DNA is typically tightly regulated, suggesting to scientists that it would be detrimental to the organism to indiscriminately take up foreign genes.

In the new study, the researchers identified the DNA uptake machinery in the cyanobacterium Synechococcus elongatus and discovered that the internal circadian clocks within their cells prevent DNA uptake early in the day and enhance the process early at night. They had predicted that clock-mediated expression of certain dusk-peaking, dark-induced genes is central for taking up DNA from the environment. They found that when darkness occurs at the time the cells' internal clock tells them it's dusk, DNA uptake and incorporation increase dramatically. In contrast, darkness at times that do not match the internal clock time fails to provide a boost in DNA uptake and incorporation.

As for why early DNA uptake is discouraged and late is enhanced, scientists aren't quite sure. They are testing hypotheses such as whether it may be helpful to avoid taking up potentially dangerous DNA when viruses are more prevalent, which in some environments is during the day.

"This study provides a striking example of the importance of keeping the internal biological clock aligned with the external environment so that processes occur at the right time of day," the researchers say.

Credit: 
University of California - San Diego