Tech

Controlling artificial cilia with magnetic fields and light

image: Photograph of an array of magnetic cilia folded and held in the tip of tweezers.

Image: 
Jessica A.-C. Liu

Researchers from North Carolina State University and Elon University have made artificial cilia, or hair-like structures, that can bend into new shapes in response to a magnetic field, then return to their original shape when exposed to the proper light source.

"This work expands the capabilities of magnetic cilia and our understanding of their behaviors, which has potential applications in soft robotics, including microrobotics," says Joe Tracy, corresponding author of a paper on the work and a professor of materials science and engineering at NC State. "A key point of this work is that we've demonstrated shape memory magnetic cilia whose shape can be set, locked, unlocked and reconfigured. This property will be useful for enhanced and new applications."

The finding builds on the team's earlier research designing soft robots that could be controlled using magnets and light. However, there are significant departures from the previous work.

"The cilia are actuated by magnetic torques, which means the cilia rotate and align with the field from an inexpensive permanent magnet, instead of being pulled toward the magnet," says Ben Evans, co-author of the paper and a professor of physics at Elon. "Actuation of the soft robots in our earlier work relied on magnetic field gradients, which moved the robot by pulling it. The new approach offers another tool for designing soft robots."

The researchers also developed a theoretical model that allows users to predict how the shape memory magnetic cilia will respond when actuated, or set into motion. In addition, the model explains why the cilia respond the way they do.

"These shape memory magnetic cilia are also simple to fabricate through self-assembly using inexpensive permanent magnets," says Jessica Liu, first author of the paper and a recent Ph.D. graduate from NC State. "We're optimistic that these demonstrations and our model can help the research community design ciliary systems with new capabilities for specific applications."

"We think this work will contribute to advancing the capabilities of soft robotics," Tracy says.

Credit: 
North Carolina State University

Women with Neandertal gene give birth to more children

One in three women in Europe inherited the receptor for progesterone from Neandertals - a gene variant associated with increased fertility, less bleeding during early pregnancy and fewer miscarriages. This is according to a study published in Molecular Biology and Evolution by researchers at the Max Planck Institute for Evolutionary Anthropology in Germany and Karolinska Institutet in Sweden.

"The progesterone receptor is an example of how favourable genetic variants that were introduced into modern humans by mixing with Neandertals can have effects in people living today," says Hugo Zeberg, researcher at the Department of Neuroscience at Karolinska Institutet and the Max Planck Institute for Evolutionary Anthropology, who performed the study with colleagues Janet Kelso and Svante Pääbo.

Progesterone is a hormone that plays an important role in the menstrual cycle and in pregnancy. Analyses of biobank data from more than 450,000 participants - among them 244,000 women - show that almost one in three women in Europe has inherited the progesterone receptor from Neandertals: 29 percent carry one copy of the Neandertal receptor and three percent carry two copies.

"The proportion of women who inherited this gene is about ten times greater than for most Neandertal gene variants," says Hugo Zeberg. "These findings suggest that the Neandertal variant of the receptor has a favourable effect on fertility."

The study shows that women who carry the Neandertal variant of the receptor tend to bleed less during early pregnancy, have fewer miscarriages and give birth to more children. Molecular analyses revealed that these women produce more progesterone receptors in their cells, which may lead to increased sensitivity to progesterone and protection against early miscarriage and bleeding.

Credit: 
Karolinska Institutet

Novel insight reveals topological tangle in unexpected corner of the universe

image: Image depicts some of the polarization lines within a ferroelectric nanoparticle. The lines intertwine into a Hopfion topological structure.

Image: 
Image by Yuri Tikhonov, University of Picardie and Russia’s Southern Federal University, and Anna Razumnaya, Southern Federal University

Scientists find a unique knotted structure — one that repeats itself throughout nature — in a ferroelectric nanoparticle, a material with promising applications in microelectronics and computing.

Just as a literature buff might explore a novel for recurring themes, physicists and mathematicians search for repeating structures present throughout nature.

For example, a certain geometrical structure of knots, which scientists call a Hopfion, manifests itself in unexpected corners of the universe, ranging from particle physics, to biology, to cosmology. Like the Fibonacci spiral and the golden ratio, the Hopfion pattern unites different scientific fields, and deeper understanding of its structure and influence will help scientists to develop transformative technologies.

In a recent theoretical study, scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory, in collaboration with the University of Picardie in France and the Southern Federal University in Russia, discovered the presence of the Hopfion structure in nano-sized particles of ferroelectrics — materials with promising applications in microelectronics and computing.

The identification of the Hopfion structure in the nanoparticles contributes to a striking pattern in the architecture of nature across different scales, and the new insight could inform models of ferroelectric materials for technological development.

Ferroelectric materials have the unique ability to flip the direction of their internal electric polarization — the slight, relative shift of positive and negative charge in opposite directions — when influenced by electric fields. Ferroelectrics can even expand or contract in the presence of an electric field, making them useful for technologies that convert energy between mechanical and electrical forms.

In this study, the scientists combined fundamental topological concepts with novel computer simulations to investigate the small-scale behavior of ferroelectric nanoparticles. They discovered that the polarization of the nanoparticles takes on the knotted Hopfion structure present in seemingly disparate realms of the universe.

“The polarization lines intertwining themselves into a Hopfion structure may give rise to the material’s useful electronic properties, opening new routes for the design of ferroelectric-based energy storage devices and information systems,” said Valerii Vinokur, senior scientist and Distinguished Fellow in Argonne’s Materials Science division. “The discovery also highlights a repeated tendency in many areas of science.”

What (and where) in the world are Hopfions?

Topology, a subfield of mathematics, is the study of geometric structures and their properties. A Hopfion topological structure, first proposed by German mathematician Heinz Hopf in 1931, emerges in a wide range of physical constructs but is rarely explored in mainstream science. One of its defining characteristics is that any two lines within the Hopfion structure must be linked, constituting knots ranging in complexity from a few interconnected rings to a mathematical rat’s nest.
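
For the mathematically inclined, the structure is built on Hopf's 1931 map from the 3-sphere to the 2-sphere; the linked lines described above are its preimages. This is standard textbook material rather than something specific to the new study:

```latex
% The Hopf map sends a point (z_1, z_2) of the unit 3-sphere,
% |z_1|^2 + |z_2|^2 = 1 with z_1, z_2 complex, to a point of the 2-sphere:
h(z_1, z_2) = \bigl( 2\,\mathrm{Re}(z_1 \bar{z}_2),\; 2\,\mathrm{Im}(z_1 \bar{z}_2),\; |z_1|^2 - |z_2|^2 \bigr)

% The preimage of every point of the 2-sphere is a circle, and the preimages
% of any two distinct points are circles linked exactly once. A Hopfion is a
% field texture whose field lines reproduce this pattern of mutually linked
% loops, classified by an integer Hopf invariant.
```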

“The Hopfion is a very abstract mathematical concept,” said Vinokur, “but the structure shows up in hydrodynamics, electrodynamics and even in the packing of DNA and RNA molecules in biological systems and viruses.”

In hydrodynamics, the Hopfion appears in the trajectories of liquid particles flowing inside of a sphere. With friction neglected, the paths of the incompressible liquid particles are intertwined and connected. Cosmological theories also reflect Hopfion patterns. Some hypotheses suggest that the paths of every particle in the universe interweave themselves in the same Hopfion manner as the liquid particles in a sphere.

According to the current study, the polarization structure in a spherical ferroelectric nanoparticle takes on this same knotted swirl.

Simulating the swirl

The scientists created a computational approach that tamed polarization lines and enabled them to recognize the emerging Hopfion structures in a ferroelectric nanoparticle. The simulations leveraged unique, open-source computational codes, called Ferret and MOOSE, developed with funding from DOE initiatives at Argonne and DOE’s Idaho National Laboratory. The simulations, performed by researcher Yuri Tikhonov from the Southern Federal University and the University of Picardie, modeled the polarization within nanoparticles between 50 and 100 nanometers in diameter, a realistic size for ferroelectric nanoparticles in technological applications.

“When we visualized the polarization, we saw the Hopfion structure emerge,” said Igor Luk’yanchuck, a scientist from the University of Picardie. “We thought, wow, there is a whole world inside of these nanoparticles.”

Related video: "Simulation of Hopfion structure in ferroelectric nanoparticle," by Yuri Tikhonov, University of Picardie and Russia’s Southern Federal University, and Anna Razumnaya, Southern Federal University, revealing the Hopfion structure of polarization lines within a ferroelectric nanoparticle.

The polarization lines revealed by the simulation represent the directions of displacements between charges within atoms as they vary around the nanoparticle in a way that maximizes energy efficiency. Because the nanoparticle is confined to a sphere, the lines travel around it indefinitely, never terminating on — or escaping from — the surface. This behavior is parallel to the flow of an ideal fluid about a closed, spherical container.

The link between liquid flow and the electrodynamics displayed in these nanoparticles bolsters a long-theorized parallel. “When Maxwell developed his famous equations to describe the behavior of electromagnetic waves, he used the analogy between hydrodynamics and electrodynamics,” said Vinokur. “Scientists have since hinted at this relationship, but we demonstrated that there is a real, quantifiable connection between these concepts that is characterized by the Hopfion structure.”

The study’s findings establish the fundamental importance of Hopfions to the electromagnetic behavior of ferroelectric nanoparticles. The new insight could result in increased control of the advanced functionalities of these materials — such as their supercapacitance — for technological applications.

“Scientists often view properties of ferroelectrics as separate concepts that are highly dependent on chemical composition and treatment,” said Luk’yanchuck, “but this discovery may help describe many of these phenomena in a unifying, general way.”

Another possible technological advantage of these small-scale topological structures is in memory for advanced computing. Scientists are exploring the potential of ferroelectric materials for computational systems. Traditionally, the flippable polarization of the materials could enable them to store information in two separate states, generally referred to as 0 and 1. However, microelectronics made of ferroelectric nanoparticles might be able to leverage their Hopfion-shaped polarization to store information in more complex ways.

“Within one nanoparticle, you may be able to write much more information because of these topological phenomena,” said Luk’yanchuck. “Our theoretical discovery could be a groundbreaking step in the development of future neuromorphic computers that store information more organically, like the synapses in our brains.”

Future plans

To perform deeper studies into the topological phenomena within ferroelectrics, the scientists plan to leverage Argonne’s supercomputing capabilities. The scientists also plan to test the theoretical presence of Hopfions in ferroelectric nanoparticles using Argonne’s Advanced Photon Source (APS), a DOE Office of Science User Facility.

“We view these results as a first step,” said Vinokur. “Our intention is to study the electromagnetic behavior of these particles while considering the existence of Hopfions, as well as to confirm and explore its implications. For such small particles, this work can only be performed using a synchrotron, so we are fortunate to be able to use Argonne’s APS.”

Credit: 
DOE/Argonne National Laboratory

New type of coupled electronic-structural waves discovered in magnetite

image: Illustration of the newly discovered charge fluctuations in the trimeron order of magnetite triggered by a laser beam.

Image: 
Source: Ambra Garlaschelli and MIT

An international team of scientists uncovered exotic quantum properties hidden in magnetite, the oldest magnetic material known to mankind. The study reveals the existence of low-energy waves that indicate the important role of electronic interactions with the crystal lattice. This is another step toward fully understanding the metal-insulator phase transition mechanism in magnetite, and in particular toward learning about the dynamical properties and critical behavior of this material in the vicinity of the transition temperature.

Magnetite (Fe3O4) is a common mineral, whose strong magnetic properties were already known in ancient Greece. Initially, it was used mainly in compasses, and later in many other devices, such as data recording tools. It is also widely used in catalytic processes. Even animals benefit from the properties of magnetite in detecting magnetic fields - for example, birds are known to use it in navigation.

Physicists are also very interested in magnetite because around a temperature of 125 K it shows an exotic phase transition, named after the Dutch chemist Verwey. Historically, the Verwey transition was also the first metal-to-insulator phase transformation ever observed. During this extremely complex process, the electrical conductivity changes by as much as two orders of magnitude and a rearrangement of the crystal structure takes place. Verwey proposed a transformation mechanism based on the location of electrons on iron ions, which leads to the appearance of a periodic spatial distribution of Fe2+ and Fe3+ charges at low temperatures.

In recent years, structural studies and advanced calculations have confirmed the Verwey hypothesis, while revealing a much more complex pattern of charge distribution (16 non-equivalent positions of iron atoms) and proving the existence of orbital order. The fundamental components of this charge-orbital ordering are polarons - quasiparticles formed as a result of a local deformation of the crystal lattice caused by the electrostatic interaction of a charged particle (electron or hole) moving in the crystal. In the case of magnetite, the polarons take the form of trimerons, complexes made of three iron ions, where the inner atom has more electrons than the two outer atoms.

The new study, published in the journal Nature Physics, was carried out by scientists from many leading research centers around the world. Its purpose was to experimentally uncover the excitations involved in the charge-orbital order of magnetite and describe them by means of advanced theoretical methods. The experimental part was performed at MIT (Edoardo Baldini, Carina Belvin, Ilkem Ozge Ozel, Nuh Gedik); magnetite samples were synthesized at the AGH University of Science and Technology (Andrzej Kozlowski); and the theoretical analyses were carried out in several places: the Institute of Nuclear Physics of the Polish Academy of Sciences (Przemyslaw Piekarz, Krzysztof Parlinski), the Jagiellonian University and the Max Planck Institute (Andrzej M. Oles), the University of Rome "La Sapienza" (Jose Lorenzana), Northeastern University (Gregory Fiete), the University of Texas at Austin (Martin Rodriguez-Vega), and the Technical University in Ostrava (Dominik Legut).

"At the Institute of Nuclear Physics of the Polish Academy of Sciences, we have been conducting studies on magnetite for many years, using the first-principles calculation method," explains Prof. Przemyslaw Piekarz. "These studies have indicated that the strong interaction of electrons with lattice vibrations (phonons) plays an important role in the Verwey transition."

The scientists at MIT measured the optical response of magnetite in the extreme infrared for several temperatures. Then, they illuminated the crystal with an ultrashort laser pulse (pump beam) and measured the change in the far-infrared absorption with a delayed probe pulse. "This is a powerful optical technique that enabled us to take a closer view at the ultrafast phenomena governing the quantum world," says Prof. Nuh Gedik, head of the research group at MIT.

The measurements revealed the existence of low-energy excitations of the trimeron order, which correspond to charge oscillations coupled to a lattice deformation. The energy of two coherent modes decreases to zero when approaching the Verwey transition - indicating their critical behavior near this transformation. Advanced theoretical models allowed them to describe the newly discovered excitations as a coherent tunneling of polarons. The energy barrier for the tunneling process and other model parameters were calculated using density functional theory (DFT), based on the quantum-mechanical description of molecules and crystals. The involvement of these waves in the Verwey transition was confirmed using the Ginzburg-Landau model. Finally, the calculations also ruled out other possible explanations for the observed phenomenon, including conventional phonons and orbital excitations.
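
A rough way to see why a mode energy can collapse at the transition is a generic Landau-type argument (a schematic sketch of our own, not the specific Ginzburg-Landau analysis used in the paper):

```latex
% Schematic Landau free energy for an order parameter \eta describing the
% trimeron (charge-orbital) order, with positive constants a, b and the
% Verwey temperature T_V:
F(\eta) = \tfrac{a}{2}\,(T - T_V)\,\eta^{2} + \tfrac{b}{4}\,\eta^{4}

% Below T_V the ordered minimum sits at \eta_0^{2} = a\,(T_V - T)/b, where the
% curvature is F''(\eta_0) = 2a\,(T_V - T). Small oscillations about this
% minimum therefore soften on approach to the transition,
\omega \propto \sqrt{T_V - T} \longrightarrow 0 \quad \text{as } T \to T_V,
% which is the kind of critical behavior reported for the coherent trimeron
% modes.
```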

"The discovery of these waves is of key importance for understanding the properties of magnetite at low temperatures and the Verwey transition mechanism," say Dr. Edoardo Baldini and Ms. Carina Belvin of MIT, the lead authors of the article. "In a broader context, these results reveal that the combination of ultrafast optical methods and state-of-the-art calculations makes it possible to study quantum materials hosting exotic phases of matter with charge and orbital order."

The obtained results lead to several important conclusions. First, the trimeron order in magnetite has elementary excitations with a very low energy, absorbing radiation in the far-infrared region of the electromagnetic spectrum. Second, these excitations are collective fluctuations of charge and lattice deformations that exhibit critical behavior and are thus involved in the Verwey transition. Finally, the results shed new light on the cooperative mechanism and dynamical properties that lie at the origin of this complex phase transition.

"As for the plans for the future of our team, as part of the next stages of work we intend to focus on conducting theoretical calculations aimed at better understanding the observed coupled electronic-structural waves," concludes Prof. Piekarz.

The Henryk Niewodniczanski Institute of Nuclear Physics (IFJ PAN) is currently the largest research institute of the Polish Academy of Sciences. The broad range of studies and activities of IFJ PAN includes basic and applied research, ranging from particle physics and astrophysics, through hadron physics, high-, medium-, and low-energy nuclear physics, condensed matter physics (including materials engineering), to various applications of methods of nuclear physics in interdisciplinary research, covering medical physics, dosimetry, radiation and environmental biology, environmental protection, and other related disciplines. On average, IFJ PAN publishes more than 600 scientific papers each year in journals indexed in the Journal Citation Reports (Clarivate Analytics). Part of the Institute is the Cyclotron Centre Bronowice (CCB), an infrastructure unique in Central Europe that serves as a clinical and research centre in the area of medical and nuclear physics. IFJ PAN is a member of the Marian Smoluchowski Kraków Research Consortium "Matter-Energy-Future," which held the status of a Leading National Research Centre (KNOW) in physics for the years 2012-2017. In 2017 the European Commission granted the Institute the HR Excellence in Research award. The Institute holds an A+ category (the highest ranking in Poland) in the field of sciences and engineering.

Credit: 
The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Astronomers discover new class of cosmic explosions

Astronomers have found two objects that, added to a strange object discovered in 2018, constitute a new class of cosmic explosions. The new type of explosion shares some characteristics with supernova explosions of massive stars and with the explosions that generate gamma-ray bursts (GRBs), but still has distinctive differences from each.

The saga began in June of 2018 when astronomers saw a cosmic blast with surprising characteristics and behavior. The object, dubbed AT2018cow ("The Cow"), drew worldwide attention from scientists and was studied extensively. While it shared some characteristics with supernova explosions, it differed in important aspects, particularly its unusual initial brightness and how rapidly it brightened and faded in just a few days.

In the meantime, two additional blasts -- one from 2016 and one from 2018 -- also showed unusual characteristics and were being observed and analyzed. The two new explosions are called CSS161010 (short for CRTS CSS161010 J045834-081803), in a galaxy about 500 million light-years from Earth, and ZTF18abvkwla ("The Koala"), in a galaxy about 3.4 billion light-years distant. Both were discovered by automated sky surveys (Catalina Real-time Transient Survey, All-Sky Automated Survey for Supernovae, and Zwicky Transient Facility) using visible-light telescopes to scan large areas of sky nightly.

Two teams of astronomers followed up those discoveries by observing the objects with the National Science Foundation's Karl G. Jansky Very Large Array (VLA). Both teams also used the Giant Metrewave Radio Telescope in India and the team studying CSS161010 used NASA's Chandra X-ray Observatory. Both objects gave the observers surprises.

Anna Ho, of Caltech, lead author of the study on ZTF18abvkwla, immediately noted that the object's radio emission was as bright as that from a gamma-ray burst. "When I reduced the data, I thought I had made a mistake," she said.

Deanne Coppejans, of Northwestern University, led the study on CSS161010, which found that the object had launched an "unexpected" amount of material into interstellar space at more than half the speed of light. Her Northwestern co-author Raffaella Margutti said, "It took almost two years to figure out what we were looking at just because it was so unusual."

In both cases, the follow-up observations indicated that the objects shared features in common with AT2018cow. The scientists concluded that these events, called Fast Blue Optical Transients (FBOTs), represent, along with AT2018cow, a type of stellar explosion significantly different from others. The scientists reported their findings in papers in the Astrophysical Journal and the Astrophysical Journal Letters.

FBOTs probably begin, the astronomers said, the same way as certain supernovae and gamma-ray bursts -- when a star much more massive than the Sun explodes at the end of its "normal" nuclear fusion-powered life. The differences show up in the aftermath of the initial explosion.

In the "ordinary" supernova of this type, called a core-collapse supernova, the explosion sends a spherical blast wave of material into interstellar space. If, in addition to this, a rotating disk of material briefly forms around the neutron star or black hole left after the explosion and propels narrow jets of material at nearly the speed of light outward in opposite directions, these jets can produce narrow beams of gamma rays, causing a gamma-ray burst.

The rotating disk, called an accretion disk, and the jets it produces, are called an "engine" by astronomers.

FBOTs, the astronomers concluded, also have such an engine. In their case, unlike in gamma-ray bursts, it is enshrouded by thick material. That material probably was shed by the star just before it exploded, and may have been pulled from it by a binary companion.

When the thick material near the star is struck by the blast wave, it causes the bright visible-light burst soon after the explosion that initially made these objects appear so unusual. That bright burst also is why astronomers call these blasts "fast blue optical transients." This is one of the characteristics that distinguished them from ordinary supernovae.

As the blast wave from the explosion travels outward and collides with the material around the star, it produces radio emission. This very bright emission was the important clue that proved the explosion was powered by an engine.

The shroud of dense material "means that the progenitor star is different from those leading to gamma-ray bursts," Ho said. The astronomers said that in the Cow and in CSS161010, the dense material included hydrogen, something never seen in gamma-ray bursts.

Using the W.M. Keck Observatory, the astronomers found that both CSS 161010 and ZTF18abvkwla, like the Cow, are in small, dwarf galaxies. Coppejans said that the dwarf galaxy properties "might allow some very rare evolutionary paths of stars" that lead to these distinctive explosions.

Although a common element of the FBOTs is that all three have a 'central engine,' the astronomers caution that the engine also could be the result of stars being shredded by black holes, though they consider supernova-type explosions to be the more likely candidate.

"Observations of more FBOTs and their environments will answer this question," Margutti said.

To do that, the scientists say they will need to use telescopes covering a wide range of wavelengths, as they have done with the first three objects. "While FBOTs have proven rarer and harder to find than some of us were hoping, in the radio band they're also much more luminous than we'd guessed, allowing us to provide quite comprehensive data even on events that are far away," said Daniel Perley, of the Liverpool John Moores University.

Credit: 
National Radio Astronomy Observatory

Essential key to hearing sensitivity discovered

image: Jung-Bum Shin, PhD, of UVA's Department of Neuroscience, has shed light on the biological architecture that lets us hear - and on a genetic disorder that causes both deafness and blindness.

Image: 
Dan Addison | UVA Communications

New research from the University of Virginia School of Medicine is shedding light on the biological architecture that lets us hear - and on a genetic disorder that causes both deafness and blindness.

Sihan Li, a graduate student in the lab of Jung-Bum Shin, PhD, of UVA's Department of Neuroscience, has made a surprising discovery about how the hearing organ in mammals achieves its extraordinary sensitivity.

It was long suspected that tiny molecular motors maintain the proper tension in the so-called hair cell mechanoreceptors located in the inner ear. This tension is a key factor in how we detect sound, much as a taut fishing line signals a nibbling fish.

The research team led by Li and Shin demonstrated that maintaining this tension was the responsibility of a protein called Myosin-VIIa. They also found that there is not just one Myosin-VIIa but several - subtle variations that all play important roles. Problems with these protein "isoforms," as the variations are known, lead to hearing loss, Shin's team found. That speaks to the vital importance of these underappreciated variations in proteins.

"Our sense of hearing is incredibly sensitive, and our study identified a very important component in the underlying mechanism," Shin said. "Furthermore, we showed that the molecular machinery that enables hearing is much more complicated than we thought, with each protein having multiple sister forms that have distinct functions."

Genetic Hearing Loss

Myosin-VIIa is made by the gene MYO7A. Mutations in that gene cause a rare genetic disorder, Usher syndrome type 1. Children with the syndrome typically are born deaf and then suffer progressive vision loss. The discovery by the Shin lab will contribute towards a better understanding of this disease.

Shin and his team found that lab mice lacking proper Myosin-VIIa isoforms developed hearing loss. His work shows that the mice were able to develop hair cells, but their function was impaired and grew progressively worse. (Myosin-VIIa is also produced in the retina, the part of the eye that senses light. The Shin lab did not look at that, but his work might shed more light on how impairments in Myosin-VIIa affect vision as well.)

One of the great questions arising from the work, the researchers say, is exactly why the inner ear uses multiple isoforms of this protein. Finding those answers will help us understand an important aspect of our ability to hear, and it may one day help doctors develop new treatments for hearing loss.

"After all, the flip side of the extreme sensitivity of our hearing organ is that it is also very vulnerable to stress factors, such as noise and age. We have found one important mechanism by which the ear achieves its sensitivity," Shin said. "This will help us understand the harmful processes that lead to the loss of our hearing sensitivity with age or due to noise trauma, laying the foundation for the development of preventative and therapeutic strategies."

Credit: 
University of Virginia Health System

Scientists find optimal age of stem cells

image: Biophysicists from MIPT and Vladimirsky Moscow Regional Clinical Research Institute have determined the optimal age of reprogrammed stem cells suitable for restoring heart tissue. It spans the period roughly from day 15 until day 28 of maturation.

Image: 
Daria Sokol/MIPT Press Office

Biophysicists from the Moscow Institute of Physics and Technology and Vladimirsky Moscow Regional Clinical Research Institute have determined the optimal age of reprogrammed stem cells suitable for restoring heart tissue. It spans the period roughly from day 15 until day 28 of maturation. The research findings were published in Scientific Reports.

Induced pluripotent stem cells, or iPSCs, are used in regenerative medicine. They are derived from human blood cells through chemical "rejuvenation," and the resulting stem cells can then be differentiated into cells of various types. This makes it possible to restore tissue with cells that the body recognizes as its own.

It was previously believed that mature cells, aged more than two months, should be used to restore heart tissue. A team of researchers led by MIPT Professor Konstantin Agladze set out to test experimentally which reprogrammed stem cell age is the best for that purpose.

The biologists introduced iPSCs of different ages into human heart cell cultures and tested the quality of the resulting cardiac tissue. This involved optically mapping the behavior of the tissue under induced excitation waves. The test imitates the functioning of the cardiac muscle in the body. In order for the heart to contract correctly, the excitation wave needs to propagate across the cell ensemble consistently.

The cells introduced between days 15 and 28 of maturation proved to form a consolidated excitable system with the heart cells initially present in the culture. No such system emerged when the team waited until after day 28.

"We found that after day 28 of differentiation, the cells are no longer usable, because they do not merge into a homogeneous tissue with the heart cells. Adhesion does occur, but there is no unity, and the implanted cells are not functional," said Konstantin Agladze, who heads the Excitable Systems Biophysics Lab at MIPT.

The laboratory conducts fundamental research in the field of regenerative medicine, with a focus on cardiomyocytes -- the cells that make up the heart muscle. The team's work underlies recommendations for those implementing regenerative approaches, and the study reported here is important for identifying the "window of opportunity" when stem cells are best used in tissue restoration.

Credit: 
Moscow Institute of Physics and Technology

Astronomers create cloud atlas for hot, Jupiter-like exoplanets

image: Predicted cloud altitudes and compositions for a range of temperatures common on hot Jupiter planets. The range, in Kelvin, corresponds to about 800-3,500 degrees Fahrenheit, or 427-1,927 degrees Celsius.

Image: 
UC Berkeley image by Peter Gao

Giant planets in our solar system and circling other stars have exotic clouds unlike anything on Earth, and the gas giants orbiting close to their stars -- so-called hot Jupiters -- boast the most extreme.

A team of astronomers from the United States, Canada and the United Kingdom have now come up with a model that predicts which of the many types of proposed clouds, from sapphire to smoggy methane haze, to expect on hot Jupiters of different temperatures, up to thousands of degrees Kelvin.

Surprisingly, the most common type of cloud, expected over a large range of temperatures, should consist of liquid or solid droplets of silicon and oxygen, like melted quartz or molten sand. On cooler hot Jupiters, below about 950 Kelvin (1,250 degrees Fahrenheit), skies are dominated by a hydrocarbon haze, essentially smog.

The model will help astronomers studying the gases in the atmospheres of these strange and distant worlds, since clouds interfere with measurements of the atmospheric composition. It could also help planetary scientists understand the atmospheres of cooler giant planets and their moons, such as Jupiter and Saturn's moon Titan in our own solar system.

"The kinds of clouds that can exist in these hot atmospheres are things that we don't really think of as clouds in the solar system," said Peter Gao, a postdoctoral fellow at the University of California, Berkeley, who is first author of a paper describing the model that appeared May 25 in the journal Nature Astronomy. "There have been models that predict various compositions, but the point of this study was to assess which of these compositions actually matter and compare the model to the available data that we have."

The study takes advantage of a boom over the past decade in the study of exoplanet atmospheres. Though exoplanets are too distant and dim to be visible, many telescopes -- in particular, the Hubble Space Telescope -- are able to focus on stars and capture starlight passing through the atmospheres of planets as they pass in front of their stars. The wavelengths of light that are absorbed, revealed by spectroscopic measurements, tell astronomers which elements make up the atmosphere. To date, this technique and others have found or inferred the presence of water, methane, carbon monoxide and carbon dioxide, potassium and sodium gases and, in the hottest of the planets, vaporized aluminum oxide, iron and titanium.

But while some planets seem to have clear atmospheres and clear spectroscopic features, many have clouds that completely block the starlight filtering through, preventing the study of gases below the upper cloud layers. The compositions of the gases can tell astronomers how exoplanets form and whether the building blocks of life are present around other stars.

"We have found a lot of clouds: some kinds of particles -- not molecules, but small droplets -- that are hanging out in these atmospheres," Gao said. "We don't really know what they are made of, but they are contaminating our observations, essentially making it more difficult for us to assess the composition and abundances of important molecules, like water and methane."

Ruby clouds

To explain these observations, astronomers have proposed many strange types of clouds, composed of aluminum oxides, such as corundum, the stuff of rubies and sapphires; molten salt, such as potassium chloride; silicon oxides, or silicates, like quartz, the main component of sand; sulfides of manganese or zinc that exist as rocks on Earth; and organic hydrocarbon compounds. The clouds could be liquid or solid aerosols, Gao said.

Gao adapted computer models initially created for Earth's water clouds and subsequently extended to the cloudy atmospheres of planets like Jupiter, which has ammonia and methane clouds. He expanded the model even further to the much higher temperatures seen on hot gas giant planets -- up to 2,800 Kelvin, or 4,600 degrees Fahrenheit (2,500 degrees Celsius) -- and the elements likely to condense into clouds at these temperatures.

The model takes into account how gases of various atoms or molecules condense into droplets, how these droplets grow or evaporate and whether they are likely to be transported in the atmosphere by winds or updrafts, or sink because of gravity.
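
As a highly simplified illustration of the first of those steps (a sketch under our own assumptions, with placeholder numbers; the published model is far more detailed), a species is expected to condense at a given level of the atmosphere when its partial pressure exceeds its saturation vapor pressure, which drops steeply as the gas cools:

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def p_sat(T, p_ref, T_ref, latent_heat):
    """Saturation vapor pressure (Pa) from a Clausius-Clapeyron form:
    p_sat(T) = p_ref * exp(-(L/R) * (1/T - 1/T_ref))."""
    return p_ref * math.exp(-latent_heat / R * (1.0 / T - 1.0 / T_ref))

def condenses(partial_pressure, T, p_ref, T_ref, latent_heat):
    """A cloud can form where the species' partial pressure exceeds its
    saturation vapor pressure (i.e., the gas is supersaturated)."""
    return partial_pressure > p_sat(T, p_ref, T_ref, latent_heat)

# Hypothetical "silicate-like" condensible; all parameter values are made up
# purely to show the trend of condensation at cooler temperatures.
for T in (800, 1200, 1600, 2000, 2400):  # kelvin
    print(T, condenses(partial_pressure=1.0,   # Pa of the condensible gas
                       T=T,
                       p_ref=1.0e5,            # Pa at the reference temperature
                       T_ref=2200.0,           # K
                       latent_heat=5.0e5))     # J/mol
```

The full model then follows the droplets themselves: how fast they nucleate and grow, whether updrafts keep them aloft or gravity pulls them down, and how that balance sets the predicted cloud altitudes and compositions.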

"The idea is that the same physical principles guide the formation of all types of clouds," said Gao, who has also modeled sulfuric acid clouds on Venus. "What I have done is to take this model and bring it out to the rest of the galaxy, making it able to simulate silicate clouds and iron clouds and salt clouds."

He then compared his predictions to available data on 30 exoplanets out of a total of about 70 transiting exoplanets with recorded transmission spectra to date.

The model revealed that many of the exotic clouds proposed over the years are difficult to form because the energy required to condense the gases is too high. Silicate clouds condense easily, however, and dominate over an 1,100-degree Kelvin range of temperatures: from about 900 to 2,000 Kelvin. That's a range of about 2,000 degrees Fahrenheit.

According to the model, in the hottest atmospheres, aluminum oxides and titanium oxides condense into high-level clouds. In exoplanets with cooler atmospheres, those clouds form deeper in the planet and are obscured by higher silicate clouds. On even cooler exoplanets, these silicate clouds also form deeper in the atmosphere, leaving clear upper atmospheres. At even cooler temperatures, ultraviolet light from the exoplanet's star converts organic molecules like methane into extremely long hydrocarbon chains that form a high-level haze akin to smog. This smog can obscure lower-lying salt clouds of potassium or sodium chloride.

For those astronomers seeking a cloudless planet to more easily study the gases in the atmosphere, Gao suggested focusing on planets between about 900 and 1,400 Kelvin, or those hotter than about 2,200 Kelvin.

"The presence of clouds has been measured in a number of exoplanet atmospheres before, but it is when we look collectively at a large sample that we can pick apart the physics and chemistry in the atmospheres of these worlds," said co-author Hannah Wakeford, an astrophysicist at the University of Bristol in the U.K. "The dominant cloud species is as common as sand -- it is essentially sand -- and it will be really exciting to be able to measure the spectral signatures of the clouds themselves for the first time with the upcoming James Webb Space Telescope (JWST)."

Future observations, such as those by NASA's JWST, scheduled for launch within a few years, should be able to confirm these predictions and perhaps shed light on the hidden cloud layers of planets closer to home. Gao said that similar exotic clouds may exist at depths within Jupiter or Saturn where the temperatures are close to those found on hot Jupiters.

"Because there are thousands of exoplanets versus just one Jupiter, we can study a bunch of them and see what the average is and how that compares to Jupiter," Gao said.

He and his colleagues plan to test the model against observational data from other exoplanets and also from brown dwarfs, which are basically gas giant planets so massive they're almost stars. They, too, have clouds.

"In studying planetary atmospheres in the solar system, we typically have the context of images. We have no such luck with exoplanets. They are just dots or shadows," said Jonathan Fortney of UC Santa Cruz. "That's a huge loss in information. But what we do have to make up for that is a much larger sample size. And that allows us to look for trends -- here, a trend in cloudiness -- with planetary temperature, something that we just don't have the luxury of in our solar system."

Credit: 
University of California - Berkeley

Produce-buying incentive program a win-win for Oregon consumers and farmers

A national program that offers financial incentives so that low-income consumers can purchase more fruits and vegetables has shown great success in Oregon, according to a recent Oregon State University study.

The Double Up Food Bucks program is one of many produce-incentive programs that pair with SNAP, the Supplemental Nutrition Assistance Program commonly referred to as food stamps. For every dollar SNAP recipients spend on eligible foods at participating farmers markets and grocery stores, they get an additional dollar they can put toward more Oregon-grown fruits and vegetables.

In OSU's study, 91% of program participants surveyed reported buying more fruits and vegetables. Nearly 70% reported eating less processed food; 81% said they had more food available at home; and nearly 88% said they felt healthier because they were eating more fresh produce. The study analyzed survey data from 1,223 people at 42 farmers markets across Oregon.

"I think this evaluation demonstrates that this program works for low-income consumers, and it's great for farmers," said study author Stephanie Grutzmacher, an assistant professor in OSU's College of Public Health and Human Sciences.

Grassroots programs to incentivize low-income shoppers to purchase locally grown produce at farmers markets began around 2006 in locations such as Washington, D.C., New York and Michigan, Grutzmacher said. The U.S. Department of Agriculture liked the idea and started to provide grants for new programs. Oregon received one of those grants in 2015 to implement a statewide program, and since then the state Legislature has provided money to continue the program, though that funding is not permanent.

The study also examined people's perceptions of farmers markets and how those perceptions affected their experience with Double Up Food Bucks. Perceptions varied significantly between different demographics.

"A lot of people have a really wide range of experiences with farmers markets; some people perceive them to be really homogenous spaces for well-to-do people, and others perceive them to be really community-centered and accessible," Grutzmacher said. "Both of those things can be true, and everything in between can be true."

Adults ages 55 and older were more likely to view farmers markets as more expensive than where they normally shopped, which meant they saw less value in the Double Up Food Bucks voucher than people who considered farmers markets affordable.

Though the study wasn't able to analyze shoppers by specific ethnicity, non-white shoppers were more likely to report overall health improvements than white shoppers.

Eating more fresh fruits and vegetables "is one of those 'should' things that people carry around a lot," Grutzmacher said. "I think when this program makes that produce more accessible to them and gives them more purchasing power, people are able to cross a 'should' off their list and they're able to roll that into their perception that their health is better."

To address the disparity in perception and experience, Grutzmacher would like to see program organizers at both the market and state level design targeted outreach strategies to increase participation among groups of people who don't perceive the vouchers to be as useful.

For people who are not familiar with farmers market pricing or style, she said, organizers could host market tours to help them learn which vegetables are in season or how to make the most cost-effective choices. Educators could even hold cooking demonstrations at the market. She pointed to the example of SNAP To It, a program started a few years ago by the OSU Extension Service in Clackamas County to lead monthly tours during market season.

"I think they can really shape people's perceptions of things like affordability by providing extra educational resources at the market," Grutzmacher said.

The study used survey data collected by Farmers Market Fund, which runs the Double Up Food Bucks program under the leadership of executive director Molly Notarianni. The lead author was OSU global health Ph.D. student Briana Rockler, who worked under professor Ellen Smit and Grutzmacher.

Credit: 
Oregon State University

Flow-through electrodes make hydrogen 50 times faster

image: An example of the small flow-through electrode that Duke researchers used to produce hydrogen from electrolysis more rapidly, shown with a penny for scale.

Image: 
Wiley Lab, Duke University

DURHAM, N.C. -- Electrolysis, passing a current through water to break it into gaseous hydrogen and oxygen, could be a handy way to store excess energy from wind or solar power. The hydrogen can be stored and used as fuel later, when the sun is down or the winds are calm.

Unfortunately, without some kind of affordable energy storage like this, billions of watts of renewable energy are wasted each year.

For hydrogen to be the solution to the storage problem, water-splitting electrolysis would have to be much more affordable and efficient, said Ben Wiley, a professor of chemistry at Duke University. And he and his team have some ideas about how to accomplish that.

Wiley and his lab recently tested three new materials that might be used as a porous, flow-through electrode to improve the efficiency of electrolysis. Their goal was to increase the surface area of the electrode for reactions, while avoiding trapping the gas bubbles that are produced.

"The maximum rate at which hydrogen is produced is limited by the bubbles blocking the electrode - literally blocking the water from getting to the surface and splitting," Wiley said.

In a paper appearing May 25 in Advanced Energy Materials, they compared three different configurations of a porous electrode through which the alkaline water can flow as the reaction occurs.

They fabricated three kinds of flow-through electrodes, each a 4 millimeter square of sponge-like material, just a millimeter thick. One was made of a nickel foam, one was a 'felt' made of nickel microfibers, and the third was a felt made of nickel-copper nanowires.

Pulsing current through the electrodes for five minutes on, five minutes off, they found that the felt made of nickel-copper nanowires initially produced hydrogen more efficiently because it had a greater surface area than the other two materials. But within 30 seconds, its efficiency plunged because the material got clogged with bubbles.

The nickel foam electrode was best at letting the bubbles escape, but it had a significantly lower surface area than the other two electrodes, making it less productive.

The sweet spot turned out to be a felt of nickel microfiber that produced more hydrogen than the nanowire felt, despite having 25 percent less surface area for the reaction.

Over the course of a 100-hour test, the microfiber felt produced hydrogen at a current density of 25,000 milliamps per square centimeter. At that rate, it would be 50 times more productive than the conventional alkaline electrolyzers currently in use for water electrolysis, the researchers calculated.
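
As a back-of-the-envelope check on what a sustained 25,000 milliamps per square centimeter implies (our own arithmetic, assuming ideal Faraday efficiency, not a figure from the paper), Faraday's law converts the current density directly into a hydrogen production rate:

```python
# Each H2 molecule requires 2 electrons, so the molar production rate per
# unit electrode area is j / (2 * F).
FARADAY = 96485.0          # C per mole of electrons
M_H2 = 2.016               # g per mole of H2
MOLAR_VOLUME_STP = 22.414  # L per mole of ideal gas at STP

j = 25.0  # A/cm^2, i.e. 25,000 mA per square centimeter

mol_rate = j / (2.0 * FARADAY)                        # mol H2 per s per cm^2
print(f"{mol_rate:.2e} mol/(s*cm^2)")                 # about 1.3e-4
print(f"{mol_rate * M_H2 * 3600:.2f} g/(h*cm^2)")     # about 0.9
print(f"{mol_rate * MOLAR_VOLUME_STP * 3600:.1f} L/(h*cm^2) at STP")  # about 10
```

In other words, each square centimeter of such an electrode would generate roughly a gram of hydrogen per hour if every electron went into splitting water.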

The cheapest way to make industrial quantities of hydrogen right now isn't by splitting water, but by breaking natural gas (methane) apart with very hot steam - an energy-intensive approach that creates 9 to 12 tons of CO2 for every ton of hydrogen it yields, not including the energy needed to create 1,000-degree Celsius steam.

Wiley said commercial producers of water electrolyzers may be able to make improvements in the structure of their electrodes based on what his team has learned. If they could greatly increase the hydrogen production rate, the cost of hydrogen produced from splitting water could go down, perhaps even enough to make it an affordable storage solution for renewable energy.

He is also working with a group of students in Duke's Bass Connections program who are exploring whether flow-through electrolysis might be scaled up to make hydrogen from India's abundant solar power.

Credit: 
Duke University

New 5G switches bring better battery life, higher bandwidth and speeds

The 5G revolution has begun, and the first lines of phones that can access the next generation of wireless speeds have already hit the shelves. Researchers at The University of Texas at Austin and the University of Lille in France have built a new component that will more efficiently allow access to the highest 5G frequencies in a way that increases devices' battery life and speeds up how quickly we can do things like stream high-definition media.

Smartphones are loaded with switches that perform a number of duties. One major task is jumping between networks and spectrum frequencies: 4G, Wi-Fi, LTE, Bluetooth, etc. The current radio-frequency (RF) switches that perform this task are always running, consuming precious processing power and battery life.

"The switch we have developed is more than 50 times more energy efficient compared to what is used today," said Deji Akinwande, a professor in the Cockrell School of Engineering's Department of Electrical and Computer Engineering who led the research. "It can transmit an HDTV stream at a 100 gigahertz frequency, and that is unheard of in broadband switch technology."

Akinwande and his research team published their findings today in the journal Nature Electronics.

"It has become clear that the existing switches consume significant amounts of power," Akinwande said. "And that power consumed is useless power."

The new switches stay off, saving battery life for other processes, unless they are actively helping a device jump between networks. They have also shown the ability to transmit data well above the baseline for 5G-level speeds.

The U.S. Defense Advanced Research Projects Agency (DARPA) has for years pushed for the development of "near-zero-power" RF switches. Previous researchers have found success at the low end of the 5G spectrum - where speeds are slower but data can travel longer distances. But this is the first switch that can function across the spectrum, from low-end gigahertz (GHz) frequencies to high-end terahertz (THz) frequencies that could someday be key to the development of 6G.

The UT team's switches use the nanomaterial hexagonal boron nitride (hBN). It is an emerging nanomaterial from the same family as graphene, the so-called wonder material. The structure of the switch involves a single layer of boron and nitrogen atoms in a honeycomb pattern, which Akinwande said is almost 1 million times thinner than a human hair, sandwiched between a pair of gold electrodes.

The impact of these switches extends beyond smartphones. Satellite systems, smart radios, reconfigurable communications, the "internet of things" and defense technology are all examples of other potential uses for the switches.

"Radio-frequency switches are pervasive in military communication, connectivity and radar systems," said Dr. Pani Varanasi, division chief of the materials science program at the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory that helped fund the project. "These new switches could provide large performance advantage compared to existing components and can enable longer battery life for mobile communication, and advanced reconfigurable systems."

This research spun out of a previous project that created the thinnest memory device ever produced, also using hBN. Akinwande said sponsors encouraged the researchers to find other uses for the material, and that led them to pivot to RF switches.

Credit: 
University of Texas at Austin

Stimulating immune cleanup crew offers a possibility for treating rare disorder

Owing to a rare genetic mutation, individuals with leukocyte adhesion deficiency type 1 (LAD1) experience a suite of symptoms that trace back to an immune system dysfunction. Many have recurrent bacterial infections and gum disease so serious that they often lose their teeth at an early age.

In a new study in the Journal of Leukocyte Biology, School of Dental Medicine researchers and colleagues identified a novel strategy for addressing an underlying cause of LAD1 patients' symptoms. In healthy people, the clearance of dying neutrophils, a "sweeping up" process known as efferocytosis, triggers resolution and regeneration in formerly inflamed tissues. LAD1 patients lack neutrophils in their tissues and therefore cannot resolve inflammation through the efferocytosis process. To tamp down this inflammation, the scientists used a molecule to mimic the effects of efferocytosis that are absent in LAD1. As a result, in an animal model of LAD1 disease, the treatment not only prevented gum inflammation and dental-bone loss, it actively promoted bone regeneration and a healthy tissue environment.

"This treatment may not only be relevant for LAD1," says George Hajishengallis, senior author on the work and a professor at Penn Dental Medicine. "There are other conditions where we see poor or no accumulation of neutrophils in the gingiva, such as in Papillon-Lefèvre or Chediak-Higashi syndrome. There are also several conditions that cause neutropenia, where neutrophil numbers are low, such as congenital or autoimmune-related neutropenia, or cancer- and HIV-associated neutropenia, where offering a treatment to mimic efferocytosis may also have beneficial effects."

The underlying cause of LAD1 is a defect in the recruitment of neutrophils, a type of white blood cell that is a crucial part of the immune system. LAD1 patients have neutrophils that cannot exit the bloodstream to travel to tissues. Because of this, they have leukocytosis, a high white blood cell count, as neutrophils get "stuck" in the blood.

Historically, scientists believed that LAD1 patients had severe gum disease because their lack of neutrophils in the gingival tissue caused bacterial infections leading to inflammation.

"But when we analyzed tissues, especially from human patients, we didn't see any tissue-invasive infections that would justify the destruction we saw there," says Hajishengallis.

Together with his collaborators, Hajishengallis began to wonder whether the problem with the lack of neutrophils in the tissues may not be a defect in bacterial surveillance but with efferocytosis, which occurs when another type of immune cell, macrophages, enter tissues to "eat up" dying neutrophils. Earlier research he had done on efferocytosis in periodontitis, a severe gum disease, impressed upon him that the process is an active one. Rather than just a cleaning up of the tissues, macrophages switch from promoting inflammation to curbing it, secreting factors that encourage tissues to return to a normal, healthy state.

To see whether a lack of efferocytosis might be a concern in LAD1 patients, the researchers first blocked the process using antibodies that inhibit receptors that otherwise promote efferocytosis. Blocking one of these receptors, the c-Mer receptor tyrosine kinase, in normal mice resulted in inflammation similar to that seen in the LAD1 mouse model.

The researchers' confidence that a lack of efferocytosis might be a culprit in LAD1 increased when transferring normal neutrophils to LAD1 mice resolved inflammation in the animals.

"This still doesn't prove that efferocytosis was the key," Hajishengallis says. "The neutrophils could have just eaten bacteria that were causing inflammation."

Stronger evidence for the role of efferocytosis came when the researchers repeated the experiment, but also transferred an antibody against the c-Mer receptor tyrosine kinase to block efferocytosis. These LAD1 mice didn't experience the protective effect of the neutrophil transfer.

Taking their finding one step further, they sought to mimic efferocytosis with a small molecule, to compensate for the fact that LAD1 patients lack neutrophils in their tissues. Knowing that dying neutrophils release compounds that activate the receptors LXR and PPAR, which are associated with the resolution of inflammation, the researchers delivered small-molecule compounds that likewise activate these receptors to the gums of LAD1 mice.

The compounds alleviated signs of inflammation and blocked further bone loss in the mouth, and, though the treatment lasted only three days, the research team saw signs that bone was actually regenerating in the treated animals.

In earlier work, Hajishengallis of Penn's School of Dental Medicine had been part of a collaboration that identified an underlying cause of the abundant inflammation seen in LAD1 patients: high levels of the pro-inflammatory signaling molecule IL-23 and its downstream target IL-17. Blocking IL-23 with an antibody helped resolve the disease.

But antibodies are expensive and, if side effects arise, can take time to leave the body. He and his colleagues believe the small-molecule compounds that mimic efferocytosis could achieve the same effect in a way that may be both safer and more affordable.

"These molecules, because they're not inhibiting any important functions but are rather inducing pro-resolving pathways, are less likely to cause negative effects than other strategies," says Tetsuhiro Kajikawa, the lead author of the study. "The challenge now is practical: How will these be administered?"

Administering the treatment locally, to the gums alone, could be a challenge, but since LAD1 patients have a number of other issues besides gum disease, "systemic administration could be very appropriate and even beneficial," Hajishengallis says.

Credit: 
University of Pennsylvania

Privacy flaws in security and doorbell cameras discovered by Florida Tech student

Ring, Nest, SimpliSafe and eight other manufacturers of internet-connected doorbell and security cameras have been alerted to "systemic design flaws" discovered by Florida Tech computer science student Blake Janes that allow a shared account that appears to have been removed to actually remain in place, with continued access to the video feed.

Janes discovered that the mechanism for removing user accounts does not work as intended on many camera systems: accounts that are actively in use are not actually removed. This could allow "malicious actors" to exploit the flaw and retain access to the camera system indefinitely, covertly recording audio and video in a substantial invasion of privacy or as a means of electronic stalking.

The findings were presented in the paper "Never Ending Story: Authentication and Access Control Design Flaws in Shared IoT Devices," by Janes and two Florida Tech faculty members from the university's top institute for cybersecurity research, the L3Harris Institute for Assured Information: Terrence O'Connor, program chair of cybersecurity, and Heather Crawford, assistant professor in computer engineering and sciences.

Janes' work informed vendors about the vulnerabilities and offered several strategies to remediate the underlying problem. In recognition of the work's importance, Google awarded him a $3,133 "bug bounty" for identifying a flaw in the Nest series of devices. Other vendors, including Samsung, have been communicating with Janes about recommended solutions to fix the vulnerability.

The flaw is concerning in cases where, for example, two partners share a residence and then divorce. Each has a smartphone app that accesses the same camera. Person A removes Person B's access to the camera, but that removal is never relayed to Person B's device. Person B therefore still has access, even though access has been revoked on the camera and on Person A's smartphone and the account password has been changed.

The Florida Tech team found that this happens largely because the decisions about whether to grant access are made in the cloud rather than locally, on either the camera or the smartphones involved. Manufacturers prefer this approach because it lets cameras transmit data without every camera needing to connect directly to every smartphone.
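A minimal sketch of that design pattern, written as hypothetical Python rather than any vendor's actual code, shows how revocation can fail when access is checked only at login and streaming requests trust a cached session. The class and method names (CloudAccessService, grant, log_in, revoke, can_stream) are invented for illustration.

# Illustrative sketch only: a simplified, hypothetical model of the flawed
# pattern described above, not any manufacturer's real implementation.
import secrets

class CloudAccessService:
    def __init__(self):
        self.acl = {}       # camera_id -> set of authorized user ids
        self.sessions = {}  # session token -> (user_id, camera_id)

    def grant(self, camera_id, user_id):
        self.acl.setdefault(camera_id, set()).add(user_id)

    def log_in(self, camera_id, user_id):
        # Access is checked once, at login, and a long-lived token is issued.
        if user_id in self.acl.get(camera_id, set()):
            token = secrets.token_hex(16)
            self.sessions[token] = (user_id, camera_id)
            return token
        return None

    def revoke(self, camera_id, user_id):
        # FLAW: revocation only edits the access list; it never invalidates
        # session tokens that were issued before the revocation.
        self.acl.get(camera_id, set()).discard(user_id)

    def can_stream(self, token, camera_id):
        # FLAW: the streaming check trusts the cached session instead of
        # re-checking the current access list.
        session = self.sessions.get(token)
        return session is not None and session[1] == camera_id

cloud = CloudAccessService()
cloud.grant("front-door", "person_b")
token_b = cloud.log_in("front-door", "person_b")
cloud.revoke("front-door", "person_b")          # Person A removes Person B
print(cloud.can_stream(token_b, "front-door"))  # True: Person B still has access

In this toy model, an obvious fix is to re-check the current access list on every streaming request, or to invalidate a user's outstanding sessions whenever their access is revoked; the researchers' actual remediation strategies are detailed in their paper.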

Additionally, manufacturers designed their systems so users would not have to repeatedly respond to access requests, which could become annoying and lead them to turn off that security check, were it in place, or abandon the camera altogether.

The security problem is further compounded by the fact that a malicious actor does not need advanced hacking tools to carry out this invasion; the attack is achievable from the devices' existing companion applications.

"Our analysis identified a systemic failure in device authentication and access control schemes for shared Internet of Things ecosystems," the paper concluded. "Our study suggests there is a long road ahead for vendors to implement the security and privacy
of IoT produced content."

The devices where flaws were found are: Blink Camera, Canary Camera, D-Link Camera, Geeni Mini Camera, Doorbell and Pan/Tilt Camera, Merkury Camera, Momentum Axel Camera, Nest Camera Current and Doorbell Current, NightOwl Doorbell, Ring Pro Doorbell Current and Standard Doorbell Current, SimpliSafe Camera and Doorbell, and TP-Link Kasa Camera.

Though fixes will originate with the manufacturers, if you have one of the aforementioned cameras, it is important to update to the current firmware. Additionally, customers concerned about their privacy after removing additional users should always change their passwords and power cycle their cameras.

The paper is available under the Publications section at https://research.fit.edu/iot/.

Credit: 
Florida Institute of Technology

COVID-19 news from Annals of Internal Medicine

Below please find a summary and link(s) of new coronavirus-related content published today in Annals of Internal Medicine. The summary below is not intended to substitute for the full article as a source of information. A collection of coronavirus-related content is free to the public at http://go.annals.org/coronavirus.

Mental Health Treatment for Front-Line Clinicians During and After COVID-19

The COVID-19 pandemic has placed front-line health care professionals--who were already at higher risk for negative effects of chronic stress before the pandemic--at even greater risk for depression and anxiety. A commentary from Vanderbilt University Medical Center and Veterans Affairs Tennessee Valley Health System discusses how health care professionals can support one another and care for their own mental health, including seeking help from mental health colleagues when needed. Read the full text: https://www.acpjournals.org/doi/10.7326/M20-2440.

Media contacts: A PDF for this article is not yet available. Please click the link to read full text. The lead author, Warren D. Taylor, MD, MHSc, can be reached at warren.d.taylor@vanderbilt.edu.

Credit: 
American College of Physicians

Study finds electrical fields can throw a curveball

MIT researchers have discovered a phenomenon that could be harnessed to control the movement of tiny particles floating in suspension. This approach, which requires simply applying an external electric field, may ultimately lead to new ways of performing certain industrial or medical processes that require separation of tiny suspended materials.

The findings are based on an electrokinetic version of the phenomenon that gives curveballs their curve, known as the Magnus effect. Zachary Sherman PhD '19, who is now a postdoc at the University of Texas at Austin, and MIT professor of chemical engineering James Swan describe the new phenomenon in a paper published in the journal Physical Review Letters.

The Magnus effect causes a spinning object to be pulled in a direction perpendicular to its motion, as in the curveball; it is based on aerodynamic forces and operates at macroscopic scales -- i.e. on easily visible objects -- but not on smaller particles. The new phenomenon, induced by an electric field, can propel particles down to nanometer scales, moving them along in a controlled direction without any contact or moving parts.
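As a point of reference, the textbook form of the aerodynamic Magnus force (a standard expression, not a formula quoted from the new paper) is

\mathbf{F}_{\mathrm{Magnus}} \propto \boldsymbol{\omega} \times \mathbf{v},

where \boldsymbol{\omega} is the object's spin (angular velocity) and \mathbf{v} is its velocity; the cross product makes the force perpendicular to both, which is what bends a curveball's path.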

The discovery came about as a surprise. Sherman was testing new simulation software he was developing to model the interactions of tiny nanoscale particles in magnetic and electric fields. The test case he was studying involved placing charged particles in an electrolytic liquid, a liquid containing ions, that is, charged atoms or molecules.

It was known, he says, that when charged particles just a few tens to hundreds of nanometers across are placed in such a liquid, they remain suspended in it rather than settling, forming a colloid. Ions then cluster around the particles. The new software successfully simulated this ion clustering. Next, he simulated an electric field across the material. This would be expected to induce a process called electrophoresis, which propels the particles along in the direction of the applied field. Again, the software correctly simulated the process.
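For context, the standard Smoluchowski expression for electrophoretic drift (a textbook relation, not a result reported in this study) is

\mathbf{u} = \frac{\varepsilon \zeta}{\eta}\,\mathbf{E},

where \varepsilon is the permittivity of the liquid, \zeta is the particle's zeta potential, \eta is the liquid's viscosity and \mathbf{E} is the applied field; the drift is parallel to the field, in contrast to the perpendicular motion described next.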

Then Sherman decided to push it further, and gradually increased the strength of the electric field. "But then we saw this funny thing," he says. "If the field was strong enough, you would get normal electrophoresis for a tiny bit, but then the colloids would spontaneously start spinning." And that's where the Magnus effect comes in.

Not only were the particles spinning in the simulations as they moved along, but "those two motions coupled together, and the spinning particle would veer off of its path," he says. "It's kind of strange, because you apply a force in one direction, and then the thing moves in an orthogonal [right-angle] direction to what you've specified." It's directly analogous to what happens aerodynamically with spinning balls, he says. "If you throw a curveball in baseball, it goes in the direction you threw it, but then it also veers off. So this is a kind of a microscopic version of that well-known macroscopic Magnus effect."

When the applied field was strong enough, the charged particles took on a strong motion in the direction perpendicular to the field. This could be useful, he says, because with electrophoresis "the particle moves toward one of the electrodes, and you run into this problem where the particle will move and then it will run into the electrode, and it'll stop moving. So you can't really generate a continuous motion with just electrophoresis."

Instead, since this new effect goes at right angles to the applied field, it could be used for example to propel particles along a microchannel, simply by placing electrodes on the top and bottom. That way, he says, the particle will "just move along the channel, and it will never bump into the electrodes." That makes it, he says, "actually a more efficient way to direct the motion of microscopic particles."

There are two kinds of processes where this ability might come in handy, he says. One is to use the particle to deliver some sort of "cargo" to a specific location. For example, the particle could be attached to a therapeutic drug "and you're trying to get it to a target site that needs that drug, but you can't get the drug there directly," he says. Or the particle might contain some sort of chemical reactant or catalyst that needs to be directed to a specific channel to carry out its desired reaction.

The other example is sort of the inverse of that process: picking up some kind of target material and bringing it back. For example, a chemical reaction to generate a product might also generate a lot of unwanted byproducts. "So you need a way to get a product out," he says. These particles can be used to capture the product and then be extracted using the applied electric field. "In this way they kind of act as little vacuum cleaners," he says. "They pick up the thing you want, and then you can move them somewhere else, and then release the product where it's easier to collect."

He says this effect should apply to a wide array of particle sizes and materials, and the team will continue to study how different material properties affect the rotation and translation speeds the effect produces. The basic phenomenon should apply to virtually any combination of particle and suspending liquid, as long as the two differ in an electrical property called the dielectric constant.

The researchers looked at materials with a very high dielectric constant, such as metal particles, suspended in a much lower-conducting electrolyte, such as water or oils. "But you might also be able to see this with any two materials that have a contrast" in dielectric constant, Sherman says, for example with two oils that don't mix and thus form suspended droplets.
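One standard way to quantify that contrast, taken from dielectric theory rather than from the article itself, is the Clausius-Mossotti factor

K = \frac{\varepsilon_p - \varepsilon_m}{\varepsilon_p + 2\varepsilon_m},

where \varepsilon_p and \varepsilon_m are the dielectric constants of the particle and the surrounding medium. The induced-dipole response, and with it effects of this kind, vanishes when K = 0, that is, when the two materials match; highly polarizable metal particles in water or oil sit near the high-contrast extreme.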

Credit: 
Massachusetts Institute of Technology