Tech

New technique could streamline design of intricate fusion device

image: PPPL physicist Caoxiang Zhu

Image: 
Elle Starkman / PPPL Office of Communications

Stellarators, twisty machines that house fusion reactions, rely on complex magnetic coils that are challenging to design and build. Now, a physicist at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) has developed a mathematical technique to help simplify the design of the coils, making stellarators potentially more cost-effective facilities for producing fusion energy.

"Our main result is that we came up with a new method of identifying the irregular magnetic fields produced by stellarator coils," said physicist Caoxiang Zhu, lead author of a paper reporting the results in Nuclear Fusion. "This technique can let you know in advance which coil shapes and placements could harm the plasma's magnetic confinement, promising a shorter construction time and reduced costs."

Fusion, the power that drives the sun and stars, is the fusing of light elements in the form of plasma -- the hot, charged state of matter composed of free electrons and atomic nuclei -- that generates massive amounts of energy. Twisty, cruller-shaped stellarators are an alternative to doughnut-shaped tokamaks that are more commonly used by scientists seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

A key benefit of stellarators is that they produce highly stable plasmas that are less prone to the damaging disruptions tokamaks can incur. But the complexity of stellarator coils has been a factor holding back development of such facilities.

The coils of a stellarator must be constructed and arranged around the vacuum chamber very precisely, since deviations from the best coil arrangement create bumps and wiggles in the magnetic field that degrade the magnetic confinement and allow the plasma to escape. These problematic magnetic fields can easily be caused by misplacement of the magnetic coils, so engineers stipulate strict tolerances for these components.

"The big challenge of building stellarators is figuring out how to make them simply and economically," said PPPL Chief Scientist Michael Zarnstorff. "Zhu's research is important because he is trying to look more carefully and quantitatively at some of the drivers of the cost. His results suggest that we can simplify the construction of stellarators and thereby make them easier and less expensive to build, by not insisting on tight tolerances for things that don't matter."

In the past, scientists have used computer simulations to determine which coil placements would be best, checking the plasma's reactions to all possible magnetic configurations before the stellarator was built. But because there are many ways for the coils to vary, "this approach requires massive computation resources and man-hours," said Zhu. "In this paper, we propose a new mathematical method to rapidly identify dangerous coil deviations that could appear during fabrication and assembly."

The method relies on a Hessian matrix, a mathematical tool that allows researchers to determine which variations of the magnetic coils can make the plasma change its properties. "The idea is to figure out which perturbations you really have to control or avoid, and which you can ignore," Zhu said.
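The paper's full machinery goes well beyond a press release, but the core idea can be illustrated with a toy calculation. In the sketch below (all numbers invented, not from the study), the eigen-decomposition of a small Hessian separates coil perturbations that strongly degrade a confinement figure of merit from those that barely matter:

```python
import numpy as np

# Toy illustration: near an optimized coil design the gradient of a
# confinement figure of merit f(x) vanishes, so a small perturbation dx
# changes it by roughly df ~ 0.5 * dx^T H dx, where H is the Hessian.
H = np.array([[8.0, 1.0, 0.0],
              [1.0, 5.0, 0.5],
              [0.0, 0.5, 0.1]])  # made-up Hessian for 3 coil parameters

eigvals, eigvecs = np.linalg.eigh(H)
for lam, v in sorted(zip(eigvals, eigvecs.T), key=lambda t: -t[0]):
    # Large eigenvalue: perturbations along this direction harm confinement
    # and need tight tolerances. Small eigenvalue: the tolerance can be
    # relaxed, which is what promises cheaper construction.
    print(f"eigenvalue {lam:6.2f}  direction {np.round(v, 2)}")
```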

The team recently confirmed the accuracy of the new method by using it to analyze coil placements for a configuration similar to the Columbia Non-Neutral Torus, a small fusion facility operated by Columbia University. They compared the results to those produced by past studies relying on conventional methods and found that they agreed. The team is now collaborating with researchers in China to use the method to optimize coil placement on the Chinese First Quasi-axisymmetric Stellarator (CFQS), currently under construction.

The new technique could help scientists design better stellarators, Zhu said, and could reveal optimal coil arrangements that no one had considered before.

Credit: 
DOE/Princeton Plasma Physics Laboratory

Scientists propose network of imaging centers to drive innovation in biological research

image: Cuttlefish (Sepia officinalis) skin; neurons in red.

Image: 
Trevor Wardill

WOODS HOLE, Mass. -- When sparks fly to innovate new technologies for imaging life at the microscopic scale, it is often because researchers from diverse fields are nudging each other with a kind of collegial one-upmanship.

"Look at the resolution we obtain with this microscope I've designed," the physicist says. "Great," the biologist replies, "but my research organism moves fast. Can you boost the system's speed?" "You'll have terabytes of raw data coming off that microscope system," says the computational scientist. "We'll build in algorithms to manage the data and produce the most meaningful images." Around they go, propelling a cycle of challenge and innovation that allows them to see more clearly into new dimensions of the microscopic world.

Interdisciplinary interactions are essential for driving the innovation pipeline in biological imaging, yet collegial workspaces where they can spring up and mature are lacking in the United States. Last fall, the Marine Biological Laboratory (MBL) convened a National Science Foundation workshop to identify the bottlenecks that stymie innovation in microscopy and imaging, and recommend approaches for transforming how imaging technologies are developed and deployed. The conclusions of the 79 workshop participants are summarized in a Commentary in the August issue of Nature Methods.

"We propose a network of national imaging centers that provide collaborative, interdisciplinary spaces needed for the development, application, and teaching of advanced biological imaging techniques," write the authors and workshop leaders, MBL Fellows Daniel A. Colón-Ramos of Yale University School of Medicine, Patrick La Riviere of the University of Chicago, Hari Shroff of the National Institute of Biomedical Imaging and Bioengineering, and MBL Senior Scientist Rudolf Oldenbourg.

"Creating spaces for co-locating microscopists, computational scientists and biologists, in our experience, is a key and missing ingredient in new and necessary ecosystems of innovation. Sometimes the seeds of innovation have to be planted in the same pot for them to flower and be fruitful," Colón-Ramos says.

For the proposed centers to optimally succeed, they need expert staff scientists who engage in and catalyze collaborations among the major players in the imaging ecosystem, the group concluded. The centers will provide space and expertise not only for teaching existing, high-end imaging techniques, but also for designing and testing new technologies with real-world biological applications and disseminating them promptly to staff and visiting scientists, faculty and students. All can participate in an iterative innovation process, driving discovery while becoming interdisciplinary thinkers and project developers themselves.

As a next step in transforming the vision of national imaging centers into a reality, a follow-up meeting with a variety of funding stakeholders will be scheduled in the spring of 2020. Look for an announcement at https://www.mbl.edu/nsf-workshop/. Ideas and helpful comments for the meeting may be sent to ImagingNetworks@MBL.edu.

An Imaging Initiative is in progress at the MBL, with many elements of the proposed national imaging centers already in place and others under development.

Credit: 
Marine Biological Laboratory

Scientists probe how distinct liquid organelles in cells are created

image: Like stars in the night, liquid droplets of RNA (the bright green spheres) float in a solution containing high concentrations of divalent magnesium cations. The droplets are imaged through confocal fluorescence microscopy.

Image: 
P. Onuchic, A. Milin, I. Alshareedah, A. Deniz and P. Banerjee, Scientific Reports, Aug. 21, 2019. This work is licensed under CC BY-4.0 (https://creativecommons.org/licenses/by/4.0/).

BUFFALO, N.Y. -- The interior of a human cell consists, in part, of a complex soup of millions of molecules.

One way these biological compounds stay organized is through membrane-less organelles (MLOs) -- wall-less liquid droplets made from proteins and RNA that clump together and stay separate from the rest of the cellular stew.

You can think of these fluid compartments as being akin to oil droplets in water. MLOs facilitate storage of molecules within cells and can serve as a center of biochemical activity, recruiting molecules needed to carry out essential cellular reactions.

Though these droplets are plentiful within cells, they represent an emerging field of study in cell biology. Little is known about how they are created and maintained with unique functionalities.

To address this knowledge gap, one University at Buffalo laboratory is using cutting-edge scientific techniques to probe the fundamental properties of how MLOs work. The research is led by Priya R. Banerjee, PhD, an assistant professor of physics in the UB College of Arts and Sciences.

In a paper published on Aug. 21 in Scientific Reports, Banerjee and colleagues report that MLOs may be highly sensitive to the level of divalent cations inside cells. This is important because divalent calcium and magnesium ions aid in cellular signaling and are vital to life.

In experiments, MLOs containing both proteins and RNA formed when divalent cations were present at low concentrations. But when concentrations of these cations were high, liquid organelles holding only RNA molecules were favored. The tests were systematically performed using controlled model systems comprising protein and RNA molecules floating in a buffer solution.

VIDEO: http://www.buffalo.edu/news/releases/2019/08/020.html

"It's interesting because you haven't changed the basic ingredients," Banerjee says. "But when you alter the ionic environment, you find that these organelles are highly tunable. They 'switch' from one type to the other, with each type having a distinct internal design."

The study was led by Banerjee and Ashok Deniz, PhD, associate professor of integrative structural and computational biology at Scripps Research, a nonprofit medical research institution.

The team demonstrated that fluctuations in divalent cations can profoundly tune the liquid properties of MLOs, altering the internal environment of the droplet. This is important since cells are believed to control some MLO functionality by changing their interior design. The concept of tunable intracellular droplet organelles is currently being actively investigated in Banerjee's lab at UB.

In a separate paper published earlier in 2019, Banerjee and colleagues explored another fundamental property of MLOs: conditions that drive such droplets to switch from a fluid, liquidy state to a harder, gel-like state.

"The concept that protein and nucleic acid droplets can function as organelles in a cell has started shifting the paradigm of cell biology that is written in a textbook," Banerjee says. "Reports started emerging from several different laboratories across the world that MLOs are relevant in gene regulation, protection of cells during stress, immune response and many other biological functions, as well as diseases such as neurological disorders and cancer. Therefore, understanding how MLOs are formed, tuned and altered in diseases are of key importance in the field now."

Credit: 
University at Buffalo

Shift to more intense rains threatens historic Italian winery

image: Francesco Paolo Valentini, owner of Valentini Azienda Agricola S.S., one of the oldest wineries in Italy, inspects records of grape harvests that date back more than 200 years. The winery, which has produced wine the same way for hundreds of years, is threatened by a changing climate.

Image: 
Andrea Straccini

Wine lovers may appreciate a dry white, but a lack of steady rainfall brought on by a changing climate is threatening a centuries-old winemaking tradition in Italy, according to an international team of scientists.

Researchers found that a shift from steady, gentle rains to more intense storms over the past several decades has led to earlier grape harvests, even when seasonal rainfall totals are similar. Early harvests can prevent grapes from fully developing the complex flavors found in wines.

Intense precipitation events represent the second most important factor, behind temperature, in predicting when grapes were ready at one vineyard that's been producing wine using traditional methods since the 1650s and recording harvest dates for 200 years, the scientists said.

"Our results are consistent with the hypothesis that the increasing tendency of precipitation intensity could exacerbate the effect of global warming on some premium wines that have been produced for almost 400 years," said Piero Di Carlo, associate professor of atmospheric sciences at D'Annunzio University of Chieti-Pescara in Italy.

Because the winery doesn't use irrigation or other modern techniques, its harvest records more accurately reflect what was happening with the climate each year. Scientists gathered local meteorological data and used models to simulate what factors likely most influenced grape readiness. They recently reported their findings in the journal Science of the Total Environment.

"Because they haven't changed their techniques, a lot of other variables that may have changed harvest date are taken out of the picture," said William Brune, distinguished professor of meteorology at Penn State. "It makes it cleaner statistically to look at things like precipitation intensity and temperature, and I think that's one reason why we were able to tease these findings out."

Previous studies have established a link between higher temperatures and earlier grape harvest dates at other European wineries. Higher rainfall totals can help offset advances in harvest dates caused by rising temperatures, but the impact of rain intensity was not well understood, the scientists said.

Steady rains are better for agriculture because heavy storms cause runoff, leaving less moisture for plants and soils to absorb.

The findings indicate a feedback in the climate system -- the increase in temperature precipitates more intense rainfall, which further advances the grape harvest date, according to the researchers.

"We really need to think more broadly about how increases in temperature may have an influence on other variables and how that amplification can affect not only this winery, but all wineries, and in fact all agriculture," Brune said. "I think it's a cautionary tale in that regard."

The winery in the study faces particularly difficult challenges because mitigation strategies like irrigation would change the way the vineyard has cultivated its grapes for hundreds of years. Moving to higher elevations may be an option, but land is limited, and such a change could have other, unforeseen impacts on the wine, the scientists said.

"If we would like to keep our excellence in terms of production of wine, but also in other agricultural sectors, we have to be careful about climate change," said Di Carlo, lead author on the study. "Even if we can use some strategies to adapt, we don't know if we can compensate for all of the effects. We could lose a valuable part of our local economy and tradition."

Credit: 
Penn State

Separate polarization and brightness channels give crabs the edge over predators

image: This is the fiddler crab Afruca tangeri.

Image: 
Kate Feller, University of Minnesota

Fiddler crabs see the polarisation of light and this gives them the edge when it comes to spotting potential threats, such as a rival crab or a predator. Now researchers at the University of Bristol have begun to unravel how this information is processed within the crab's brain. The study, published in Science Advances today [Wednesday 21 August], has discovered that when detecting approaching objects, fiddler crabs separate polarisation and brightness information.

The key advantage of this method is that the separate visual channels provide a greater range of information for the crab. The research also suggests that when it comes to detecting predators, polarisation can provide a more reliable source of information than brightness.

The researchers from the Ecology of Vision Group in the School of Biological Sciences tested how crabs responded to visual stimuli presented on a special computer monitor developed by the lab. By changing the polarisation and brightness of the stimuli the researchers were able to test whether certain combinations of polarisation and brightness appeared to cancel each other out.

Sam Smithers, PhD student in the School of Biological Sciences and one of the authors, said: "If you look through Polaroid sunglasses at the sky, you will notice that the brightness changes when you tilt your head. This is because the light from the sky is polarised and your sunglasses allow you to see differences in polarisation as differences in brightness."

"For our experiments we tested whether fiddler crabs see the same effect. However, we discovered that the crabs detect polarisation in a completely different way to this and polarisation has no effect on how the crabs detect the brightness of a scene."

The next step for the research team is to find out what happens to the brightness and polarisation information deeper in the brain. This will help them to understand how and when polarisation and colour information provide a visual advantage to the viewer.

Credit: 
University of Bristol

20-million-year-old skull suggests complex brain evolution in monkeys, apes

video: A high-resolution computed tomography (CT) scan of the Chilecebus carrascoensis fossil skull.

Image: 
© Xijun Ni and AMNH

It has long been thought that the brain size of anthropoid primates--a diverse group of modern and extinct monkeys, humans, and their nearest kin--progressively increased over time. New research on one of the oldest and most complete fossil primate skulls from South America shows instead that the pattern of brain evolution in this group was far more checkered. The study, published today in the journal Science Advances and led by researchers from the American Museum of Natural History, the Chinese Academy of Sciences, and the University of California Santa Barbara, suggests that the brain enlarged repeatedly and independently over the course of anthropoid history, and was more complex in some early members of the group than previously recognized.

"Human beings have exceptionally enlarged brains, but we know very little about how far back this key trait started to develop," said lead author Xijun Ni, a research associate at the Museum and a researcher at the Chinese Academy of Sciences. "This is in part because of the scarcity of well-preserved fossil skulls of much more ancient relatives."

As part of a long-term collaboration with John Flynn, the Museum's Frick Curator of Fossil Mammals, Ni spearheaded a detailed study of an exceptional 20-million-year-old anthropoid fossil discovered high in the Andes mountains of Chile, the skull and only known specimen of Chilecebus carrascoensis.

"Through more than three decades of partnership and close collaboration with the National Museum of Chile, we have recovered many remarkable new fossils from unexpected places in the rugged volcanic terrain of the Andes," Flynn said. "Chilecebus is one of those rare and truly spectacular fossils, revealing new insights and surprising conclusions every time new analytical methods are applied to studying it."

Previous research by Flynn, Ni, and their colleagues on Chilecebus provided a rough idea of the animal's encephalization, or the brain size relative to body size. A high encephalization quotient (EQ) signifies a large brain for an animal of a given body size. Most primates have high EQs relative to other mammals, although some primates--especially humans and their closest relatives--have even higher EQs than others. The latest study takes this understanding one step further, illustrating the patterns across the broader anthropoid family tree. The resulting "PEQ"--or phylogenetic encephalization quotient, which corrects for the effects of close evolutionary relationships--for Chilecebus is relatively small, at 0.79. Most living monkeys, by comparison, have PEQs ranging from 0.86 to 3.39, with humans coming in at an extraordinary 13.46, a dramatic expansion even compared with their nearest relatives. With this new framework, the researchers confirmed that cerebral enlargement occurred repeatedly and independently in anthropoid evolution, in both New and Old World lineages, with occasional decreases in size.
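For intuition, a bare-bones encephalization quotient can be computed from an allometric baseline. The sketch below uses Jerison's classic mammalian scaling (expected brain mass proportional to body mass to the 2/3 power) with made-up masses; the study's PEQ additionally corrects the baseline for phylogenetic relatedness, which this toy version omits:

```python
# EQ = observed brain mass / brain mass expected for that body mass.
# Baseline: Jerison's mammalian fit, E_expected = 0.12 * M^(2/3),
# with both masses in grams.
def eq(brain_g: float, body_g: float) -> float:
    expected = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected

# Hypothetical masses, for illustration only:
print(f"EQ = {eq(8.0, 600.0):.2f}")  # a small, monkey-sized example
```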

High-resolution x-ray computed tomography (CT) scanning and 3D digital reconstruction of the inside of Chilecebus' skull gave the research team new insights into the anatomy of its brain. In modern primates, the sizes of the visual and olfactory centers in the brain are negatively correlated, reflecting a potential evolutionary "trade-off," meaning that visually acute primates typically have weaker senses of smell. Surprisingly, the researchers discovered that a small olfactory bulb in Chilecebus was not counterbalanced by an amplified visual system. This finding indicates that in primate evolution the visual and olfactory systems were far less tightly coupled than was widely assumed.

Other findings: The size of the opening for the optic nerve suggests that Chilecebus was diurnal. Also, the infolding (sulcus) pattern of the brain of Chilecebus, although far simpler than in most modern anthropoids, possesses at least seven pairs of sulcal grooves and is surprisingly complex for such an ancient primate.

"During his epic voyage on the Beagle, Charles Darwin explored the mouth of the canyon where Chilecebus was discovered 160 years later. Shut out of the higher cordillera by winter snow, Darwin was inspired by 'scenes of the highest interest' his vista presented. This exquisite fossil, found just a few kilometers east of where Darwin stood, would have thrilled him," said co-author André Wyss from the University of California Santa Barbara.

Credit: 
American Museum of Natural History

Ocean temperatures turbocharge April tornadoes over Great Plains region

image: Temperature and atmospheric pressure conditions that lead to enhanced flow of moist and quickly spinning air into the Great Plains region and increased tornado occurrences in April. H and L refer to unusually high and low atmospheric pressure. Red and blue shading indicates warmer and colder ocean conditions. Inset figure illustrates moisture conditions in the southeastern US for years with more April tornadoes.

Image: 
Jung-Eun Chu

New research, published in the journal Science Advances, has found that unusual ocean temperatures in the tropical Pacific and Atlantic can drastically increase April tornado occurrences over the Great Plains region of the United States.

2019 has seen the second highest number of January to May tornadoes in the United States since 2000, with several deadly outbreaks claiming more than 38 fatalities. Why some years are very active, whereas others are relatively calm, has remained an unresolved mystery for scientists and weather forecasters.

Climate researchers from the IBS Center for Climate Physics (ICCP), South Korea, have found new evidence implicating a role for ocean temperatures in US tornado activity, particularly in April. Analyzing large volumes of atmospheric data and climate computer model experiments, the scientists discovered that a cold tropical Pacific and/or a warm Gulf of Mexico are likely to generate large-scale atmospheric conditions that enhance thunderstorms and a tornado-favorable environment over the Southern Great Plains.

This particular atmospheric situation, with alternating high- and low-pressure centers located in the central Pacific, Eastern United States and over the Gulf of Mexico, is known as the negative Pacific North America (PNA) pattern (Figure 1). According to the new research, ocean temperatures can boost this weather pattern in April. The corresponding high pressure over the Gulf of Mexico then funnels quickly-rotating moist air into the Great Plains region, which in turn fuels thunderstorms and tornadoes.

"Previous studies have overlooked the temporal evolution of ocean-tornado linkages. We found a clear relationship in April, but not in May ", says Dr. Jung-Eun Chu, lead author of the study and research fellow at the ICCP.

"Extreme tornado occurrences in the past, such as those in April 2011, were consistent with this blueprint. Cooler than normal conditions in the tropical Pacific and a warm Gulf of Mexico intensify the negative PNA, which then turbocharges the atmosphere with humid air and more storm systems", explains Axel Timmermann, Director of the ICCP and Professor at Pusan National University.

"Seasonal ocean temperature forecasts for April, which many climate modeling centers issue regularly, may further help in predicting the severity of extreme weather conditions over the United States", says June-Yi Lee, Professor at Pusan National University and Coordinating Lead Author of the 6th Assessment report of the Intergovernmental Panel on Climate Change.

"How Global Warming will influence extreme weather over North America, including tornadoes still remains unknown ", says. Dr. Chu. To address this question, the researchers are currently conducting ultra-high-resolution supercomputer simulations on the institute's new supercomputer Aleph.

Credit: 
Institute for Basic Science

Study identifies main culprit behind lithium metal battery failure

image: Chengcheng Fang uses a technique that UC San Diego researchers invented to quantify inactive lithium.

Image: 
David Baillot/UC San Diego Jacobs School of Engineering

A research team led by the University of California San Diego has discovered the root cause of why lithium metal batteries fail--bits of lithium metal deposits break off from the surface of the anode during discharging and are trapped as "dead" or inactive lithium that the battery can no longer access.

The discovery, published Aug. 21 in Nature, challenges the conventional belief that lithium metal batteries fail because of the growth of a layer, called the solid electrolyte interphase (SEI), between the lithium anode and the electrolyte. The researchers made their discovery by developing a technique to measure the amounts of inactive lithium species on the anode--a first in the field of battery research--and studying their micro- and nanostructures.

The findings could pave the way for bringing rechargeable lithium metal batteries from the lab to the market.

"By figuring out the major underlying cause of lithium metal battery failure, we can rationally come up with new strategies to solve the problem," said first author Chengcheng Fang, a materials science and engineering Ph.D. student at UC San Diego. "Our ultimate goal is to enable a commercially viable lithium metal battery."

Lithium metal batteries, which have anodes made of lithium metal, are an essential part of the next generation of battery technologies. They promise twice the energy density of today's lithium-ion batteries (which usually have anodes made of graphite), so they could last longer and weigh less. This could potentially double the range of electric vehicles.

But a major issue with lithium metal batteries is low Coulombic efficiency, meaning they undergo a limited number of cycles before they stop working. That's because as the battery cycles, its stores of active lithium and electrolyte get depleted.

Battery researchers have long suspected that this is due to the growth of the solid electrolyte interphase (SEI) layer between the anode and the electrolyte. But although researchers have developed various ways to control and stabilize the SEI layer, they still have not fully resolved the problems with lithium metal batteries, explained senior author Y. Shirley Meng, a nanoengineering professor at UC San Diego.

"The cells still fail because a lot of inactive lithium is forming in these batteries. So there is another important aspect that is being overlooked," Meng said.

The culprits, Meng, Fang and colleagues found, are lithium metal deposits that break off of the anode when the battery is discharging and then get trapped in the SEI layer. There, they lose their electrical connection to the anode, becoming inactive lithium that can no longer be cycled through the battery. This trapped lithium is largely responsible for lowering the Coulombic efficiency of the cell.

Measuring the ingredients of inactive lithium

The researchers identified the culprit by creating a method to measure how much unreacted lithium metal gets trapped as inactive lithium. Water is added to a sealed flask containing a sample of inactive lithium that formed on a cycled half-cell. Any bits of unreacted lithium metal chemically react with water to produce hydrogen gas. By measuring how much gas is produced, researchers can calculate the amount of trapped lithium metal.

Inactive lithium is also made up of another component: lithium ions, which are the building blocks of the SEI layer. Their amount can also be calculated simply by subtracting the amount of unreacted lithium metal from the total amount of inactive lithium.
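In outline, the accounting reduces to reaction stoichiometry plus the ideal gas law; here is a minimal sketch with invented numbers (the actual protocol and cell capacities are in the paper):

```python
# Unreacted lithium reacts as 2 Li + 2 H2O -> 2 LiOH + H2, so every mole
# of collected H2 corresponds to two moles of trapped metallic lithium.
R = 8.314         # J/(mol*K), gas constant
T = 298.0         # K, room temperature
P = 101325.0      # Pa, atmospheric pressure
V_h2 = 2.0e-6     # m^3 of H2 collected (2 mL, made-up value)

n_h2 = P * V_h2 / (R * T)    # moles of H2 via the ideal gas law
n_li_metal = 2.0 * n_h2      # moles of unreacted Li metal

# Total inactive lithium follows from the cell's capacity loss; the
# SEI-bound Li+ is the remainder (value below is illustrative).
n_inactive_total = 2.0e-4    # mol, assumed from coulometry
n_li_sei = n_inactive_total - n_li_metal
print(f"Li metal: {n_li_metal:.2e} mol, SEI Li+: {n_li_sei:.2e} mol")
```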

In tests on lithium metal half-cells, the researchers found that unreacted lithium metal is the main ingredient of inactive lithium: the more of it forms, the lower the Coulombic efficiency sinks. Meanwhile, the amount of lithium ions from the SEI layer stays consistently low. These results were observed in eight different electrolytes.

"This is an important finding because it shows that the primary failure product of lithium metal batteries is unreacted metallic lithium instead of the SEI," Fang said. "This is a reliable method to quantify the two components of inactive lithium with ultra-high accuracy, which no other characterization tool has been capable of doing."

"The aggressive chemical nature of lithium metal has made this task very challenging. Parasitic reactions of many different types occur simultaneously on lithium metal, making it almost impossible to differentiate these different types of inactive lithium," said Kang Xu, whose team at the U.S. Army Combat Capabilities Development Command Army Research Laboratory provided one of the advanced electrolyte formulations to test the method. "The advanced methodology established in this work provides a very powerful tool to do this in a precise and reliable way."

The researchers hope their method could become the new standard for evaluating efficiency in lithium metal batteries.

"One of the problems battery researchers face is that testing conditions are very different across labs, so it's hard to compare data. It's like comparing apples to oranges. Our method can enable researchers to determine how much inactive lithium forms after electrochemical testing, regardless of what type of electrolyte or cell format they use," Meng said.

A closer look at inactive lithium

By studying the micro- and nanostructures of lithium deposits in different electrolytes, the researchers answer another important question: why some electrolytes improve Coulombic efficiency while others do not.

The answer has to do with how lithium deposits on the anode when the cell is charging. Some electrolytes cause lithium to form micro- and nanostructures that boost cell performance. For example, in an electrolyte specially designed by Meng's collaborators at General Motors, lithium deposits as dense, column-shaped chunks. This type of structure causes less unreacted lithium metal to get trapped in the SEI layer as inactive lithium during discharge. The result is a Coulombic efficiency of 96 percent for the first cycle.

"This excellent performance is attributed to the columnar microstructure formed on the surface of the current collector with minimum tortuosity, which significantly enhances the structural connection," said Mei Cai, whose team at General Motors developed the advanced electrolyte that enabled lithium to deposit with the "ideal" microstructure.

In contrast, when a commercial carbonate electrolyte is used, lithium deposits with a twisty, whisker-like morphology. This structure causes more lithium metal to get trapped in the SEI during the stripping process. Coulombic efficiency lowers to 85 percent.

Moving forward, the team proposes strategies to control the depositing and stripping of lithium metal. These include applying pressure on the electrode stacks; creating SEI layers that are uniform and mechanically elastic; and using 3D current collectors.

"Control of the micro- and nanostructure is key," Meng said. "We hope our insights will stimulate new research directions to bring rechargeable lithium metal batteries to the next level."

Credit: 
University of California - San Diego

Breaking up is hard to do

Physicists used to think that superconductivity - electricity flowing without resistance or loss - was an all-or-nothing phenomenon. But new evidence suggests that, at least in copper oxide superconductors, it's not so clear-cut.

Superconductors have amazing properties, and in principle could be used to build loss-free transmission lines and magnetic trains that levitate above superconducting tracks. But most superconductors only work at temperatures close to absolute zero. This threshold, called the critical temperature, is often only a few kelvin, and reaching it requires liquid-helium cooling, making such superconductors too expensive for most commercial uses. A few superconductors, however, have a much warmer critical temperature, closer to the temperature of liquid nitrogen (77 K), which is much more affordable.

Many of these higher-temperature superconductors are based on a two-dimensional form of copper oxide.

"If we understood why copper oxide is a superconductor at such high temperatures, we might be able to synthesize a better one" that works closer to room temperature (293K), says UConn physicist Ilya Sochnikov.

Sochnikov and his colleagues at Rice University, Brookhaven National Lab and Yale recently figured out part of that puzzle, and they report their results in the 21 August issue of Nature.

Their discovery was about how electrons behave in copper oxide superconductors. Electrons are the particles that carry electric charge through our everyday electronics. When a bunch of electrons flow in the same direction, we call that an electric current. In a normal electric circuit, say the wiring in your house, electrons bump and jostle each other and the surrounding atoms as they flow. That wastes some energy, which leaves the circuit as heat. Over long distances, that wasted energy can really add up: long-distance transmission lines in the U.S. lose on average 5% of their electricity before reaching a consumer, according to the Energy Information Administration.

But in a superconductor below its critical temperature, electrons behave totally differently. Instead of bumping and jostling, they pair up and move in sync with the other electrons in a kind of wave. If electrons in a normal current are a rushing, uncoordinated mob, electrons in a superconductor are like dancing couples gliding across a ballroom floor. It's this friction-free dance - coherent motion - of paired electrons that makes a superconductor what it is.

The electrons are so happy in pairs in a superconductor that it takes a certain amount of energy to pull them apart. Physicists can measure this energy with an experiment that measures how big a voltage is needed to tear an electron away from its partner. They call it the 'gap energy'. The gap energy disappears when the temperature rises above the critical temperature and the superconductor changes into an ordinary material. Physicists assumed this is because the electron pairs have broken up. And in classic, low-temperature superconductors, it's pretty clear that that's what's happening.
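For a sense of scale, in conventional superconductors the gap tracks the critical temperature through the BCS relation 2Δ ≈ 3.53 kB Tc, so the gap collapses together with superconductivity at Tc. A back-of-envelope calculation (cuprates deviate from this relation, which is part of the puzzle):

```python
# BCS estimate of the pairing gap: 2*Delta = 3.53 * kB * Tc.
kB = 8.617e-5           # eV/K, Boltzmann constant
for tc in (9.3, 40.0):  # niobium vs. an LSCO-like cuprate, in kelvin
    delta_mev = 3.53 * kB * tc / 2.0 * 1e3
    print(f"Tc = {tc:5.1f} K -> BCS-like gap of ~{delta_mev:.1f} meV")
```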

But Sochnikov and his colleagues wanted to know whether this was really true for copper oxides. Copper oxides behave a little differently than classic superconductors. Even when the temperature rises well above the critical level, the energy gap persists for a while, diminishing gradually. It could be a clue as to what makes them different.

The researchers set up a version of the gap energy experiment to test this. They made a precise sandwich of two slices of copper oxide superconductor separated by a thin filling of electrical insulator. Each slice was just a few nanometers thick. The researchers then applied a voltage between them. Electrons began to tunnel from one slice of copper oxide to the other, creating a current.

By measuring the noise in that current, the researchers found that a significant number of the electrons seemed to be tunneling in pairs instead of singly, even above the critical temperature. Only about half the electrons tunneled in pairs, and this number dropped as the temperature rose, but it tapered off only gradually.
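The logic of that inference can be sketched in a few lines. For uncorrelated (Poissonian) tunneling, the Schottky formula ties the current noise to the carrier charge, so a noise-to-current ratio larger than that of single electrons signals pairs. The numbers below are illustrative, not the paper's:

```python
# Schottky formula: noise spectral density S = 2*q*I for carriers of
# charge q. If a fraction f of the current is carried by pairs (2e) and
# the rest by single electrons (e), then q_eff = S/(2*I) = e*(1 + f).
e = 1.602e-19        # C, elementary charge

S_measured = 4.0e-28 # A^2/Hz, assumed noise measurement
I = 1.0e-9           # A, assumed tunneling current

q_eff = S_measured / (2.0 * I)
f_pairs = q_eff / e - 1.0
print(f"effective charge: {q_eff / e:.2f} e")
print(f"fraction of current carried by pairs: {f_pairs:.2f}")
```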

"Somehow they survive," Sochnikov says, "they don't break fully." He and his colleagues are still not sure whether the paired states are the origin of the high-temperature superconductivity, or whether it's a competing state that the superconductor has to win out over as the temperature falls. But either way, their discovery puts a constraint on how high temperature superconductors happen.

"Our results have profound implications for basic condensed matter physics theory," says co-author Ivan Bozovic, group leader of the Oxide Molecular Beam Epitaxy Group in the Condensed Matter Physics and Materials Science Division at the U.S. Department of Energy's Brookhaven National Laboratory and professor of applied physics at Yale University. Sochnikov agrees.

"There's a thousand theories about copper oxide superconductors. This work allows us to narrow it down to a much smaller pool. Essentially, our results say that any theory has to pass a qualifying exam of explaining the existence of the observed electron pairs," Sochnikov says. He and his collaborators at UConn, Rice University, and Brookhaven National Laboratory plan to tackle the remaining open questions by designing even more precise materials and experiments.

Credit: 
University of Connecticut

A hallmark of superconductivity, beyond superconductivity itself

image: Rice University physicists (from left) Liyang Chen, Panpan Zhou and Doug Natelson and colleagues at Brookhaven National Laboratory and the University of Connecticut found evidence of electron pairing -- a hallmark feature of superconductivity -- at temperatures and energies well above the critical threshold where superconductivity occurs. The research appears this week in Nature.

Image: 
Photo by Jeff Fitlow/Rice University

HOUSTON -- (Aug. 21, 2019) -- Physicists have found "electron pairing," a hallmark feature of superconductivity, at temperatures and energies well above the critical threshold where superconductivity happens.

Rice University's Doug Natelson, co-corresponding author of a paper about the work in this week's Nature, said the discovery of Cooper pairs of electrons "a bit above the critical temperature won't be 'crazy surprising' to some people. The thing that's more weird is that it looks like there are two different energy scales. There's a higher energy scale where the pairs form, and there's a lower energy scale where they all decide to join hands and act collectively and coherently, the behavior that actually brings about superconductivity."

Electrical resistance is so common in the modern world that most of us take it for granted that computers, smartphones and electrical appliances warm up during use. That heating happens because electricity doesn't flow freely through the metal wires and silicon chips inside the devices. Instead, flowing electrons occasionally bump into atoms or one another, and each collision produces a tiny bit of heat.

Physicists have known since 1911 that electricity can flow without resistance in materials called superconductors. And in 1957, they figured out why: Under specific conditions, including typically very cold temperatures, electrons join together in pairs -- something that's normally forbidden due to their mutual repulsion -- and as pairs, they can flow freely.

"To get superconductivity, the general feeling is that you need pairs, and you need to achieve some sort of coherence among them," said Natelson, who partnered on the research with experts at Rice, Brookhaven National Laboratory and the University of Connecticut. "The question, for a long time, was, 'When do you get pairs?' Because in conventional superconductors as soon as you formed pairs, coherence and superconductivity would follow."

Electron pairs are named for Leon Cooper, the physicist who first described them. In addition to explaining classical superconductivity, physicists believe Cooper pairs bring about high-temperature superconductivity, an unconventional variant discovered in the 1980s. It was dubbed "high-temperature" because it occurs at temperatures that, although still very cold, are considerably higher than those of classical superconductors. Physicists have long dreamed of making high-temperature superconductors that work at room temperature, a development that would radically change the way energy is made, moved and used worldwide.

But while physicists have a clear understanding of how and why electron pairing happens in classical superconductors, the same cannot be said of high-temperature superconductors like the lanthanum strontium copper oxide (LSCO) featured in the new study.

Every superconductor has a critical temperature at which electrical resistance disappears. Natelson said theories and studies of copper-oxide superconductors over the past 20 years have suggested that Cooper pairs form above this critical temperature and only become coherently mobile when the material is cooled to the critical temperature.

"If that's true, and you've already got pairs at higher temperatures, the question is, 'Can you also get coherence at those temperatures?'" Natelson said. "Can you somehow convince them to start their dance in the region known as the pseudogap, a phase space at higher temperatures and energy scales than the superconducting phase."

In the Nature study, Natelson and colleagues found evidence of this higher-energy pairing in the conduction noise in ultrapure LSCO samples grown in the lab of Brookhaven's Ivan Bozovic, co-corresponding author of the study.

"He grows the best material in the world, and our measurements and conclusions were only possible because of the purity of those samples," Natelson said. "He and his team made devices called tunnel junctions, and instead of just looking at the electrical current, we looked at fluctuations in the current called shot noise.

"In most cases, if you measure current, you're measuring an average and ignoring the fact that current comes in chunks of charge," Natelson said. "It's something like the difference between measuring the average daily rainfall at your home as opposed to measuring the number of raindrops that are falling at any given time."

By measuring the variation in the discrete amount of electrical charge flowing through LSCO junctions, Natelson and colleagues found that the passage of single electrons could not account for the amount of charge flowing through the junctions at temperatures and voltages well above the critical temperature where superconductivity occurred.

"Some of the charge must be coming in larger chunks, which are the pairs," he said. "That's unusual, because in a conventional superconductor, once you go above the characteristic energy scale associated with superconductivity, the pairs get ripped apart, and you only see single charges.

"It looks like LCSO contains another energy scale where the pairs form but aren't yet acting collectively," Natelson said. "People have previously offered theories about this sort of thing, but this is the first direct evidence for it."

Natelson said it's too early to say whether physicists can make use of the new knowledge to coax pairs to flow freely at higher temperatures in unconventional superconductors. But Bozovic said the discovery has "profound implications" for theoretical physicists who study high-temperature superconductors and other types of condensed matter.

"In some sense, the textbook chapters have to be rewritten," Boz?ovic? said. "From this study, it appears that we have a new type of metal, in which a significant fraction of the electrical current is carried by electron pairs. On the experimental side, I expect that this finding will trigger much follow-up work -- for example, using the same technique to test other cuprates or superconductors, insulators and layer thicknesses."

Credit: 
Rice University

Measuring the charge of electrons in a high-temp superconductor

image: Brookhaven Lab scientists Myung-Geun Han (sitting front), Ivan Bozovic (sitting back), Yimei Zhu (standing back), and Anthony Bollinger of the Condensed Matter Physics and Materials Science Division built electronic transport devices that sandwich a thin layer of an insulating material in between two thicker layers of superconducting materials. They characterized these devices using an electron microscope (seen in background). In collaboration with scientists at Rice University and the University of Connecticut, the team discovered that a high percentage of electron pairs -- which are known to carry superconducting current -- exists well above expected temperature and energy ranges in a material that conducts electricity without energy loss at unusually high temperatures.

Image: 
Brookhaven National Laboratory

UPTON, NY--A team of scientists has collected experimental evidence indicating that a large concentration of electron pairs forms in a copper-oxide (cuprate) material at a much higher temperature than the "critical" one (Tc) at which it becomes superconducting, or able to conduct electricity without energy loss. They also detected these pairs way above the superconducting energy gap, or a range of energies that electrons cannot possess.

Their results, published today in the journal Nature, may point at a way to boost the superconducting properties of cuprates and to find new--and perhaps better--high-temperature superconductors (HTS). While cuprates and other HTS become superconducting at unusually high temperatures compared to conventional superconductors, they still require expensive cooling with liquid nitrogen to operate. Research aimed at understanding the mechanism behind high-temperature superconductivity could help scientists further raise Tc to an economically practical level at which large-scale applications, such as lossless power lines across the electric grid, could be enabled.

"If you don't know where to look, the phase space is infinite," said co-corresponding author Ivan Bozovic, group leader of the Oxide Molecular Beam Epitaxy Group in the Condensed Matter Physics and Materials Science (CMPMS) Division at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory and professor of applied physics at Yale University. "Our measurement spectra directly revealed a high percentage of electron pairs well above the expected temperature and energy ranges. The exciting prospect now is to leverage this knowledge to enhance superconductivity in cuprates by chemically or physically tweaking some parameters, or to search for other strange metals in which such pairs could exist."

A mysterious electronic state

In the textbook explanation of superconductivity, electrons spinning in antiparallel directions (think of two little magnets, one pointing north and the other south) pair up at and below Tc, collectively condensing into a "superfluid" that flows freely without resistance. But cuprates do not conform to this picture. For one, their "normal" (non-superconducting) and superconducting states do not resemble those of conventional metals and superconductors. In particular, their density of states (DOS)--the number of allowed electron states for a given volume at a given energy level--is different. For the normal state of ordinary metals, the DOS is constant. Once the metal goes superconducting below Tc, a suppression in the DOS develops at low energy levels--the so-called superconducting (energy) gap. Unexpectedly, in cuprates, such a DOS depression is observed even at temperatures well above Tc, even though the material is not superconducting.
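For the textbook (BCS) case, the shape of that suppression is known in closed form. The sketch below tabulates it for an arbitrary gap, purely to make the "gap" picture concrete; the cuprate pseudogap is precisely the depression this formula does not explain:

```python
import numpy as np

# BCS density of states: N(E)/N0 = |E| / sqrt(E^2 - Delta^2) outside the
# gap (|E| > Delta), and zero inside it.
delta = 1.0                     # gap, arbitrary energy units
E = np.linspace(-3.0, 3.0, 13)  # energies to tabulate

dos = np.zeros_like(E)
outside = np.abs(E) > delta
dos[outside] = np.abs(E[outside]) / np.sqrt(E[outside] ** 2 - delta ** 2)

for e_val, n in zip(E, dos):
    print(f"E = {e_val:5.2f}  N/N0 = {n:5.2f}")
```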

"This so-called "pseudogap" has been one of the main mysteries related to cuprates and HTS for the past 30 years," explained Bozovic. "Many proposals have been put forward to explain its nature and origin. One picture is that the pseudogap is characterized by a formation of electron pairs. In our previous comprehensive studies of the key physical properties of cuprate superconductors, we presented experimental results supporting this scenario. However, the evidence was circumstantial, so we decided to test the theory directly by measuring the charge of mobile carriers in the pseudogap."

Charge measurements

Most of the available methods to make this measurement involve nanostructuring the material--for example, as very small dots or wires--through a patterning technique called electron-beam lithography. However, because of the short coherence length of electron pairs (distance between the two electrons), these methods require devices that are sized on the scale of a few nanometers.

"Nobody to this day has been able to do lithography on this scale on cuprates and keep them superconducting," said Bozovic.

One method that does not require nanolithography is to take "shot noise" measurements in tunnel junctions, which are electronic transport devices that sandwich a thin layer of an insulating material in between two thicker layers of electrically conducting materials (top and bottom electrodes). When current is run vertically, electrons can tunnel (jump) across the insulating layer from one electrode to the other.

"If you have a statistical process like shooting a dart at a dart board, not all of the darts will hit in the same place," explained Bozovic. "They will be spread out, showing some distribution. Analogously, if we very precisely measure the direct current flowing through a conductor, we will observe that the current varies with time in a random manner, even if only slightly. In this study, the statistical fluctuations are in electric current when the electrons transport through the device. The current does not continuously flow like a waterfall. Instead, the number of electrons passing through varies with time, showing a characteristic distribution. These deviations from the average current are called the shot noise, which directly reveals the charge of the carriers."

The challenge is that in order to measure the shot noise, you need to restrict the current to be very low (otherwise these deviations may be hard or impossible to detect). The thin insulating layer can act as a barrier to provide this restriction.

In this case, the scientists built tunnel junctions with a nine-atom-thick insulating layer made of lanthanum copper oxide in between two thicker superconductor layers made of lanthanum strontium copper oxide. To make the three-layered structures, they used a sophisticated synthesis technology called layer-by-layer molecular beam epitaxy, which Bozovic and his group have been perfecting at Brookhaven for nearly two decades. This fabrication is technically very challenging because superconductivity decays very close to the surface of cuprates. Thus, atomically smooth surfaces are required; otherwise, imperfections would kill the tunneling current. The scientists verified that the interfaces were atomically perfect by imaging the structures and mapping their elemental composition with advanced electron microscopy and spectroscopy techniques developed by co-author Yimei Zhu--a physicist and leader of the Electron Microscopy and Nanostructure Group in Brookhaven's CMPMS Division--and his group members.

"The diameter of our tunnel junction devices is not nanometers--one-billionth of a meter--but micrometers," said Bozovic. "Lithography is much easier to do on this larger scale."

The scientists then measured the current-voltage characteristics of the fabricated devices over a range of temperatures, energies (voltages), and doping concentrations (dopants are elements added to a material to alter its electronic properties). They performed the measurements at Brookhaven; Rice University, where professor Douglas Natelson and his group used high-precision shot noise instrumentation; and the University of Connecticut, where professor Ilya Sochnikov used an ultracold refrigerator to access the lowest temperatures possible. For all three independent measurements, the scientists obtained plots with the same curves.

The shot noise spectra directly showed that, in these cuprate devices, preformed pairs exist well above the Tc and superconducting energy gap. The lower the doping level is, the higher the percentage of pairs.

"Our results have profound implications for basic condensed matter physics theory," said Bozovic. "In some sense, the textbook chapters have to be rewritten. From this study, it appears that we have a new type of metal, in which a significant fraction of the electrical current is carried by electron pairs. On the experimental side, I expect that this finding will trigger much follow-up work--for example, using the same technique to test other cuprates or superconductors, insulators, and layer thicknesses."

Credit: 
DOE/Brookhaven National Laboratory

Study shows hazardous patterns of prescription opioid misuse in the US

August 21, 2019 -- Among adults aged 18 years and older, 31 percent used prescription opioids medically, only as prescribed by a physician, and 4 percent misused them. Thus, of all past-12-month prescription opioid users, the overwhelming majority (88 percent) used the drugs for medical purposes only and 12 percent misused them, according to a new study at Columbia University Mailman School of Public Health and Vagelos College of Physicians and Surgeons, and the New York State Psychiatric Institute. The findings are published in the American Journal of Public Health.

Among the 12 percent of users who misused prescription opioids, almost 60 percent had misused their own prescription opioids either exclusively (27 percent) or together with prescription opioids obtained from a nonmedical source (31 percent). Most misusers who relied solely on opioids obtained without a prescription (88 percent) got their last prescription opioids from a friend or relative. Almost all misusers of their own prescriptions only (98 percent) obtained their last prescription opioid from one doctor.

"Identifying the characteristics of prescription opioids misusers compared with those who use opioids only as prescribed is crucial for understanding who is most at risk for adverse outcomes from the drugs and for targeting prevention and treatment efforts," said Denise Kandel, PhD, professor of Sociomedical Sciences in Psychiatry at Columbia Mailman School.

Using data from the 2016-2017 National Surveys on Drug Use and Health, the researchers compared exclusive medical prescription opioid users with three groups of misusers: misusers without prescriptions, misusers of their own prescriptions, and misusers of both kinds. The researchers also examined nicotine use and dependence, and past-12-month alcohol and marijuana use and disorder, defined per DSM-IV criteria for dependence or abuse.

The three most frequently misused prescription opioids were hydrocodone, oxycodone, and tramadol. Fentanyl was used medically to alleviate pain, especially among prescription-only misusers. Without-and-with-prescription misusers had the highest rates of heroin use. Misusers of opioids without prescriptions were younger than other groups.

"We found that misusers were more likely to be depressed than exclusive medical users," said Kandel. The data also showed that prescription opioid misusers were more likely to have been treated for alcohol, to have a marijuana disorder and to perceive drug use as less risky. "They also had higher rates of prescription opioid use disorder, heroin use, and benzodiazepine misuse -- a very hazardous pattern of substance use."

"Failure to obtain pain relief from medical regimen is a major motivating factor for opioid misuse and underscores the urgent need for patients' access to effective pain management," Kandel observed. "From a public health perspective, our findings suggest that strategies to reduce harm from prescription opioids must consider different types of users and misusers."

The longitudinal population data necessary to understand which prescription opioid users are most at risk for negative outcomes are not yet available. "Our study suggests that prescription opioid misusers who misuse both their own prescriptions and prescription opioid drugs not prescribed to them may be most at risk for overdose," Kandel noted.

Credit: 
Columbia University's Mailman School of Public Health

'Kissing loops' in RNA molecule essential for its role in tumor suppression

image: This is an artistic representation of kissing loops.

Image: 
Tobias Wüstenfeld

Human cells - like those of many other organisms - have developed mechanisms to protect us from cancer. Healthy cells produce a suite of molecules that stop harmful mutations from accumulating. The most famous guardian of our genome is the protein p53: whenever p53 becomes inactive or malfunctions, the risk of developing cancer increases sharply. MEG3, which has been studied in detail by Marco Marcia and his group at EMBL Grenoble, is another cancer-preventing molecule that our cells produce. It works by stimulating p53. However, unlike p53, MEG3 is not a protein; it belongs to a class of RNA molecules discovered within the last 20 years, called long non-coding RNAs, or lncRNAs for short.

While human cells likely contain more lncRNAs than proteins, the biological importance and mechanisms of action of these RNAs remain largely obscure. Some lncRNAs, like MEG3, are linked to diseases, but scientists have not been able to decipher how they work exactly. This has triggered scepticism among some researchers in the field, says Marcia: "Because of the lack of molecular understanding of how lncRNAs work, many scientists still question the actual functional relevance of these molecules."

Why shape and function are interlinked

Marcia's aim was to change this perception by studying the three-dimensional shape of lncRNAs. He and his group hope that knowing more about lncRNA structures will help to explain how these molecules function.

"3D structures provide the molecular map, the molecular cartography of biological molecules. When one visits a new city, one wants to know where the railway station is, where the city hall, the schools, the parks are, because those are the elements of the city that make it function properly. The same is true for biomolecules: you want to know how they are folded and structured so that you can identify their functional units," says Marcia.

Using biochemistry, cell biology and single-particle atomic force microscopy, the team studied the structure of MEG3 in great detail. The group systematically removed and modified the building blocks of MEG3, to find out which of them are essential for its functionality. This way, the researchers discovered two elements inside the molecule that are more important for its function as a tumour suppressor than others. Interestingly, these elements form hairpin structures, which biologists call 'kissing loops', that interact with each other in three dimensions.

When these kissing loops were disrupted by manipulating the building blocks of MEG3, the tumour suppression function of MEG3 was also disrupted. The findings of the group might have wider implications, explains Marcia: "The fact that the 3D structure of lncRNAs is important sheds new light on these molecules. It shows that lncRNA molecules are much more sophisticated than we thought, because they need to be controlled and folded very precisely to work properly."

Improving diagnosis and treatment of brain cancer

MEG3 is abundant in different mammalian tissues, particularly the brain and endocrine glands like the pituitary gland. Tumours in the brain and the pituitary gland can develop when MEG3 is not working properly. To date, these types of tumours can only be treated by surgery. One way of overcoming the need for invasive surgery would be to stimulate MEG3 activity in the tumour.

Designing drugs that stabilise the MEG3 kissing loops might improve its tumour suppressor function to the point where it can arrest tumour growth. Knowledge about the composition of healthy MEG3 might also help to identify people with abnormally folded MEG3 who are at higher risk of developing cancer.

The unsolved mysteries around MEG3 and other lncRNAs

Despite their meticulous work, the group has not solved the whole puzzle surrounding MEG3, emphasises Marcia: "We still need to discover the precise order of events that lead to MEG3-dependent activation and stimulation of p53."

The results provided in this new paper, which also includes contributions from colleagues at the Institut de Biologie Structurale (Grenoble), the Department CIBIO (University of Trento) and the Max Delbrück Center (Berlin), present the most detailed molecular insight into a lncRNA to date, but they raise the question of whether 3D structures are equally important for the function of other lncRNAs. Marcia and his group want to continue analysing the structure and function of further lncRNAs. However, it will be impossible for them alone to study the many thousands of lncRNAs that human cells produce. "I truly hope that the methods and experimental approach we have followed will stimulate other colleagues in the community to pick up the baton from our study and help us expand the characterisation of lncRNAs," says Marcia.

Credit: 
European Molecular Biology Laboratory

Visits + phones = better outcomes for teens, young women with pelvic inflammatory disease

image: Two sexually transmitted infections, Neisseria gonorrhoeae (left) and Chlamydia trachomatis (right), which can lead to pelvic inflammatory disease (PID). A Johns Hopkins Medicine-led clinical trial shows that a patient-centered, technology-enhanced care program improves PID outcomes in underserved populations.

Image: 
N. gonorrhoeae image from the National Institute of Allergy and Infectious Diseases and Chlamydia trachomatis image in the public domain via Wikimedia Commons.

A patient-centered, community-engaged program featuring home visits by nurses and mobile phone links to caregivers works better than traditional adult-focused and patient self-managed care systems for treating and managing pelvic inflammatory disease, or PID, among historically underserved teens and young women, a Johns Hopkins Medicine study shows.

In a report on the study appearing in a recent issue of the Journal of the American Medical Association Network Open, Johns Hopkins researchers say that traditional approaches to PID treatment and follow-up care do not address significant age, race and income disparities associated with a population at high risk for recurring sexually transmitted infections (STIs) and subsequent PID.

PID, which can damage female reproductive organs, annually affects an estimated 90,000 women under age 25 in the United States. Left untreated, it can cause scarring of the Fallopian tubes, chronic pelvic pain, ectopic pregnancies, and in severe cases, infertility. According to the Centers for Disease Control and Prevention, 1 in 8 women with a history of PID experience difficulties getting pregnant.

Hoping to reduce PID numbers and impacts, the Johns Hopkins team tested an innovative care program known as technology-enhanced community health nursing (TECH-N) intervention.

"We've known for some time that PID disproportionately affects females between the ages of 13 and 25, and strikes hardest in low-income, minority and urban populations, yet we have continued to treat everyone with the disease in the same manner," says Maria Trent, M.D., M.P.H., professor of pediatrics at the Johns Hopkins University School of Medicine and lead author of the JAMA Network Open paper. "Our study shows that TECH-N gives health care providers a practical, more effective way to team with younger patients to address their specific treatment and follow-up care needs."

To define and measure the impact of a TECH-N program compared with the traditional "standard of care" in treating and managing PID, the researchers conducted a randomized clinical trial of 286 participants, ages 13 to 25, with mild to moderate PID. The patients, primarily from low-income and African American populations in Baltimore, were recruited over a four-year period from pediatric/adolescent medical clinics and adult emergency departments associated with the Johns Hopkins University School of Medicine.

Initially, the study participants completed a computer-assisted audio self-interview and were randomly placed into either the standard-of-care or TECH-N intervention groups. All of the patients were tested for two STIs, Neisseria gonorrhoeae and Chlamydia trachomatis; received a full course of antibiotic medication based on federal treatment guidelines; and were given traditional discharge instructions for post-treatment self-care and follow-up visits with health care providers.

Unlike the control group, which received only standard-of-care treatment and follow-up, TECH-N participants also received daily automated text messages, on their own cell phones or on study-provided prepaid phones, for two weeks after their first treatment session. These messages reminded them to take their medicine and prompted them to confirm to caregivers that they had done so.

After the 14-day period, the TECH-N group received booster text messages weekly for a month about STI management and prevention.

Additionally, the TECH-N group members received follow-up, in-home visits from a community health nurse at five, 14, 30 and 90 days post-enrollment. During these sessions, nurses examined the patients, collected specimens for STI detection and discussed STI risk reduction tactics, such as proper condom use and partner notification/treatment.
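To make the intervention timeline easier to see at a glance, the contact schedule described in the preceding paragraphs can be laid out as day offsets from enrollment. This is a minimal illustrative sketch; the event names and the day-offset representation are ours, not the study's:

```python
from dataclasses import dataclass

# Illustrative sketch of the TECH-N contact schedule described above.
# Event names and the day-offset model are assumptions for illustration,
# not artifacts of the study itself.

@dataclass
class Contact:
    day: int   # days after enrollment / first treatment session
    kind: str  # "daily-sms", "booster-sms", or "nurse-visit"

def tech_n_schedule():
    events = []
    # Daily automated medication-reminder texts for the first 14 days.
    events += [Contact(day=d, kind="daily-sms") for d in range(1, 15)]
    # Weekly booster texts about STI management/prevention for the next month.
    events += [Contact(day=14 + 7 * w, kind="booster-sms") for w in range(1, 5)]
    # Community health nurse home visits at 5, 14, 30 and 90 days post-enrollment.
    events += [Contact(day=d, kind="nurse-visit") for d in (5, 14, 30, 90)]
    return sorted(events, key=lambda e: e.day)

for e in tech_n_schedule():
    print(f"day {e.day:3d}: {e.kind}")
```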

The TECH-N group showed a significant decline in new STIs over the full 90-day study period: 28%, compared with 14% for the standard-of-care group. Condom use was also higher among those receiving the TECH-N intervention: 21% versus 11%.

"These findings show that TECH-N can provide the close, personalized follow-up and support needed after STI diagnosis to successfully treat and manage PID, and subsequently prevent longer-term complications, in a group of urban adolescents and young women currently experiencing a disparity in that care," Trent says.

In spite of economic disparities in the TECH-N study's target population, smartphone technology proved a successful means of reaching participants, Trent adds.

"National data demonstrate that 95% percent of sexually active adolescent and young adult women, including those in low-income households, have access to mobile phones with text messaging and internet, and that they are online 'almost constantly,'" Trent says. "In fact, in our study, only 3% of the intervention participants required us to provide them with a cellphone at enrollment."

Based on this first trial's encouraging outcomes, Trent says TECH-N intervention should be seriously considered as an enhancement of current standard-of-care approaches.

"Urban adolescents and young women with PID often struggle to manage the disease on their own, which can escalate to more severe infections and complications such as infertility," Trent says. "Patients also may continue to engage in behaviors that lead to recurring STIs and PID incidence. This study shows the potential for TECH-N-type programs to break that cycle."

Credit: 
Johns Hopkins Medicine

New clinical data highlight unique enzymatic activity of B. infantis EVC001 in infant gut

image: B. infantis interacts with lactoferrin, IgA or other milk proteins to produce lactate and acetate.

Image: 
Evolve BioSystems, Inc.

Evolve BioSystems, Inc. today announced new data published this week in the Journal of Functional Foods, showing for the first time that term, breastfed infants fed activated Bifidobacterium longum subsp. infantis EVC001 (B. infantis EVC001) experience improved metabolism of protein-bound glycans from human milk, compared with matched controls. Not only do these new data provide a greater mechanistic understanding of how B. infantis selectively utilizes the glycans from human milk as a growth substrate, but they may also provide a basis for facilitating B. infantis colonization in formula-fed infants via utilization of bovine-derived N-glycans. The paper is available at: https://www.sciencedirect.com/science/article/pii/S1756464619304098

Although EndoBI-1, an enzyme unique to B. infantis, has previously been shown to release N-glycans from native bovine milk glycoproteins in vitro, this is the first study to clinically demonstrate this activity in breastfed infants fed B. infantis EVC001. These findings indicate that N-glycans from milk glycoproteins, whether derived from human or bovine sources, may contribute to colonization of B. infantis EVC001 in the infant gut.

B. infantis is an infant-adapted gut bacterium uniquely capable of dominating the infant gut during exclusive breastfeeding, mainly through the well-documented utilization of free human milk oligosaccharides (HMOs). This key infant gut symbiont has been shown to create a protective environment in the infant gut during the early months of life via pathogen suppression, resulting in reduced virulence factors, a decreased abundance of antibiotic resistance genes, and reductions in colonic mucin degradation (1-4). These new data show that B. infantis is also capable of selectively releasing protein-bound N-glycans from human milk glycoproteins in vivo. N-glycans from lactoferrin, in particular, are known to contribute to the antimicrobial activity of this protein, and other glycans released by this enzyme may play an additional role in supporting the growth of B. infantis EVC001 in vivo.

"We are thrilled to see that the earlier in vitro work on this unique enzymatic activity of B. infantis is now demonstrated clinically here in breastfed infants," says Dr. Steven Frese, Director of Microbiology at Evolve BioSystems and corresponding author of the study. "The important role and mechanism of this key infant gut bacterium in early life are becoming increasingly clear, but there is published evidence suggesting that the prevalence of B. infantis in US-born infants has been rapidly declining over the past 100 years (5). Therefore, these data are encouraging as we continue to develop effective methods of addressing infant gut dysbiosis. This work was really made possible because of an international collaboration with scientists at Çanakkale Onsekiz Mart University in Turkey, who have extensively characterized this enzyme."

Credit: 
MSR Communications