Tech

Russian chemists develop polymer cathodes for ultrafast batteries

Scientists are searching for lithium technology alternatives in the face of the surging demand for lithium-ion batteries and limited lithium reserves. Russian researchers from Skoltech, D. Mendeleev University, and the Institute of Problems of Chemical Physics of RAS have synthesized and tested new polymer-based cathode materials for lithium dual-ion batteries. The tests showed that the new cathodes withstand up to 25,000 operating cycles and charge in a matter of seconds, thus outperforming lithium-ion batteries. The cathodes can also be used to produce less expensive potassium dual-ion batteries. The research was published in the journal Energy Technology.

The amount of electricity consumed worldwide grows by the year, and so does the demand for energy storage solutions since many devices often operate in autonomous mode. Lithium-ion batteries can generate enormous power while showing fairly high discharge and charge rates and storage capacity per unit mass, making them a popular storage device in electronics, electric transport, and global power grids. For instance, Australia is launching a series of large-scale lithium-ion battery storage projects to manage excess solar and wind energy.

If lithium-ion batteries continue to be produced in growing quantities, the world may sooner or later run out of lithium reserves. With Congo producing 60% of the cobalt used in lithium-ion batteries' cathodes, cobalt prices may skyrocket. Lithium poses problems of its own: the water consumption of lithium mining is a great challenge for the environment. Therefore, researchers are looking for new energy storage devices that rely on more accessible materials while using the same operating principle as lithium-ion batteries.

The team used a promising post-lithium dual-ion technology, based on electrochemical processes involving both the anions and the cations of the electrolyte, to attain a manifold increase in charging rate compared with lithium-ion batteries. Another plus is that the cathode prototypes were made of polymeric aromatic amines synthesized from various organic compounds.

"Our previous research addressed polymer cathodes for ultra-fast high-capacity batteries that can be charged and discharged in a few seconds, but we wanted more," says Filipp A. Obrezkov, a Skoltech Ph.D. student and the first author of the paper. "We used various alternatives, including linear polymers, in which each monomeric unit bonds with two neighbors only. In this study, we went on to study new branched polymers where each unit bonds with at least three other units. Together they form large mesh structures that ensure faster kinetics of the electrode processes. Electrodes made of these materials display even higher charge and discharge rates."

A standard lithium-ion cell is filled with a lithium-containing electrolyte and divided into the anode and the cathode by a separator. In a charged battery, most of the lithium is incorporated in the anode's crystal structure. As the battery discharges, lithium ions move from the anode to the cathode through the separator. The Russian team studied dual-ion batteries, in which the electrochemical processes involve both the electrolyte's cations (here, lithium cations), which enter and leave the anode material's structure, and its anions, which enter and leave the cathode material's structure.

Another novel feature is that, in some experiments, the scientists used potassium electrolytes instead of expensive lithium ones to obtain potassium dual-ion batteries.

The team synthesized two novel copolymers of dihydrophenazine, one with diphenylamine (PDPAPZ) and the other with phenothiazine (PPTZPZ), and used them to produce cathodes. As anodes, they used metallic lithium and potassium. Because the cathode determines the key characteristics of these battery prototypes, called half-cells, the scientists assembled them to quickly assess the capabilities of the new cathode materials.

While PPTZPZ half-cells showed average performance, PDPAPZ turned out to be more efficient: lithium half-cells with PDPAPZ were fairly quick to charge and discharge while displaying good stability and retaining up to a third of their capacity even after 25,000 operating cycles. If a regular phone battery were as stable, it could be charged and discharged daily for 70 years. PDPAPZ potassium half-cells exhibited a high energy density of 398 Wh/kg. For comparison, the value for common lithium cells is 200-250 Wh/kg, with the anode and electrolyte weights included. Thus the Russian team demonstrated that polymer cathode materials can be used to create efficient lithium and potassium dual-ion batteries.
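The headline numbers above follow from simple arithmetic. The short Python sketch below is only an illustrative check: the cycle count, energy densities and daily-charging scenario come from the text, while the rounding is ours.

```python
# Rough arithmetic behind the quoted battery figures (values from the article; rounding is ours).

cycles = 25_000                          # demonstrated operating cycles for PDPAPZ lithium half-cells
years_of_daily_use = cycles / 365        # one full charge/discharge per day
print(f"{years_of_daily_use:.0f} years of daily cycling")          # ~68, i.e. roughly 70 years

potassium_half_cell = 398                # Wh/kg, PDPAPZ potassium half-cell
common_li_ion = (200, 250)               # Wh/kg, typical lithium cells per the article
print(f"{potassium_half_cell / common_li_ion[1]:.1f}x to "
      f"{potassium_half_cell / common_li_ion[0]:.1f}x the energy density of common cells")
```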

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

How dietary choice influences lifespan in fruit flies

image: A fruit fly feeding on a banana

Image: 
Sanjay Acharya (CC BY-SA 4.0 - https://creativecommons.org/licenses/by-sa/4.0)

Having a choice of foods may accelerate aging and shorten the lifespan of fruit flies, according to a study published today in the open-access eLife journal.

While earlier experiments have shown that calorie restriction can extend lifespan, the current study adds to a growing body of evidence suggesting that diet characteristics other than calorie content may also influence aging and lead to earlier death.

"It has been recognised for nearly a century that diet modulates aging," says first author Yang Lyu, Postdoctoral Fellow at the Department of Molecular & Integrative Physiology, University of Michigan Medical School, Ann Arbor, US. "For our study, we wanted to see if having a choice of foods affects metabolism and lifespan in the fruit fly Drosophila melanogaster."

Lyu and colleagues began by giving fruit flies either a diet of brewer's yeast and sugar mixed together, or a choice of the two nutrients separately. They found that having the choice between foods changed the flies' behaviour and metabolism, and shortened their lifespan, independent of how much they ate.

These changes indicated that the flies have distinct neural and metabolic states when given a choice between foods, in comparison to when they are presented with one combined food option. Further experiments by the team showed that a choice between meals causes rapid metabolic reprogramming, and this is controlled by an increase in serotonin 2A receptor signalling in the brain.

The researchers then inhibited the expression of an enzyme called glutamate dehydrogenase, which is key to these metabolic changes, and found that doing so reversed the life-shortening effects of food choice on the flies.

"Our results reveal a mechanism of aging caused by fruit flies and other organisms needing to evaluate nutrient availability to improve their survival and wellbeing," Lyu says.

Since many of the same metabolic pathways also exist in humans, the authors say it is possible that similar mechanisms may be involved in human aging. But more studies will be needed to confirm this.

"It could be that the act of making decisions about food itself is costly to the flies," concludes senior author Scott Pletcher, the William H. Howell Collegiate Professor of Physiology, and Research Professor at the Institute of Gerontology, University of Michigan Medical School. "We know that in humans, having many choices can be stressful. But while short-term stress can be beneficial, chronic stress is often detrimental to lifespan and health."

Credit: 
eLife

Alcohols exhibit quantum effects

Skoltech scientists and their colleagues from the Russian Quantum Center revealed a significant role of nuclear quantum effects in the polarization of alcohol in an external electric field. Their research findings are published in The Journal of Physical Chemistry.

Molecular liquids, such as water or alcohols, are known to be polar. Polarity results from a charge separation mechanism whose microscopic description still has open questions. In fact, the basic description of the polarization rests on a century-old concept: the dielectric polarization is attributed to the molecular dipole moments arising from the hydroxyl functional group (-OH). Orientations of these dipoles would explain the high polarizability of alcohols and the correspondingly high dielectric constant. Still, the discrepancy between the measured dielectric constants and those determined by calculations shows that other mechanisms, not considered so far, may play an important role too. As the exact mechanism of the dielectric response of alcohols is still unclear, new ideas should be proposed and tested.

"To address the problem, we experimentally investigated and compared the dielectric responses of a series of monohydric alcohols with different molecular chain lengths and found remarkable similarities which could not be explained by the conventional mechanism of rotating molecular dipoles," says Dr. Ryzhov, a Skoltech Research Scientist in charge of the experimental part of the study. "Notwithstanding the conventional wisdom, we found that the basic mechanism of the dielectric polarization in alcohols to be of a quantum mechanical nature: the tunneling of excess protons and the consequent formation of intermolecular dipoles with proton-holes. These dipoles are the actual ones that determine the dielectric response from dc up to THz, irrespective of the molecule geometry, hence orientation", adds Professor Ouerdane from the Skoltech Center for Energy Science and Technology (CEST). "Our research provides new insight into the properties of liquid dielectrics. The core assumption of our model pertains to a novel understanding of dielectric polarization phenomena in polar liquids employing nuclear quantum effects," concludes Vasily Artemov, a Senior Research Scientist at Skoltech and the leading author of the paper.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Decoding breast milk to make better baby formula (video)

image: What makes breast milk so good for babies? In this episode of Reactions, our host, Sam, chats with chemist Steven Townsend, Ph.D., who's trying to figure out which sugar molecules in breast milk make it so unique and difficult to mimic: https://youtu.be/o4_npLDyyUw.

Image: 
The American Chemical Society

WASHINGTON, Jan. 19, 2021 -- What makes breast milk so good for babies? In this episode of Reactions, our host, Sam, chats with chemist Steven Townsend, Ph.D., who's trying to figure out which sugar molecules in breast milk make it so unique and difficult to mimic: https://youtu.be/o4_npLDyyUw.

Credit: 
American Chemical Society

Light-induced twisting of Weyl nodes switches on giant electron current

image: Schematic of light-induced formation of Weyl points in a Dirac material of ZrTe5. Jigang Wang and collaborators report how coherently twisted lattice motion by laser pulses, i.e., a phononic switch, can control the crystal inversion symmetry and photogenerate giant low dissipation current with an exceptional ballistic transport protected by induced Weyl band topology.

Image: 
U.S. Department of Energy, Ames Laboratory

Scientists at the U.S. Department of Energy's Ames Laboratory and collaborators at Brookhaven National Laboratory and the University of Alabama at Birmingham have discovered a new light-induced switch that twists the crystal lattice of the material, switching on a giant electron current that appears to be nearly dissipationless. The discovery was made in a category of topological materials that holds great promise for spintronics, topological effect transistors, and quantum computing.

Weyl and Dirac semimetals can host exotic, nearly dissipationless electron conduction properties that take advantage of a unique state in the crystal lattice and electronic structure of the material, which protects the electrons from dissipation. These anomalous electron transport channels, protected by symmetry and topology, don't normally occur in conventional metals such as copper. After decades of being described only in the context of theoretical physics, there is growing interest in fabricating, exploring, refining, and controlling their topologically protected electronic properties for device applications. For example, wide-scale adoption of quantum computing requires building devices in which fragile quantum states are protected from impurities and noisy environments. One approach to achieving this is through the development of topological quantum computation, in which qubits are based on "symmetry-protected" dissipationless electric currents that are immune to noise.

"Light-induced lattice twisting, or a phononic switch, can control the crystal inversion symmetry and photogenerate giant electric current with very small resistance," said Jigang Wang, senior scientist at Ames Laboratory and professor of physics at Iowa State University. "This new control principle does not require static electric or magnetic fields, and has much faster speeds and lower energy cost."

"This finding could be extended to a new quantum computing principle based on the chiral physics and dissipationless energy transport, which may run much faster speeds, lower energy cost and high operation temperature." said Liang Luo, a scientist at Ames Laboratory and first author of the paper.

Wang, Luo, and their colleagues accomplished just that, using terahertz (one trillion cycles per second) laser light spectroscopy to examine and nudge these materials into revealing the symmetry switching mechanisms of their properties.

In this experiment, the team altered the symmetry of the electronic structure of the material, using laser pulses to twist the lattice arrangement of the crystal. This light switch enables "Weyl points" in the material, causing electrons to behave as massless particles that can carry the protected, low dissipation current that is sought after.

"We achieved this giant dissipationless current by driving periodic motions of atoms around their equilibrium position in order to break crystal inversion symmetry," says Ilias Perakis, professor of physics and chair at the University of Alabama at Birmingham. "This light-induced Weyl semimetal transport and topology control principle appears to be universal and will be very useful in the development of future quantum computing and electronics with high speed and low energy consumption."

"What we've lacked until now is a low energy and fast switch to induce and control symmetry of these materials," said Qiang Li, Group leader of the Brookhaven National Laboratory's Advanced Energy Materials Group. "Our discovery of a light symmetry switch opens a fascinating opportunity to carry dissipationless electron current, a topologically protected state that doesn't weaken or slow down when it bumps into imperfections and impurities in the material."

Credit: 
DOE/Ames National Laboratory

Scientists to global policymakers: Treat fish as food to help solve world hunger

image: Women at market gather around catch from Lake Malawi

Image: 
Abigail Bennett, Michigan State University Center for Systems Integration and Sustainability

Scientists are urging global policymakers and funders to think of fish as a solution to food insecurity and malnutrition, and not just as a natural resource that provides income and livelihoods, in a newly published paper in the peer-reviewed journal Ambio. Titled "Recognize fish as food in policy discourse and development funding," the paper argues for viewing fish from a food systems perspective to broaden the conversation on food and nutrition security and equity, especially as global food systems face increasing threats from climate change.

The "Fish as Food" paper, authored by scientists and policy experts from Michigan State University, Duke University, Harvard University, World Bank and Environmental Defense Fund, among others, notes the global development community is not on track to meet goals for alleviating malnutrition. According to the U.N. Food and Agriculture Organization, the number of malnourished people in the world will increase from 678 million in 2018 to 841 million in 2030 if current trends continue -- an estimate not accounting for effects of the COVID-19 pandemic. Fish provide 17% of the animal protein consumed globally and are rich in micronutrients, essential fatty acids and protein essential for cognitive development and maternal and childhood health, especially for communities in developing countries where fish may be the only source of key nutrients. Yet fish is largely missing from key global food policy discussions and decision-making.

"Fish has always been food. But in this paper, we lay out an agenda for enhancing the role of fish in addressing hunger and malnutrition," says Abigail Bennett, assistant professor in the Center for Systems Integration and Sustainability in the Department of Fisheries and Wildlife at Michigan State University. "We are urging the international development community not only to see fish as food but to recognize fish as a nutrient-rich food that can make a difference for the well-being of the world's poor and vulnerable. What kinds of new knowledge, policies and interventions will be required to support that role for fish?" she adds.

The United Nations' Sustainable Development Goal 2, Zero Hunger, does not mention fisheries or aquaculture by name, nor does it offer specific guidance on fish production systems. Fish also appear underrepresented in international development funding priorities, such as by the World Bank, the paper finds.

"Fish -- and aquatic foods in general -- are largely ignored in the food policy dialogue," says Kristin Kleisner, lead senior scientist for Environmental Defense Fund Oceans program and a co-author of the paper. "This is a huge oversight, as fish offer a critical source of nutrition unparalleled by any other type of food, and it is often the only source of key nutrients for vulnerable populations around the world.

"By refocusing on nutrition, in addition to the many other benefits fisheries provide, we're amplifying a call to action for governments, international development organizations and society more broadly to invest in the sustainability of capture fisheries and aquaculture," adds Kleisner.

"Fisheries will be ever more important as the world faces mounting challenges to feed itself," says Kelly Brownell, director of the World Food Policy Center at Duke University.

Global policymakers and funders framing fish as food, the authors state, can encourage innovative policies and actions to support the role of fish in global food and nutrition security.

The paper identifies four pillars of suggested action to begin framing fish as food, not just a natural resource. These pillars are:

1. Improve metrics. There is currently a paucity of metrics to assess and communicate the contributions of fish to food and nutrition security. Governments and researchers can collaborate to develop better tools to raise the profile of fish in broader food and nutrition security policies and investment priorities.

2. Promote nutrition-sensitive fish food systems. Current management regimes emphasize the "maximum sustainable yield" for a given fishery. Managing for "optimal nutritional yield" would focus on not just rebuilding and conserving fish populations -- an important goal in and of itself -- but also on sustainably managing nutrient-rich fisheries.

3. Govern distribution. Availability, access and stability are key features of food and nutrition security. Even though fish is one of the most traded food commodities in the world, there is limited information about its distribution and links to nutrition security. There is also a need to promote equitable distribution of capital and of property rights to access fisheries, particularly in ways that recognize the importance of small-scale fisheries and the roles women play in the fishing and aquaculture sectors.

4. Situate fish in a food systems framework. Policymakers need the tools to conceptualize fishing and aquaculture as components of the food systems framework. A "fish as food" framing requires a better understanding of the connections among fish production and distribution, terrestrial agriculture and planetary health.

Sustainable fisheries and aquaculture are key to feeding the world and alleviating malnutrition and already provide valuable nutrition and livelihood contributions. Including a nutrition lens when illustrating the multiple benefits of sustainable fisheries production can help to elevate the importance and impact of fish as a key component of the global food system and to ensure that we do not fall behind in global food security targets.

Credit: 
Michigan State University

Research finds tiny bubbles tell tales of big volcanic eruptions

image: Sahand Hajimirza is a postdoctoral research associate in Rice University's Department of Earth, Environmental and Planetary Sciences.

Image: 
Photo courtesy of S. Hajimirza/Rice University

HOUSTON - (Jan. 19, 2021) - Microscopic bubbles can tell stories about Earth's biggest volcanic eruptions, and geoscientists from Rice University and the University of Texas at Austin have discovered that some of those stories are written in nanoparticles.

In an open-access study published online in Nature Communications, Rice's Sahand Hajimirza and Helge Gonnermann and UT Austin's James Gardner answered a longstanding question about explosive volcanic eruptions like the ones at Mount St. Helens in 1980, the Philippines' Mount Pinatubo in 1991 or Chile's Mount Chaitén in 2008.

Geoscientists have long sought to use tiny bubbles in erupted lava and ash to reconstruct some of the conditions, like heat and pressure, that occur in these powerful eruptions. But there's been a historic disconnect between numerical models that predict how many bubbles will form and the actual amounts of bubbles measured in erupted rocks.

Hajimirza, Gonnermann and Gardner worked for more than five years to reconcile those differences for Plinian eruptions. Named in honor of Pliny the Younger, the Roman author who described the eruption that destroyed Pompeii in A.D. 79, Plinian eruptions are some of the most intense and destructive volcanic events.

"Eruption intensity refers to the both the amount of magma that's erupted and how quickly it comes out," said Hajimirza, a postdoctoral researcher and former Ph.D. student in Gonnermann's lab at Rice's Department of Earth, Environmental and Planetary Sciences. "The typical intensity of Plinian eruptions ranges from about 10 million kilograms per second to 10 billion kilograms per second. That is equivalent to 5,000 to 5 million pickup trucks per second."

One way scientists can gauge the speed of rising magma is by studying microscopic bubbles in erupted lava and ash. Like bubbles in uncorked champagne, magma bubbles are created by a rapid decrease in pressure. In magma, this causes dissolved water to escape in the form of gas bubbles.

"As magma rises, its pressure decreases," Hajimirza said. "At some point, it reaches a pressure at which water is saturated, and further decompression causes supersaturation and the formation of bubbles."

As water escapes in the form of bubbles, the molten rock becomes less saturated. But if the magma continues to rise, decreasing pressure increases saturation.

"This feedback determines how many bubbles form," Hajimirza said. "The faster the magma rises, the higher the decompression rate and supersaturation pressure, and the more abundant the nucleated bubbles."

In Plinian eruptions, so much magma rises so fast that the number of bubbles is staggering. When Mount St. Helens erupted on May 18, 1980, for example, it spewed more than one cubic kilometer of rock and ash in nine hours, and there were about one million billion bubbles in each cubic meter of that erupted material.

"The total bubbles would be around a septillion," Hajimirza said. "That's a one followed by 24 zeros, or about 1,000 times more than all the grains of sand on all Earth's beaches."

In his Ph.D. studies, Hajimirza developed a predictive model for bubble formation and worked with Gardner to test the model in experiments at UT Austin. The new study builds upon that work by examining how magnetite crystals no larger than a few billionths of a meter could change how bubbles form at various depths.

"When bubbles nucleate, they can form in liquid, which we call homogeneous nucleation, or they can nucleate on a solid surface, which we call heterogeneous," Hajimirza said. "A daily life example would be boiling a pot of water. When bubbles form on the bottom of the pot, rather than in the liquid water, that is heterogeneous nucleation."

Bubbles from the bottom of the pot are often the first to form, because heterogeneous and homogeneous nucleation typically begin at different temperatures. In rising magma, heterogeneous bubble formation begins earlier, at lower supersaturation levels. And the surfaces where bubbles nucleate are often on tiny crystals.

"How much they facilitate nucleation depends on the type of crystals," Hajimirza said. "Magnetites, in particular, are the most effective."

In the study, Hajimirza, Gonnermann and Gardner incorporated magnetite-mediated nucleation in numerical models of bubble formation and found the models produced results that agreed with observational data from Plinian eruptions.

Hajimirza said magnetites are likely present in all Plinian magma. And while previous research hasn't revealed enough magnetites to account for all observed bubbles, previous studies may have missed small nanocrystals that could only be revealed with transmission electron microscopy, a rarely used technique that is only now becoming more broadly available.

To find out if that's the case, Hajimirza, Gonnermann and Gardner called for a "systematic search for magnetite nanolites" in material from Plinian eruptions. That would provide observational data to better define the role of magnetites and heterogeneous nucleation in bubble formation, and could lead to better models and improved volcanic forecasts.

"Forecasting eruptions is a long-term goal for volcanologists, but it's challenging because we cannot directly observe subsurface processes," said Hajimirza. "One of the grand challenges of volcano science, as outlined by the National Academies in 2017, is improving eruption forecasting by better integration of the observational data we have with the quantitative models, like the one we developed for this study."

Credit: 
Rice University

A new carbon budget framework provides a clearer view of our climate deadlines

image: Damon Matthews: "The wide range of carbon budget estimates in the literature has contributed to both confusion and inaction in climate policy circles."

Image: 
Concordia University

Just how close are the world's countries to achieving the Paris Agreement target of keeping climate change limited to a 1.5°C increase above pre-industrial levels?

It's a tricky question with a complex answer. One approach is to use the remaining carbon budget to gauge how many more tonnes of carbon dioxide we can still emit and have a chance of staying under the target laid out by the 2015 international accord. However, estimates of the remaining carbon budget have varied considerably in previous studies because of inconsistent approaches and assumptions used by researchers.

The Nature Portfolio journal Communications Earth & Environment just published a paper by a group of researchers led by Damon Matthews, professor in the Department of Geography, Planning and Environment. In it, they present a new framework for calculating the remaining carbon budget that generates a much narrower estimate of the budget and its uncertainty.

The researchers estimate that between 230 and 440 billion more tonnes of CO2 from 2020 onwards can be emitted into the atmosphere and still provide a reasonable chance of limiting global warming to 1.5°C. This is the same as five to 10 years of current emission levels.
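The "five to 10 years" comparison can be checked with one line of arithmetic. The sketch below assumes recent global emissions of roughly 42 billion tonnes of CO2 per year (our assumption for illustration; the budget range is from the study).

```python
# Remaining carbon budget expressed as years of current emissions.
# The ~42 Gt CO2/yr annual-emissions figure is an assumption for illustration only.

annual_emissions_gt = 42.0
for budget_gt in (230, 440):             # remaining budget from 2020 onward, Gt CO2
    print(f"{budget_gt} Gt ≈ {budget_gt / annual_emissions_gt:.1f} years at current rates")
# roughly 5.5 and 10.5 years, consistent with the "five to 10 years" in the text
```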

"The wide range of carbon budget estimates in the literature has contributed to both confusion and inaction in climate policy circles," explains Matthews, the Concordia Research Chair in Climate Science and Sustainability. "This is the first time we have gone through all the uncertainties and included them in a single estimate."

Uncertainties included

Matthews identifies five key uncertain parameters affecting the remaining carbon budget.

The first is the amount of observed warming that has occurred to date; the second is the amount of CO2 that has been emitted over the past 150 years; the third uncertainty is the amount of warming we are experiencing that is due to CO2 vs. non-CO2 greenhouse gas emissions; fourth is the future non-CO2 contributions to warming; and last is the amount of warming that has not yet occurred as a result of emissions already in the atmosphere.

Using a new set of equations, the researchers were able to relate these parameters to each other and calculate a unified distribution of the remaining carbon budget.

The 440 billion tonnes of CO2 is a median estimate, however, giving us a 50/50 chance of meeting the 1.5°C target. The researchers' uncertainty range runs from 230 billion tonnes before net-zero, which would give us a 67 per cent chance of meeting the target, to 670 billion tonnes for a one-in-three chance.
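To give a feel for how several uncertain inputs can be combined into a single budget distribution with probability levels like those above, here is a generic Monte Carlo sketch. The simplified relation and every distribution in it are placeholders chosen for illustration; they are not the study's equations or values.

```python
# Generic illustration of propagating several uncertain parameters into one budget
# distribution via Monte Carlo sampling. All numbers below are placeholders, NOT the
# study's values, and the simple linear relation is not the paper's actual framework.
import random

def sample_budget_gt():
    warming_target = 1.5                           # degC, Paris target
    warming_to_date = random.gauss(1.1, 0.10)      # placeholder: observed warming and its uncertainty
    future_non_co2 = random.gauss(0.1, 0.05)       # placeholder: future non-CO2 warming
    unrealized = random.gauss(0.0, 0.05)           # placeholder: warming still "in the pipeline"
    tcre = random.gauss(0.45, 0.10)                # placeholder: degC of warming per 1000 Gt CO2
    remaining_warming = warming_target - warming_to_date - future_non_co2 - unrealized
    return 1000.0 * remaining_warming / tcre       # remaining budget, Gt CO2

samples = sorted(sample_budget_gt() for _ in range(100_000))
p33, p50, p67 = (samples[int(q * len(samples))] for q in (0.33, 0.50, 0.67))
print(f"33rd / 50th / 67th percentiles: {p33:.0f} / {p50:.0f} / {p67:.0f} Gt CO2 (placeholder values)")
```

In such a distribution, the 33rd, 50th and 67th percentiles correspond to budgets with a two-in-three, 50/50 and one-in-three chance of staying below the target, the probability levels discussed above.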

These numbers are based on accounting for geophysical uncertainties (those related to scientific understanding of the climate system), but not socioeconomic ones (those relating to human decisions and socioeconomic systems). The decisions humans make in the near-term matter greatly and have the potential to either increase or decrease the size of the remaining carbon budget. In the new framework, these decisions could add (or remove) as much as 170 billion tonnes of CO2 to the median carbon budget estimate.

A window of opportunity

The COVID-19 pandemic has presented humans with an opportunity, Matthews argues. Emissions in 2020 dropped noticeably from 2019 levels, due in large part to reduced human mobility. If we are able to direct recovery investments in ways that continue this decrease (rather than allowing emissions to rebound), we would greatly increase our chances of staying under the 1.5°C Paris Agreement target.

Another source of cautious optimism lies with the incoming Biden administration in the United States, which has made climate change a priority.

"I am optimistic that having national leadership in the US that can mobilize efforts on climate change will make a big difference over the coming years," Matthews adds. "The momentum is shifting in the right direction, but it is still not happening fast enough."

Credit: 
Concordia University

Moffitt researchers identify how cancer cells adapt to survive harsh tumor microenvironments

TAMPA, Fla. - Cells need energy to survive and thrive. Generally, if oxygen is available, cells will oxidize glucose to carbon dioxide, which is very efficient, much like burning gasoline in your car. However, even in the presence of adequate oxygen, many malignant cells choose instead to ferment glucose to lactic acid, a much less efficient process. This metabolic adaptation is referred to as the Warburg Effect, as it was first described by Otto Warburg almost a century ago. Ever since, the conditions that would evolutionarily select for cells exhibiting a Warburg Effect have been debated, since fermentation is far less efficient and produces toxic waste products.

"The Warburg Effect is misunderstood because it doesn't make sense that a cell would ferment glucose when it could get much more energy by oxidizing it. Our current study goes to the heart of this problem by defining the microenvironmental conditions that exist in early cancers that would select for a Warburg phenotype. This is important because such cells are much more aggressive and likely to lead to cancers that are lethal," said Mehdi Damaghi, Ph.D., study first author and research scientist in the Cancer Physiology Department at Moffitt Cancer Center.

To better understand the conditions that select for the Warburg Effect and the mechanisms by which cells express this metabolic adaptation, Moffitt researchers subjected nonmalignant cells to the harsh tumor microenvironment that is present during early carcinogenesis, known as ductal carcinoma in situ (DCIS). DCIS is an uncontrolled growth of cells within the breast ducts. It is the earliest stage at which breast cancer can be diagnosed. Although it is considered noninvasive, it can lead to invasive cancer in a fraction of cases. In a new research article published in the Proceedings of the National Academy of Sciences, the Moffitt team shows that these conditions select for cells that express a Warburg Effect.

The investigators hypothesized that the complex interplay of factors in the harsh tumor microenvironment within DCIS, such as low nutrients, low oxygen and high acidity, may lead pre-malignant cells to express a Warburg phenotype in order to survive and thrive within these hostile conditions. To test their theory, the research team subjected low glycolytic breast cancer cells to these different microenvironment selection pressures (low oxygen, high acidity, low glucose and starvation) over 12 to 18 months. After this selection, individual clones of the cancer cells were isolated and characterized for their metabolic and transcriptomic profiles.

Their results indicate that the poor metabolic conditions in the tumor microenvironment of DCIS lead these pre-malignant cells to select for a Warburg phenotype through transcriptional reprogramming. In particular, the researchers found that the activation and stabilization of the transcription factor, KLF4, allowed for cancer cells to adapt to a phenotype that can survive the harsh environment. "Although KLF4 is clearly responsible for this phenotype in this particular system, we expect that different cell lineages will come up with their own approaches to solve this need for metabolic adaptation. We call this 'functional equivalence,' " said Robert Gillies, Ph.D., senior author and chair of the Cancer Physiology Department. "We have shown clearly that this phenotype is selected by harsh microenvironmental conditions."

Credit: 
H. Lee Moffitt Cancer Center & Research Institute

Mystery of Martian glaciers revealed

image: This image of a glacier on Mars shows the abundance of boulders within the ice.

High-resolution imaging of the surface of Mars suggests that debris-covered glacier deposits formed during multiple punctuated episodes of ice accumulation over long timescales. Debris-covered glacial landforms called lobate debris aprons (LDA) are widespread on Mars. It has not been clear whether these LDAs formed over the past 300-800 million years during a single long deposition period or during multiple short-lived episodes of ice accumulation.

To address this question, Joseph Levy and colleagues used high-resolution imaging to map boulders along 45 LDAs on the surface of Mars. The boulders are commonly clustered into bands across all LDAs, similar to boulders on ancient terrestrial debris-covered glaciers.

The findings point to multiple cycles of ice accumulation and advance over the past 300-800 million years, extending evidence for climate change on Mars beyond the 20-million-year window provided by numerical modeling.

Image: 
Joe Levy/Colgate University

In a new paper published today in the Proceedings of the National Academy of Sciences (PNAS), planetary geologist Joe Levy, assistant professor of geology at Colgate University, reveals a groundbreaking new analysis of the mysterious glaciers of Mars.

On Earth, glaciers covered wide swaths of the planet during the last Ice Age, which reached its peak about 20,000 years ago, before receding to the poles and leaving behind the rocks they had pushed along. On Mars, however, the glaciers never left, remaining frozen on the Red Planet's cold surface for more than 300 million years, covered in debris. "All the rocks and sand carried on that ice have remained on the surface," says Levy. "It's like putting the ice in a cooler under all those sediments."

Geologists, however, haven't been able to tell whether all of those glaciers formed during one massive Martian Ice Age, or in multiple separate events over millions of years. Since ice ages result from a shift in the tilt of a planet's axis (known as obliquity), answering that question could tell scientists how Mars' orbit and climate have changed over time -- as well as what kind of rocks, gases, or even microbes might be trapped inside the ice.

"There are really good models for Mars' orbital parameters for the last 20 million years," says Levy. "After that the models tend to get chaotic."

Levy concocted a plan to examine the rocks on the surface of the glaciers as a natural experiment. Since they presumably erode over time, a steady progression of larger to smaller rocks proceeding downhill would point to a single, long ice age event.

Choosing 45 glaciers to examine, Levy acquired high-resolution images collected by the Mars Reconnaissance Orbiter satellite and set out to count the size and number of rocks. With a resolution of 25 centimeters per pixel, "you can see things the size of a dinner table," Levy says.

Even at that magnification, however, artificial intelligence can't accurately determine what is or isn't a rock on rough glacier surfaces; so Levy enlisted the help of 10 Colgate students during two summers to count and measure some 60,000 big rocks. "We did a kind of virtual field work, walking up and down these glaciers and mapping the boulders," Levy says.

Levy initially panicked when, far from a tidy progression of boulders by size, the rock sizes seemed to be distributed at random. "In fact, the boulders were telling us a different story," Levy says. "It wasn't their size that mattered; it was how they were grouped or clustered."

Since the rocks were traveling inside the glaciers, they were not eroding, he realized. At the same time, they were distributed in clear bands of debris across the glaciers' surfaces, marking the limit of separate and distinct flows of ice, formed as Mars wobbled on its axis.

Based on that data, Levy has concluded that Mars has undergone somewhere between six and 20 separate ice ages during the past 300-800 million years. Those findings appear in PNAS, written along with six current or former Colgate students; Colgate mathematics professor Will Cipolli; and colleagues from NASA, the University of Arizona, Fitchburg State University, and the University of Texas-Austin.

"This paper is the first geological evidence of what Martian orbit and obliquity might have been doing for hundreds of millions of years," Levy says. The finding that glaciers formed over time holds implications for planetary geology and even space exploration, he explains. "These glaciers are little time capsules, capturing snapshots of what was blowing around in the Martian atmosphere," he says. "Now we know that we have access to hundreds of millions of years of Martian history without having to drill down deep through the crust -- we can just take a hike along the surface."

That history includes any signs of life potentially present from Mars' distant past. "If there are any biomarkers blowing around, those are going to be trapped in the ice too." At the same time, eventual explorers to Mars who might need to depend on extracting fresh water from glaciers to survive will need to know that there may be bands of rocks inside them that will make drilling hazardous. Levy and his colleagues are now in the process of mapping the rest of the glaciers on Mars' surface, hoping that with the data they already have, artificial intelligence can now be trained to take over the hard work of identifying and counting boulders.

That will bring us one step closer to a complete planetary history of the Red Planet -- including the age-old question of whether Mars could ever have supported life.

"There's a lot of work to be done figuring out the details of Martian climate history," says Levy, "including when and where it was warm enough and wet enough for there to be brines and liquid water."

Credit: 
Colgate University

A little friction goes a long way toward stronger nanotube fibers

image: Rice University researchers modeled the relationship between the length of carbon nanotubes and the friction-causing crosslinks between them in a fiber and found the ratio can be used to measure the fiber's strength.

Image: 
Illustration by Evgeni Penev/Rice University

HOUSTON - (Jan. 19, 2021) - Carbon nanotube fibers are not nearly as strong as the nanotubes they contain, but Rice University researchers are working to close the gap.

A computational model by materials theorist Boris Yakobson and his team at Rice's Brown School of Engineering establishes a universal scaling relationship between the length of the nanotubes in a bundle and the friction between them, parameters that can be used to fine-tune fiber properties for strength.

The model is a tool for scientists and engineers who develop conductive fibers for aerospace, automotive, medical and textile applications like smart clothing. Carbon nanotube fibers have been considered as a possible basis for a space elevator, a project Yakobson has studied.

The research is detailed in the American Chemical Society journal ACS Nano.

As grown, individual carbon nanotubes are basically rolled-up tubes of graphene, one of the strongest known materials. But when bundled, as Rice and other labs have been doing since 2013, the threadlike fibers are far weaker, about one-hundredth the strength of individual tubes, according to the researchers.

"One single nanotube is about the strongest thing you can imagine, because of its very strong carbon-carbon bonds," said Rice assistant research professor Evgeni Penev, a longtime member of the Yakobson group. "But when you start making things out of nanotubes, those things are much weaker than you would expect. Our question is, why? What can be done to resolve this disparity?"

The model demonstrates how the length of nanotubes and the friction between them are the best indicators of overall fiber strength, and suggests strategies toward making them better. One is to simply use longer nanotubes. Another is to increase the number of crosslinks between the tubes, either chemically or by electron irradiation to create defects that make carbon atoms available to bond.

The coarse-grained model quantifies the friction between nanotubes, specifically how it regulates slip when the fibers are under strain and how well connections between nanotubes are likely to recover after breaking. The balance between length and friction is important: The longer the nanotubes, the fewer crosslinks are needed, and vice versa.

"Lengthwise gaps are just a function of how long you can make the nanotubes," Penev said. "These gaps are essentially defects that cause the interfaces to slip when you start pulling on a bundle."

With that inherent weakness as a given, Penev and lead author Nitant Gupta, a Rice graduate student, began to look at the impact of crosslinks on strength. "We modeled the links as carbon dimers or short hydrocarbon chains, and when we started pulling them, we saw they would stretch and break," Penev said.

"What became clear was that the overall strength of this interface depends a lot on the capability of these crosslinks to heal," he said. "If they break and reconnect to the next available carbon as the nanotubes slip, there will be an effective friction between the tubes that makes the fiber stronger. That's the ideal case."

"We show the crosslink density and the length play similar roles, and we use the product of these two values to characterize the strength of the whole bundle," Gupta said, noting the model is available for download via the paper's supporting information.

Penev said braiding nanotubes or linking them like chains would also likely strengthen fibers. Those techniques are beyond the capabilities of the current model, but worth studying, he said.

Yakobson said there's great technological value in strengthening materials. "It's an ongoing, uphill battle in labs around the world, with every advance in GPa (gigapascal, a measure of tensile strength) a great achievement.

"Our theory puts numerous disparate data in clearer perspective, highlighting that there's still a long way to the summit of strength while also suggesting specific steps to experimenters," he said. "Or so we hope."

Credit: 
Rice University

Canadian researchers create new form of cultivated meat

image: A sample of meat cultivated by researchers at Canada's McMaster University, using cells from mice.

Image: 
McMaster University

HAMILTON, ON, Jan. 19, 2021 -- McMaster researchers have developed a new form of cultivated meat using a method that promises more natural flavour and texture than other alternatives to traditional meat from animals.

Researchers Ravi Selvaganapathy and Alireza Shahin-Shamsabadi, both of the university's School of Biomedical Engineering, have devised a way to make meat by stacking thin sheets of cultivated muscle and fat cells grown together in a lab setting. The technique is adapted from a method used to grow tissue for human transplants.

The sheets of living cells, each about the thickness of a sheet of printer paper, are first grown in culture and then concentrated on growth plates before being peeled off and stacked or folded together. The sheets naturally bond to one another before the cells die.

The layers can be stacked into a solid piece of any thickness, Selvaganapathy says, and "tuned" to replicate the fat content and marbling of any cut of meat - an advantage over other alternatives.

"We are creating slabs of meat," he says. "Consumers will be able to buy meat with whatever percentage of fat they like - just like they do with milk."

As they describe in the journal Cells Tissues Organs, the researchers proved the concept by making meat from available lines of mouse cells. Though they did not eat the mouse meat described in the research paper, they later made and cooked a sample of meat they created from rabbit cells.

"It felt and tasted just like meat," says Selvaganapathy.

There is no reason to think the same technology would not work for growing beef, pork or chicken, and the model would lend itself well to large-scale production, Selvaganapathy says.

The researchers were inspired by the meat-supply crisis in which worldwide demand is growing while current meat consumption is straining land and water resources and generating troubling levels of greenhouse gases.

"Meat production right now is not sustainable," Selvaganapathy says. "There has to be an alternative way of creating meat."

Producing viable meat without raising and harvesting animals would be far more sustainable, more sanitary and far less wasteful, the researchers point out. While other forms of cultured meat have previously been developed, the McMaster researchers believe theirs has the best potential for creating products consumers will accept, enjoy and afford.

The researchers have formed a start-up company to begin commercializing the technology.

Credit: 
McMaster University

Research news tip sheet: Story ideas from Johns Hopkins Medicine

image: In a study in mice and human cells, Johns Hopkins Medicine researchers say that they have developed a tiny, yet effective method for preventing premature birth. The vaginally delivered treatment contains nanosized (billionth of a meter) particles of drugs that easily penetrate the vaginal wall to reach the uterine muscles and prevent them from contracting. If proven effective in humans, the treatment could be one of the only clinical options available to prevent preterm labor. The FDA has recommended removing Makena (17-hydroxyprogesterone caproate), the only approved medicine for this purpose, from the market.

Image: 
Johns Hopkins Medicine

NANOTECHNOLOGY PREVENTS PREMATURE BIRTH IN MOUSE STUDIES

Media Contact: Rachel Butch, rbutch1@jhmi.edu

In a study in mice and human cells, Johns Hopkins Medicine researchers say that they have developed a tiny, yet effective method for preventing premature birth. The vaginally delivered treatment contains nanosized (billionth of a meter) particles of drugs that easily penetrate the vaginal wall to reach the uterine muscles and prevent them from contracting. If proven effective in humans, the treatment could be one of the only clinical options available to prevent preterm labor. The FDA has recommended removing Makena (17-hydroxyprogesterone caproate), the only approved medicine for this purpose, from the market.

The study was published Jan. 13, 2021, in the journal Science Translational Medicine.

There are an estimated 15 million premature births each year, making prematurity the number one cause of infant mortality worldwide. Few indicators can predict which pregnancies will result in a preterm birth, but inflammation in the reproductive tract is a contributing factor in approximately one-third of all cases. Such inflammation not only puts babies at risk of being born with low birth weight and underdeveloped lungs, but also has been linked to brain injuries in the developing fetus.

The newly reported experimental therapy uses technology developed by scientists at the Johns Hopkins Center for Nanomedicine. Its active ingredients are two drugs: progesterone, a hormone that regulates female reproduction, and trichostatin A, an inhibitor of the enzyme histone deacetylase, which regulates gene expression. To prepare the treatment, the drugs are first ground down into miniature crystals, about 200-300 nanometers in diameter, or smaller than a typical bacterium. Then, the nanocrystals are coated with a stabilizing compound that keeps them from getting caught in the body's protective mucus layers, which absorb and sweep away foreign particles.

"This means we can use far less drug to efficiently reach other parts of the female reproductive tract," says Laura Ensign, Ph.D., associate professor of ophthalmology, with a secondary appointment in gynecology and obstetrics at the Johns Hopkins University School of Medicine, co-author of the study and an expert on nanomedicine and drug delivery systems.

To test their therapy, the researchers used mice that were engineered to mimic inflammation-related conditions that would lead to preterm labor in humans. They found that the experimental treatment prevented the mice from entering premature labor. Neurological tests of mouse pups born to mothers that had received the treatment revealed no abnormalities.

The researchers also applied the drug combination to human uterine cells grown in the lab. The drug combination, they report, decreased contractions in the test samples.

Additional laboratory tests of the experimental treatment are planned to evaluate its safety before considering trials in humans.

LOOKING BACK DECADES SHOWS HOSPITALIZED PATIENTS NEED MORE THAN MOVEMENT TO PREVENT DANGEROUS CLOTS

Media Contact: Michael E. Newman, mnewma25@jhmi.edu

Based on a systematic review of scientific studies dating back nearly 70 years, researchers at Johns Hopkins Medicine and collaborating institutions suggest that ambulation -- walking or exercising without assistance -- is not sufficient alone as a preventive measure against venous thromboembolism (VTE), a potentially deadly condition in which a blood clot forms, often in the deep veins of the leg, groin or arm (known as a deep vein thrombosis) and may dislodge. If that happens, the clot will travel via the bloodstream to lodge in the lungs and cause tissue damage or death from reduced oxygen (known as a pulmonary embolism).

The researchers presented their findings in the Dec. 8, 2020, issue of the Canadian Medical Association Journal Open.

According to the U.S. Centers for Disease Control and Prevention, VTE annually affects some 300,000 to 600,000 individuals in the United States, and kills between 60,000 and 100,000 of those stricken. Among the groups at highest risk are hospitalized patients, especially those who undergo surgery.

Since the 1950s, ambulation -- in the form of walking or foot/leg exercises -- as soon and as often as possible following surgery has been recommended as a means of reducing the threat of VTE in high-risk patients. The latest study was conducted to see if ambulation is enough or if other measures -- primarily medications known as anticoagulants or blood thinners -- are needed.

The researchers reviewed nearly 21,000 scientific papers from 1951 through April 2020 and used predefined scoring criteria to select the 18 most relevant studies. These consisted of eight retrospective and two prospective studies, seven randomized clinical trials and one secondary analysis of a randomized clinical trial.

"Our systematic review revealed little evidence to support ambulation alone as an adequate means of preventing VTEs," says Elliott Haut, M.D., Ph.D., associate professor of surgery at the Johns Hopkins University School of Medicine and senior author of the study.

The findings, Haut says, are concerning because ambulation is commonly thought to be effective as the sole prophylaxis used to avoid VTEs. In fact, he adds, several national and international clinical guidelines recommend this course of action.

"It's a mistaken belief that often leads to doctors discontinuing what they perceive as unnecessary medication or nurses presenting ambulation as an alternative prophylaxis to patients," says study lead author Brandyn Lau, M.P.H., assistant professor of radiology and radiological science at the Johns Hopkins University School of Medicine. "On the contrary, there is overwhelming evidence supporting medications to prevent VTE in nearly every applicable population admitted to a hospital."

Haut, Lau and their collaborators hope these findings will help change the long-standing misconception that ambulation can prevent VTEs on its own and persuade physicians to take a more comprehensive approach.

JOHNS HOPKINS GYNECOLOGIC CANCER EXPERTS TO ADDRESS DISPARITIES IN CANCER CARE

Media Contact: Marisol Martinez, mmart150@jhmi.edu

Thanks to a nearly $400,000 grant from the American Cancer Society and Pfizer, three gynecologic cancer experts from the Johns Hopkins University School of Medicine and the Johns Hopkins Bloomberg School of Public Health will address widespread race-related barriers and disparities in the delivery of care that may affect the outcomes of gynecologic cancer patients. The grant, awarded under the Addressing Racial Disparities in Cancer Care Competitive Grant Program, will enable the team to develop a scalable (adaptable) and ultimately sustainable way to identify and address the social determinants of health that contribute to these concerns.

The researchers for the Johns Hopkins grant are co-principal investigator Anna Beavis, M.D., M.P.H., and Stephanie Wethington, M.D., M.Sc., assistant professors of gynecology and obstetrics in the Division of Gynecologic Oncology at the Johns Hopkins University School of Medicine, and co-principal investigator Anne Rositch, M.S.P.H., Ph.D., associate professor of epidemiology at the Johns Hopkins Bloomberg School of Public Health.

The project expands on a pilot program started in 2017 in collaboration with Hopkins Community Connection (formerly Health Leads) that identified and addressed challenges disproportionately affecting Black patients -- such as access to basic needs like food, housing and transportation -- which have now been exacerbated by the disparate effect of the COVID-19 pandemic on the Black community. The team hopes that this expansion will be widely applied in the future and lead to more equitable outcomes for all women treated for gynecologic cancers.

"With the pandemic, our minority patients have been affected the most," says Beavis. "While basic needs have always been more prevalent in underserved populations, the pandemic has brought additional financial and other resource strain to our Black patient community, so this grant is very timely."

Gynecologic cancer is any cancer that starts in a woman's reproductive organs (cervix, ovaries, uterus, vagina and vulva). The incidence of gynecologic cancers can vary by cancer type and race or ethnicity.

According to the U.S. Centers for Disease Control and Prevention, about 94,000 women are diagnosed with gynecologic cancers each year. Black and Hispanic women get more cervical cancer associated with HPV than women of other races or ethnicities, possibly because of less access to screening tests and follow-up treatment.

Credit: 
Johns Hopkins Medicine

Fatty acid may help combat multiple sclerosis

The abnormal immune system response that causes multiple sclerosis (MS) by attacking and damaging the central nervous system can be triggered by the lack of a specific fatty acid in fat tissue, according to a new Yale study. The finding suggests that dietary change might help treat some people with the autoimmune disease.

The study was published Jan. 19 in The Journal of Clinical Investigation.

Fat tissue in patients diagnosed with MS lacks normal levels of oleic acid, a monounsaturated fatty acid found at high levels in, for instance, cooking oils, meats (beef, chicken, and pork), cheese, nuts, sunflower seeds, eggs, pasta, milk, olives, and avocados, according to the study.

This lack of oleic acid leads to a loss of the metabolic sensors that activate the regulatory T cells that mediate the immune system's response to infectious disease, the Yale team found. Without the suppressing effects of these regulatory T cells, the immune system can attack healthy central nervous system cells and cause the vision loss, pain, lack of coordination and other debilitating symptoms of MS.

When the researchers introduced oleic acid into fat tissue from MS patients in laboratory experiments, they found that levels of regulatory T cells increased.

"We've known for a while that both genetics and the environment play a role in the development of MS,'' said senior author David Hafler, William S. and Lois Stiles Edgerly Professor of Neurology and professor of immunobiology and chair of the Department of Neurology. "This paper suggests that one of environmental factors involved is diet."

Hafler noted that obesity triggers unhealthy levels of inflammation and is a known risk factor for MS, an observation that led him to study the role of diet in MS.

He stressed, however, that more study is necessary to determine whether eating a diet high in oleic acid can help some MS patients.

Credit: 
Yale University

Researchers discover mechanism behind most severe cases of a common blood disorder

With a name like glucose-6-phosphate dehydrogenase deficiency, one would think it is a rare and obscure medical condition, but that's far from the truth. Roughly 400 million people worldwide live with the potential for blood disorders caused by this enzyme deficiency. While some people are asymptomatic, others suffer from jaundice, ruptured red blood cells and, in the worst cases, kidney failure.

Now, a team led by researchers at the Department of Energy's SLAC National Accelerator Laboratory has uncovered the elusive mechanism behind the most severe cases of the disease: a broken chain of amino acids that warps the shape of the condition's namesake protein, G6PD. The team, led by SLAC Professor Soichi Wakatsuki, reports its findings January 18 in the Proceedings of the National Academy of Sciences.

An essential enzyme

G6PD's role in our health is hard to overstate. In red blood cells, which deliver oxygen to other cells, the enzyme helps remove more chemically reactive and potentially harmful oxygen molecules, such as hydrogen peroxide, and convert them to water and other more inert byproducts. If the G6PD protein isn't working properly, red blood cells also stop working properly and can, in the worst cases, rupture.

"It's quite an important enzyme in our bodies," Wakatsuki said.

Still, doctors have mostly been at a loss to treat the disease, especially in the most severe cases, known as Class I. "At the moment the treatment for these patients is blood transfusions," said Stanford professor Daria Mochly-Rosen, a senior author on the new study.

Medical researchers have known for decades that upwards of 190 different mutations can lead to some forms of G6PD deficiency, and they also worked out the structure of ordinary G6PD in various forms decades ago. Mochly-Rosen and colleagues have also made strides in identifying drugs that may treat some less-severe forms of the disease, known as Class II and III.

Unfortunately, those drugs do not work for Class I cases, which have proved especially confusing. Most of the mutations that lead to the most severe symptoms were known to occur in areas far away from the enzyme's active site, where it binds substrate molecules for its reactions - the usual spot where such mutations would be expected to cause problems.

A bend in the road

To better understand what was going on, Wakatsuki and colleagues from SLAC, Stanford University, the University of Tsukuba and the University of Concepción - along with a group of Stanford undergraduates and local high school students - started with X-ray crystallography studies of four forms of G6PD associated with the worst G6PD-deficiency symptoms, conducted at Structural Molecular Biology beamlines at SLAC's Stanford Synchrotron Radiation Lightsource, Lawrence Berkeley National Laboratory's Advanced Light Source and Argonne National Laboratory's Advanced Photon Source.

The team found that even though the mutations are far from G6PD's active site, they break up connections in a chain of amino acids that help stabilize the protein. That has a kind of domino effect on the whole protein molecule: It bends awkwardly, and a molecular arm that should help molecules bind to G6PD's active site instead wobbles unpredictably around. The team confirmed aspects of those results with a mixture of other techniques, including small angle X-ray scattering at Beam Line 4-2 at SSRL, computer simulations and cryogenic electron microscopy performed at the Stanford-SLAC Cryo-EM Center.

The results show for the first time how the most severe cases of G6PD deficiency work at the molecular level and could help researchers design new drugs to treat the disease.

"It's been some time we've been working on this," Wakatsuki said. "There's no cure so far, but this could pave the way for new therapeutics."

Mochly-Rosen agreed. "You can start to imagine what kind of drug you need," she said. "With Class I, we were stuck until we found this clue."

Credit: 
DOE/SLAC National Accelerator Laboratory