Tech

Is Earth's core lopsided? Strange goings-on in our planet's interior

image: A new model by UC Berkeley seismologists proposes that Earth's inner core grows faster on its east side (left) than on its west. Gravity equalizes the asymmetric growth by pushing iron crystals toward the north and south poles (arrows). This tends to align the long axis of iron crystals along the planet's rotation axis (dashed line), explaining the different travel times for seismic waves through the inner core.

Image: 
Graphic by Marine Lasbleis

For reasons unknown, Earth's solid-iron inner core is growing faster on one side than the other, and it has been ever since it started to freeze out from molten iron more than half a billion years ago, according to a new study by seismologists at the University of California, Berkeley.

The faster growth under Indonesia's Banda Sea hasn't left the core lopsided. Gravity evenly distributes the new growth -- iron crystals that form as the molten iron cools -- to maintain a spherical inner core that grows in radius by an average of 1 millimeter per year.

But the enhanced growth on one side suggests that something in Earth's outer core or mantle under Indonesia is removing heat from the inner core at a faster rate than on the opposite side, under Brazil. Quicker cooling on one side would accelerate iron crystallization and inner core growth on that side.

This has implications for Earth's magnetic field and its history, because convection in the outer core, driven by heat released from the inner core, is what today powers the dynamo that generates the magnetic field protecting us from dangerous particles from the sun.

"We provide rather loose bounds on the age of the inner core -- between half a billion and 1.5 billion years -- that can be of help in the debate about how the magnetic field was generated prior to the existence of the solid inner core," said Barbara Romanowicz, UC Berkeley Professor of the Graduate School in the Department of Earth and Planetary Science and emeritus director of the Berkeley Seismological Laboratory (BSL). "We know the magnetic field already existed 3 billion years ago, so other processes must have driven convection in the outer core at that time."

The youngish age of the inner core may mean that, early in Earth's history, the heat boiling the fluid core came from light elements separating from iron, not from crystallization of iron, which we see today.

"Debate about the age of the inner core has been going on for a long time," said Daniel Frost, assistant project scientist at the BSL. "The complication is: If the inner core has been able to exist only for 1.5 billion years, based on what we know about how it loses heat and how hot it is, then where did the older magnetic field come from? That is where this idea of dissolved light elements that then freeze out came from."

Freezing iron

Asymmetric growth of the inner core explains a three-decade-old mystery -- that the crystallized iron in the core seems to be preferentially aligned along the rotation axis of the earth, more so in the west than in the east, whereas one would expect the crystals to be randomly oriented.

Evidence for this alignment comes from measurements of the travel time of seismic waves from earthquakes through the inner core. Seismic waves travel faster in the direction of the north-south rotation axis than along the equator, an asymmetry that geologists attribute to iron crystals -- which are asymmetric -- having their long axes preferentially aligned along Earth's axis.

If the core is solid crystalline iron, how do the iron crystals get oriented preferentially in one direction?

In an attempt to explain the observations, Frost and colleagues Marine Lasbleis of the Université de Nantes in France and Brian Chandler and Romanowicz of UC Berkeley created a computer model of crystal growth in the inner core that incorporates geodynamic growth models and the mineral physics of iron at high pressure and high temperature.

"The simplest model seemed a bit unusual -- that the inner core is asymmetric," Frost said. "The west side looks different from the east side all the way to the center, not just at the top of the inner core, as some have suggested. The only way we can explain that is by one side growing faster than the other."

The model describes how asymmetric growth -- about 60% higher in the east than the west -- can preferentially orient iron crystals along the rotation axis, with more alignment in the west than in the east, and explain the difference in seismic wave velocity across the inner core.

"What we're proposing in this paper is a model of lopsided solid convection in the inner core that reconciles seismic observations and plausible geodynamic boundary conditions," Romanowicz said.

Frost, Romanowicz and their colleagues will report their findings in this week's issue of the journal Nature Geoscience.

Probing Earth's interior with seismic waves

Earth's interior is layered like an onion. The solid iron-nickel inner core -- today 1,200 kilometers (745 miles) in radius, or about three-quarters the size of the moon -- is surrounded by a fluid outer core of molten iron and nickel about 2,400 kilometers (1,500 miles) thick. The outer core is surrounded by a mantle of hot rock 2,900 kilometers (1,800 miles) thick and overlain by a thin, cool, rocky crust at the surface.
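As a rough consistency check (a back-of-envelope sketch using only figures quoted in this article, not the authors' model), the average growth rate of about 1 millimeter per year and the present 1,200-kilometer radius imply an age of roughly 1.2 billion years if growth had been constant, squarely within the half-billion-to-1.5-billion-year bounds given above:

```python
# Back-of-envelope check using only figures quoted in this article;
# the true growth rate varies over time, so this is illustrative only.
growth_rate_mm_per_yr = 1.0      # average radial growth quoted above
radius_km = 1200.0               # present inner-core radius quoted above

age_yr = (radius_km * 1e6) / growth_rate_mm_per_yr   # 1 km = 1e6 mm
print(f"Implied age at constant growth: {age_yr / 1e9:.1f} billion years")
# prints ~1.2 billion years, consistent with the 0.5-1.5 billion year bounds
```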

Convection occurs both in the outer core, which slowly boils as heat from crystallizing iron comes out of the inner core, and in the mantle, as hotter rock moves upward to carry this heat from the center of the planet to the surface. The vigorous boiling motion in the liquid-iron outer core produces Earth's magnetic field.

According to Frost's computer model, which he created with the help of Lasbleis, as iron crystals grow, gravity redistributes the excess growth in the east toward the west within the inner core. That movement of crystals within the rather soft solid of the inner core -- which is close to the melting point of iron at these high pressures -- aligns the crystal lattice along the rotation axis of Earth to a greater degree in the west than in the east.

The model correctly predicts the researchers' new observations about seismic wave travel times through the inner core: The anisotropy, or difference in travel times parallel and perpendicular to the rotation axis, increases with depth, and the strongest anisotropy is offset to the west from Earth's rotation axis by about 400 kilometers (250 miles).

The model of inner core growth also provides limits on the proportion of nickel to iron in the center of the earth, Frost said. His model does not accurately reproduce seismic observations unless nickel makes up between 4% and 8% of the inner core -- which is close to the proportion in metallic meteorites that once presumably were the cores of dwarf planets in our solar system. The model also tells geologists how viscous, or fluid, the inner core is.

"We suggest that the viscosity of the inner core is relatively large, an input parameter of importance to geodynamicists studying the dynamo processes in the outer core," Romanowicz said.

Credit: 
University of California - Berkeley

Quantum-optically integrated light cage on a chip

image: a, Atoms from alkali vapors entering the light cage from all sides and interacting with the light of the central core mode. Vertical supports connect the light cage to a silicon chip. Reinforcement rings stabilize the freely suspended strand array. b, The on-chip light cage represents a novel photonic platform for integrated quantum optics, as demonstrated in this work by measuring electromagnetically induced transparency (EIT), visible as strong transmission on the cesium D1 transition.

Image: 
by Flavie Davidson-Marquis, Julian Gargiulo, Esteban Gómez-López, Bumjoon Jang, Tim Kroh, Chris Müller, Mario Ziegler, Stefan A. Maier, Harald Kübler, Markus A. Schmidt & Oliver Benson

In the rapidly growing field of hybrid quantum photonics, the realization of miniaturized, integrated quantum-optical systems with intense light-matter interaction is of great importance for both fundamental and applied research. In particular, the development of methods for reliably generating, controlling, storing and retrieving quantum states with high fidelity through coherent interaction of light and matter has opened up a wide field of applications in quantum information and quantum networks. These include, for example, optical switching, quantum memories and quantum repeaters.

One promising approach for efficient light-matter interaction is the integration of light-guiding platforms in a near-room-temperature alkali vapor. Several research groups have aimed to interface hollow-core photonic-crystal fibers or planar waveguides with atoms in vapor cells. However, when coupled to atoms, nearly all photonic structures reveal distinct limitations imposed by their design or they induce an unwanted level of complexity, especially when aiming for integrated applications.

In a new paper published in Light: Science & Applications, a team of scientists from Humboldt-Universität zu Berlin, Ludwig-Maximilians-Universität München, the Leibniz Institute of Photonic Technology and Friedrich Schiller University Jena, the University of Stuttgart, and Imperial College London has integrated a novel on-chip hollow-core light cage into an alkali atom vapor cell. This approach overcomes the disadvantages of previously explored photonic structures, potentially providing a basis for quantum-storage and quantum-nonlinear applications.

The researchers adopted the compact, easy-to-handle light-guiding structure of the laterally-accessible light cage as a container for coherent light-matter interaction, thus realizing stable non-degrading performance and extreme versatility. As a result, the observation of electromagnetically-induced transparency, a prominent quantum-optical effect, inside the novel on-chip hollow-core light cage demonstrates its potential for applications in quantum optics.

Implemented by 3D nanoprinting, the compact light cage on a chip exhibits several advantages compared to other hollow-core waveguide structures. Whereas hollow-core fibers or planar waveguides suffer from inefficient vapor-filling times -- exceeding months for devices a few centimeters long -- the design of the light cage allows high-speed gas diffusion through direct side access to the hollow core, filling it within minutes.

Moreover, the high degree of integration and long-term stability allows for interfacing the device with other established technology platforms like silicon photonics and fiber optics, as well as customizing reproducible devices for a large variety of applications.

"We demonstrate the peculiar coherent quantum-optical interaction between the light field in a light cage and room-temperature cesium vapor. This is the first time we have succeeded in observing electromagnetically induced transparency inside such a structure," Tim Kroh from the Department of Physics at Humboldt-Universität zu Berlin explains.

"Apart from the beautiful physics that went into the discovery and design of the light cage, and the quantum optics experiment conducted with it, the technique we used for fabricating the light cage is fascinating -- directly writing it into a polymer block with a laser beam. This very versatile method has a great future for integrated optical devices," adds Prof. Dr. Stefan Maier, who holds the Chair in Hybrid Nanosystems at Ludwig-Maximilians-Universität München.

Further improvements of the structure are straightforward, the researchers say. Generating broad electromagnetically-induced transparency windows to delay spectrally broad light pulses, such as those emitted by single-photon sources, lays a foundation for synchronizing photon arrival in quantum networks or realizing compact quantum storage on a chip. "Therefore, our results represent a major step forward through an unprecedented hybrid integration of designer laser-written structures and atom cells. The freedom to produce three-dimensional structures nearly without restrictions on the silicon-technology platform will allow us to combine light-matter interaction in the light cage with other Si-chip-compatible devices, such as lithium niobate waveguides for modulation and frequency conversion of light, as well as direct mode coupling from and to optical fibers. Extensive control of the single-photon properties in a quantum network could ultimately be applied all on one chip," the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

How quantum dots can 'talk' to each other

image: The illustration shows two quantum dots "communicating" with each other by exchanging light.

Image: 
HZB

So-called quantum dots are a new class of materials with many applications. Quantum dots are tiny semiconductor crystals with dimensions in the nanometre range, and their optical and electrical properties can be controlled through the size of the crystals. As QLEDs, they are already on the market in the latest generations of TV flat screens, where they ensure particularly brilliant and high-resolution colour reproduction. But quantum dots are not only used as "dyes": they are also used in solar cells and semiconductor devices, and even as the computational building blocks, the qubits, of a quantum computer.

Now, a team led by Dr. Annika Bande at HZB has extended the understanding of the interaction between several quantum dots with an atomistic view in a theoretical publication.

Annika Bande heads the "Theory of Electron Dynamics and Spectroscopy" group at HZB and is particularly interested in the origins of quantum physical phenomena. Although quantum dots are extremely tiny nanocrystals, they consist of thousands of atoms and, in turn, many times more electrons. Even with supercomputers, the electronic structure of such a semiconductor crystal could hardly be calculated, emphasises the theoretical chemist, who recently completed her habilitation at Freie Universität. "But we are developing methods that describe the problem approximately," Bande explains. "In this case, we worked with scaled-down quantum dot versions of only about a hundred atoms, which nonetheless feature the characteristic properties of real nanocrystals."

With this approach, after a year and a half of development and in collaboration with Prof. Jean Christophe Tremblay from the CNRS-Université de Lorraine in Metz, the team succeeded in simulating the interaction of two quantum dots, each made of hundreds of atoms, which exchange energy with each other. Specifically, they investigated how these two quantum dots can absorb, exchange and permanently store energy under the control of light. A first light pulse is used for excitation, while a second light pulse induces the storage.

In total, the researchers investigated three different pairs of quantum dots to capture the effects of size and geometry. They calculated the electronic structure with the highest precision and simulated the electronic motion in real time at femtosecond resolution (10⁻¹⁵ s).
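The study itself relies on high-level electronic-structure methods for clusters of about a hundred atoms; as a much simpler illustration of what real-time propagation at femtosecond resolution means, the hypothetical sketch below drives a two-level system with two short laser pulses and steps its Schrödinger equation forward in sub-femtosecond increments. All parameters are invented and this is not the authors' code:

```python
# Toy illustration (not the HZB model): real-time propagation of a driven
# two-level system under two femtosecond laser pulses, loosely mimicking an
# excitation pulse followed by a second, "storage-inducing" pulse.
import numpy as np

dt = 0.01                             # time step in femtoseconds (sub-fs resolution)
t = np.arange(0.0, 200.0, dt)         # total window of 200 fs

omega = 0.5                           # transition frequency (rad/fs), invented
mu = 0.2                              # transition dipole (arb. units), invented

def pulse(t, t0, width, amp):
    """Gaussian envelope times a carrier at the transition frequency."""
    return amp * np.exp(-((t - t0) / width) ** 2) * np.cos(omega * t)

E = pulse(t, 50.0, 10.0, 1.0) + pulse(t, 130.0, 10.0, 1.0)   # two pulses

c = np.array([1.0 + 0j, 0.0 + 0j])    # start in the ground state
pop_excited = np.empty_like(t)

for k in range(len(t)):
    # time-dependent Hamiltonian in the two-level basis (hbar = 1)
    H = np.array([[0.0, -mu * E[k]],
                  [-mu * E[k], omega]], dtype=complex)
    # one short-time step: c(t + dt) ~ exp(-i H dt) c(t)
    w, V = np.linalg.eigh(H)
    c = V @ (np.exp(-1j * w * dt) * (V.conj().T @ c))
    pop_excited[k] = abs(c[1]) ** 2

print(f"final excited-state population: {pop_excited[-1]:.3f}")
```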

The results are also very useful for experimental research and development in many fields of application, for example in the development of qubits or in photocatalysis, where sunlight is used to produce green hydrogen. "We are constantly working on extending our models towards even more realistic descriptions of quantum dots," says Bande, "e.g. to capture the influence of temperature and environment."

Credit: 
Helmholtz-Zentrum Berlin für Materialien und Energie

Samara Polytech has summarized methods for the synthesis of chromanes and chromenes

image: Graphical abstract

Image: 
@SamaraPolytech

O-quinone methides have been studied at Samara Polytech for more than ten years. Vitaly Osyanin, Doctor of Chemistry and Professor of the Department of Organic Chemistry, leads the research in this area. The results of the latest work were published in the authoritative Russian journal "Russian Chemical Reviews" (DOI: https://doi.org/10.1070/RCR4971).

Professor Osyanin, together with Associate Professor Dmitry Osipov (Candidate of Chemical Sciences) and department graduate Anton Lukashenko (Candidate of Chemical Sciences), prepared a review article that systematizes the main known transformations of o-quinone methides into chromenes and chromanes, compounds of interest for medicinal chemistry and other fields.

For references:

O-quinone methides are short-lived, highly reactive compounds that are widely used to synthesize more complex molecules.

Samara Polytech as a flagship university offers a wide range of education and research programs and aims at development and transfer of high-quality and practically-oriented knowledge. The university has an established reputation in technical developments and focuses on quality education, scientific and pragmatic research, combining theory and practice in the leading regional businesses and enterprises. Education is conducted in 30 integrated groups of specialties and areas of training (about 200 degree programs including bachelor, master programs and 55 PhD programs) such as oil and gas, chemistry and petrochemistry, mechanics and energy, transportation, food production, defense, IT, mechanical and automotive engineering, engineering systems administration and automation, material science and metallurgy, biotechnology, industrial ecology, architecture, civil engineering and design, etc.

Credit: 
Samara Polytech (Samara State Technical University)

Skoltech researchers unveil complex defect structure of Li-ion cathode material

Skoltech scientists have studied the hydroxyl defects in LiFePO4, a widely used cathode material in commercial lithium-ion batteries, contributing to the overall understanding of the chemistry of this material. This work will help improve the LiFePO4 manufacturing process to avoid formation of adverse intrinsic structural defects which deteriorate its performance. The paper was published in the journal Inorganic Chemistry.

Lithium iron phosphate, LiFePO4, is a safe, stable and affordable cathode material for Li-ion batteries that has been very well optimized for practical applications despite its low conductivity and medium energy density. Yet scientists continue to study the various properties of this material, and in particular the impact of its defects on electrochemical performance.

"It is well known that LiFePO4 materials usually have a considerable amount of Li/Fe antisite defects. This is a type of point defect when Li and Fe atoms exchange their positions in the crystal lattice. However, before us, nobody had assumed that the PO4 part can be also defect-active in this material. We discovered that in some cases the PO4 anion can be substituted by four or five OH groups, which has a negative effect on electrochemical performance of LiFePO4-based batteries. Such defects are called OH defects or more specifically hydrogarnet-type hydroxyl defects," Dmitry Aksyonov, Skoltech Senior Research Scientist and the first author of the paper, explains.

Aksyonov, Assistant Professor Stanislav Fedotov, and Professor Artem Abakumov (CEST) with their colleagues used a joint computational and experimental approach combining density functional theory and neutron diffraction to study the hydroxyl (OH) defects in LiFePO4. They were also able to confirm their results experimentally in a LiFePO4 sample.

"The hydrogarnet OH defects are well known in geology, but not so much in materials science. The presence of OH defects in LiFePO4 could have been envisaged much earlier by drawing parallels with its structural analogues in the olivine mineral group. Therefore, the biggest takeaway from our work is probably that researchers should seek knowledge not only in their home field but in other fields as well," Aksyonov says.

Since OH defects are not trivial to detect, commercially produced LiFePO4 materials may have them as well, he notes, and it is important to have these deteriorating effects under control.

"The simplest practical outcome of this research would be to put efforts in modifying the synthesis procedure so as to fully eliminate this type of defects from the LiFePO4 materials. However, our experience tells that fighting defects makes much less sense than turning them in our favor. So, the story has all chances to be continued," Stanislav Fedotov adds.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Filter membrane renders viruses harmless

Viruses can spread not only via droplets or aerosols, as the new coronavirus does, but also through water. In fact, some potentially dangerous pathogens that cause gastrointestinal diseases are water-borne viruses.

To date, such viruses have been removed from water using nanofiltration or reverse osmosis, but at high cost and with a severe impact on the environment. For example, nanofilters for viruses are made of petroleum-based raw materials, while reverse osmosis requires a relatively large amount of energy.

Environmentally friendly membrane developed

Now an international team of researchers led by Raffaele Mezzenga, Professor of Food & Soft Materials at ETH Zurich, has developed a new water filter membrane that is both highly effective and environmentally friendly. To manufacture it, the researchers used natural raw materials.

The filter membrane works on the same principle that Mezzenga and his colleagues developed for removing heavy or precious metals from water. They create the membrane using denatured whey proteins that assemble into minute filaments called amyloid fibrils. In this instance, the researchers have combined this fibril scaffold with nanoparticles of iron hydroxide (FeOOH).

Manufacturing the membrane is relatively simple. To produce the fibrils, whey proteins derived from milk processing are added to acid and heated to 90 degrees Celsius. This causes the proteins to extend and attach to each other, forming fibrils. The nanoparticles can be produced in the same reaction vessel as the fibrils: the researchers raise the pH and add iron salt, causing the mixture to "disintegrate" into iron hydroxide nanoparticles, which attach to the amyloid fibrils. For this application, Mezzenga and his colleagues used cellulose to support the membrane.

This combination of amyloid fibrils and iron hydroxide nanoparticles makes the membrane a highly effective and efficient trap for various viruses present in water. The positively charged iron oxide electrostatically attracts the negatively charged viruses and inactivates them. Amyloid fibrils alone wouldn't be able to do this because, like the viral particles, they are also negatively charged at neutral pH. However, the fibrils are the ideal matrix for the iron oxide nanoparticles.

Various viruses eliminated highly efficiently

The membrane eliminates a wide range of water-borne viruses, including nonenveloped adenoviruses, retroviruses and enteroviruses. This third group can cause dangerous gastrointestinal infections, which kill around half a million people - often young children in developing and emerging countries - every year. Enteroviruses are extremely tough and acid-resistant and remain in the water for a very long time, so the filter membrane should be particularly attractive to poorer countries as a way to help prevent such infections.

Moreover, the membrane also eliminates H1N1 flu viruses and even the new SARS-CoV-2 virus from the water with great efficiency. In filtered samples, the concentration of the two viruses was below the detection limit, which is equivalent to almost complete elimination of these pathogens.

"We are aware that the new coronavirus is predominantly transmitted via droplets and aerosols, but in fact, even on this scale, the virus requires being surrounded by water. The fact that we can remove it very efficiently from water impressively underlines the broad applicability of our membrane," says Mezzenga.

While the membrane is primarily designed for use in wastewater treatment plants or for drinking water treatment, it could also be used in air filtration systems or even in masks. Since it consists exclusively of ecologically sound materials, it could simply be composted after use - and its production requires minimum energy. These traits give it an excellent environmental footprint, as the researchers also point out in their study. Because the filtration is passive, it requires no additional energy, which makes its operation carbon neutral and of possible use in any social context, from urban to rural communities.

Credit: 
ETH Zurich

Tipping elements can destabilize each other, leading to climate domino effects

Interactions in the network can lower the critical temperature thresholds beyond which individual tipping elements begin to destabilize in the long run, according to the study - the risk already increases significantly for warming of 1.5°C to 2°C, hence within the temperature range of the Paris Agreement.

"We provide a risk analysis, not a prediction, yet our findings still raise concern," says Ricarda Winkelmann, Lead of FutureLab on Earth Resilience in the Anthropocene at the Potsdam Institute for Climate Impact Research (PIK). "We find that the interaction of these four tipping elements can make them overall more vulnerable due to mutual destabilization on the long-run. The feedbacks between them tend to lower the critical temperature thresholds of the West Antarctic Ice Sheet, the Atlantic overturning circulation, and the Amazon rainforest. In contrast, the temperature threshold for a tipping of the Greenland Ice Sheet can in fact be raised in case of a significant slow-down of the North Atlantic current heat transport. All in all, this might mean that we have less time to reduce greenhouse gas emissions and still prevent tipping processes."

A third of simulations shows domino effects already at up to 2°C global warming

Around a third of the simulations in the study show domino effects already at global warming levels of up to 2°C, where the tipping of one element triggers further tipping processes. "We're shifting the odds, and not in our favour - the risk is clearly increasing the more we heat our planet," says Jonathan Donges, also Lead of PIK's FutureLab on Earth Resilience in the Anthropocene. "It rises substantially between 1 and 3°C. If greenhouse gas emissions and the resulting climate change cannot be halted, the upper level of this warming range would most likely be crossed by the end of this century. With even higher temperatures, more tipping cascades are to be expected, with devastating long-term effects."

Tipping elements are parts of the Earth system that, once in a critical state, can undergo large and possibly irreversible changes in response to perturbations. They can seem stable until a critical threshold in forcing is exceeded. Once triggered, the actual tipping process may take a long time to unfold. The polar ice sheets for instance would take thousands of years to melt and release most of their ice masses into the oceans, yet with substantial effects: raising sea levels by many meters, threatening coastal cities such as New York, Hamburg, Mumbai or Shanghai. While this is well-known, the dynamics of interacting tipping elements were not.

"Here's just one example of the many complex interactions between the climate tipping elements: if there is substantial melt from the Greenland Ice Sheet releasing freshwater into the ocean, this can slow down the Atlantic overturning circulation which is driven by temperature and salinity differences and transports large amounts of heat from the tropics to the mid-latitudes and polar regions," explains Nico Wunderling, first author of the study. "This in turn can lead to net warming in the Southern Ocean, and hence might on the long-run destabilize parts of the Antarctic Ice Sheet. This contributes to sea-level rise, and rising waters at the fringes of the ice sheets in both hemispheres can contribute to further mutually destabilizing them."

"It would be a daring bet to hope the uncertainties play out in a good way"

Since Earth system models are currently computationally too heavy to simulate how tipping elements' interactions impact the overall stability of the climate system, the scientists use a novel network approach. "Our conceptual model is lean enough to allow us to run more than three million simulations while varying the critical temperature thresholds, interaction strengths and network structure," explains Jürgen Kurths, Head of PIK's Complexity Science Research Department. "By doing so, we could take into account the considerable uncertainties related to these characteristics of tipping interactions."
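The PIK model itself is far more elaborate, but the flavor of such a network Monte Carlo can be sketched in a few lines: draw random critical temperatures and interaction strengths for four idealized elements, let tipped elements shift their neighbours' effective thresholds, and count how often more than one element tips. Everything below (ranges, rules, numbers) is invented for illustration and is not the published model:

```python
# Toy Monte Carlo sketch (not the PIK model): four interacting tipping elements,
# each with a randomly drawn critical temperature; interactions shift a
# neighbour's effective threshold once an element has tipped.
import random

ELEMENTS = ["Greenland", "West Antarctica", "AMOC", "Amazon"]

def run_once(warming_C):
    thresholds = {e: random.uniform(1.5, 6.0) for e in ELEMENTS}     # invented ranges
    # signed interaction strengths: positive = destabilising (lowers threshold)
    interact = {(a, b): random.uniform(-0.5, 1.0)
                for a in ELEMENTS for b in ELEMENTS if a != b}
    tipped = set()
    changed = True
    while changed:                                  # propagate cascades
        changed = False
        for e in ELEMENTS:
            if e in tipped:
                continue
            shift = sum(interact[(src, e)] for src in tipped)
            if warming_C > thresholds[e] - shift:   # effective threshold crossed
                tipped.add(e)
                changed = True
    return len(tipped)

runs = 10_000
cascades = sum(run_once(2.0) > 1 for _ in range(runs))
print(f"share of runs with a cascade at 2.0 C warming: {cascades / runs:.2%}")
```

Repeating such runs while varying the thresholds, interaction strengths and network structure is what lets a conceptual model of this kind fold large uncertainties into a risk estimate rather than a single prediction.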

"Our analysis is conservative in the sense that several interactions and tipping elements are not yet considered," concludes Ricarda Winkelmann. "It would hence be a daring bet to hope that the uncertainties play out in a good way, given what is at stake. From a precautionary perspective rapidly reducing greenhouse gas emissions is indispensable to limit the risks of crossing tipping points in the climate system, and potentially causing domino effects."

Credit: 
Potsdam Institute for Climate Impact Research (PIK)

Researchers reveal the inner workings of a viral DNA-packaging motor

image: A trio of studies has revealed how a viral DNA packaging motor works, potentially providing insights for new therapeutics or synthetic molecular machines. Each of five proteins scrunches up in turn, dragging the DNA up along with them, before releasing back into their original helical pattern.

Image: 
Joshua Pajak, Duke University

DURHAM, N.C. - A group of researchers has discovered the detailed inner workings of the molecular motor that packages genetic material into double-stranded DNA viruses. The advance provides insight into a critical step in the reproduction cycle of viruses such as pox-, herpes- and adenoviruses. It could also give inspiration to researchers creating microscopic machines based on naturally occurring biomotors.

The research was conducted by scientists from Duke University, the University of Minnesota, the University of Massachusetts and the University of Texas Medical Branch (UTMB). The results appear online in a trilogy of papers published in Science Advances, Proceedings of the National Academy of Sciences and Nucleic Acids Research.

"There were several missing pieces of information that prevented us from understanding how these kinds of DNA packaging motors work, which hindered our ability to design therapeutics or evolve new technologies," said Gaurav Arya, professor of mechanical engineering and materials science, biomedical engineering, and chemistry at Duke. "But with new insights and simulations, we were able to piece together a model of this fantastic mechanism, which is the most detailed ever created for this kind of system."

Viruses come in many varieties, but their classification generally depends upon whether they encode their genetic blueprints into RNA or single- or double-stranded DNA. The difference matters in many ways and affects how the genetic material is packaged into new viruses. While some viruses build a protein container called a capsid around newly produced RNA or DNA, others create the capsid first and then fill it with the genetic material.

Most double-stranded DNA viruses take the latter route, which presents many challenges. DNA is negatively charged and does not want to be crammed together into a small space. And it's packaged into an extremely dense, nearly crystalline structure, which also requires a lot of force.

"The benefit of this is that, when the virus is ready to infect a new cell, the pressure helps inject DNA into the cell once it's punctured," said Joshua Pajak, a doctoral student working in Arya's laboratory. "It's been estimated that the pressure exceeds 800 PSI, which is almost ten times the pressure in a corked bottle of champagne."

Forcing DNA into a tiny capsid at that amount of pressure requires an extremely powerful motor. Until recently, researchers only had a vague sense of how that motor worked because of how difficult it is to visualize. The motor only assembles on the virus particle, which is enormous compared to the motor.

"Trying to see the motor attached to the virus is like trying to see the details in the Statue of Liberty's torch by taking a photo of the entire statue," said Pajak.

But at a recent conference, Pajak learned that Marc Morais, professor of biochemistry and molecular biology at UTMB, and Paul Jardine, professor of diagnostic and biological sciences at the University of Minnesota, had been working on this motor for years and had the equipment and skills needed to see the details. Some of their initial results appeared to match the models Pajak was building with what little information was already available. The group grew excited that their separate findings were converging toward a common mechanism and quickly set about solving the mystery of the viral motor together.

In a paper published in Science Advances, Morais and his colleagues resolved the details of the entire motor in one of its configurations. They found that the motor is made up of five proteins attached to one another in a ring-like formation. Each of these proteins is like two suction cups with a spring in between, which allows the bottom portion to move vertically in a helical formation so that it can grab onto the helical backbone of DNA.

"Because you could fit about 100,000 of these motors on the head of a pin and they're all jiggling around, getting a good look at them proved difficult," said Morais. "But after my UTMB colleagues Michael Woodson and Mark White helps us image them with a cryo-electron microscope, a general framework of the mechanism fell into place."

In a second paper, published in Nucleic Acids Research, the Morais group captured the motor in a second configuration using x-ray crystallography. This time the bottom suction cups of the motor were all scrunched up together in a planar ring, leading the researchers to imagine that the motor might move DNA into the virus by ratcheting between the two configurations.

To test this hypothesis, Pajak and Arya performed heavy-duty simulations on Anton 2, the fastest supercomputer currently available for running molecular dynamics simulations. Their results not only supported the proposed mechanism, but also provided information on how exactly the motor's cogs contort between the two configurations.

While the tops of the proteins remain statically attached to the virus particle, their bottom halves move up and down in a cyclic pattern powered by an energy-carrying molecule called ATP. Once all the proteins have moved up--dragging the DNA along with them--the proteins release the byproduct of the ATP chemical reaction. This causes their lower halves to release the DNA and reach back down into their original helical state, where they once again grab onto more ATP and DNA to repeat the process.

"Joshua pieced together lots of clues and information to create this model," said Arya. "But a model is only useful if it can predict new insights that we didn't already know."

At its core, the model is a series of mechanical actions that must fit together and take place in sequential order for everything to work properly. Pajak's simulations predicted a specific series of mechanical signals that tell the bottoms of the proteins whether or not they should be gripping the DNA. Like a line of dominoes falling, removing one of the signaling pathways from the middle should stop the chain reaction and block the signal.
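Conceptually, that "line of dominoes" prediction can be pictured as a relay around the ring: each subunit grips the DNA only if it receives the mechanical signal from the previous one, so breaking a single relay silences everything downstream. The sketch below is only a cartoon of that logic, not the published simulations, and the details are invented:

```python
# Toy domino-style signalling chain (illustrative only, not the published model):
# each subunit grips DNA only if the mechanical signal reaches it; knocking out
# one relay blocks the signal for every subunit after it.
def gripping_states(n_subunits=5, knocked_out=None):
    states = []
    signal = True                         # signal enters at the first subunit
    for i in range(n_subunits):
        if i == knocked_out:
            signal = False                # mutated relay cannot pass the signal on
        states.append(signal)             # grips DNA only while the signal is live
    return states

print(gripping_states())                  # [True, True, True, True, True]
print(gripping_states(knocked_out=2))     # [True, True, False, False, False]
```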

To validate this prediction, the researchers turned to Jardine and colleagues Shelley Grimes and Dwight Anderson to see if removing one of the signaling dominoes actually stopped the motor from packaging DNA. A third paper, published in PNAS, shows that the sabotage worked. After mutating a domino in the signaling pathway so that it was unable to function, the motor could still bind and burn fuel just as well as ever, but it was much worse at actually packaging DNA.

"The new mechanism predicted by the high-resolution structures and the detailed predictions provided a level of detail greater than we ever previously had," said Jardine. "This allowed us to test the role of critical components of the motor, and therefore assess the validity of this new mechanism as we currently understand it."

The result is a strong indication that the model is very close to describing how the motor behaves in nature. The group plans to continue their highly integrated structural, biochemical and simulation approach to further test and refine the proposed model. They hope that this fundamental understanding could someday be used to fight disease or create a synthetic molecular motor.

"All technology is inspired by nature in one way or another," said Arya. "Now that we really know how this molecular motor works, hopefully it will inspire other researchers to create new inventions using these same mechanisms."

Credit: 
Duke University

A better way to introduce digital tech in the workplace

When bringing technologies into the workplace, it pays to be realistic. Often, for instance, bringing new digital technology into an organization does not radically improve a firm's operations. Despite high-level planning, a more frequent result is the messy process of frontline employees figuring out how they can get tech tools to help them to some degree.

That task can easily fall on overburdened workers who have to grapple with getting things done, but don't always have much voice in an organization. So isn't there a way to think systematically about implementing digital technology in the workplace?

MIT Professor Kate Kellogg thinks there is, and calls it "experimentalist governance of digital technology": Let different parts of an organization experiment with the technology -- and then centrally remove roadblocks to adopt the best practices that emerge, firm-wide.

"If you want to get value out of new digital technology, you need to allow local teams to adapt the technology to their setting," says Kellogg, the David J. McGrath Jr. Professor of Management and Innovation at the MIT Sloan School of Management. "You also need to form a central group that's tracking all these local experiments, and revising processes in response to problems and possibilities. If you just let everyone do everything locally, you're going to see resistance to the technology, particularly among frontline employees."

Kellogg's perspective comes after she conducted an 18-month close ethnographic study of a teaching hospital, examining many facets of its daily workings -- including things like the integration of technology into everyday medical practices.

Some of the insights from that organizational research now appear in a paper Kellogg has written, "Local Adaptation Without Work Intensification: Experimentalist Governance of Digital Technology for Mutually Beneficial Role Reconfiguration in Organizations," recently published online in the journal Organization Science.

In the hospital

Kellogg's on-the-ground, daily, ethnographic research took place in the primary care unit of an academic hospital in the northeastern U.S., where there were six medical teams, each consisting of seven to nine doctors, three or four nurses and medical assistants, and four or five receptionists.

The primary care group was transitioning to using new digital technology available in the electronic health system to provide clinical decision support, by indicating when patients needed vaccinations, diabetes tests, and pap smears. Previously, certain actions might only have been called for after visits with primary-care doctors. The software made those things part of the preclinical patient routine, as needed.

In practice, however, implementing the digital technology led to significantly more work for the medical assistants, who were in charge of using the alerts, communicating with patients -- and often assigned even more background work by doctors. When the recommendation provided by the technology was not aligned with a doctor's individual judgment about when a particular action was needed, the medical assistants would be tasked with finding out more about a patient's medical history.

"I was surprised to find that it wasn't working well," Kellogg says.

She adds: "The promise of these technologies is that they're going to automate a lot of practices and processes, but they don't do that perfectly. There often need to be people who fill the gaps between what the technology can do and what's really required, and oftentimes it's less-skilled workers who are asked to do that."

As such, Kellogg observed, the challenges of using the software were not just technological or logistical, but organizational. The primary-care unit was willing to let its different groups experiment with the software, but the people most affected by it were least-well positioned to demand changes in the hospital's routines.

"It sounds great to have all the local teams doing experimentation, but in practice ... a lot of people are asking frontline workers to do a lot of things, and they [the workers] don't have any way to push back on that without being seen as complainers," Kellogg notes.

Three types of problems

All told, Kellogg identified three types of problems regarding digital technology implementation. The first, which she calls "participation problems," are when lower-ranking employees do not feel comfortable speaking up about workplace issues. The second, "threshold problems," involve getting enough people to agree to use the solutions discovered through local experiments for the solutions to become beneficial. The third are "free rider problems," when, say, doctors benefit from medical assistants doing a wider range of work tasks, but then don't follow the proposed guidelines required to free up medical assistant time.

So, while the digital technology provided some advantages, the hospital still had to take another step in order to use it effectively: form a centralized working group to take advantage of solutions identified in local experiments, while balancing the needs of doctors with realistic expectations for medical assistants.

"What I found was this local adaptation of digital technology needed to be complemented by a central governing body," Kellogg says. "The central group could do things like introduce technical training and a new performance evaluation system for medical assistants, and quickly spread locally developed technology solutions, such as reprogrammed code with revised decision support rules."

Placing a representative of the hospital's medical assistants on this kind of governing body, for example, means "the lower-level medical assistant can speak on behalf of their counterparts, rather than [being perceived as] a resister, now [they're] being solicited for a valued opinion of what all their colleagues are struggling with," Kellogg notes.

Another tactic: Rather than demand all doctors follow the central group's recommendations, the group obtained "provisional commitments" from the doctors -- willingness to try the best practices -- and found that to be a more effective way of bringing everyone on board.

"What experimentalist governance is, you allow for all the local experimentation, you come up with solutions, but then you have a central body composed of people from different levels, and you solve participation problems and leverage opportunities that arise during local adaptation," Kellogg says.

A bigger picture

Kellogg has long done much of her research through extensive ethnographic work in medical settings. Her 2011 book "Challenging Operations," for instance, used on-the-ground research to study the controversy of the hours demanded of medical residents. This new paper, for its part, is one product of over 400 sessions Kellogg spent following medical workers around inside the primary care unit.

"The holy grail of ethnography is finding a surprise," says Kellogg. It also requires, she observes, "a diehard focus on the empirical. Let's get past abstractions and dig into a few concrete examples to really understand the more generalizable challenges and the best practices for addressing them. I was able to learn things that you wouldn't be able to learn by conducting a survey."

For all the public discussion about technology and jobs, then, there is no substitute for a granular understanding of how technology really affects workers. Kellogg says she hopes the concept of experimentalist governance could be used widely to help harness promising-but-imperfect digital technology adoption. It could also apply, she suggests, to banks, law firms, and all kinds of businesses using various forms of enterprise software to streamline processes such as human resources management, customer support, and email marketing.

"The bigger picture is, when we engage in digital transformation, we want to encourage experimentation, but we also need some kind of central governance," Kellogg says. "It's a way to solve problems that are being experienced locally and make sure that successful experiments can be diffused. ... A lot of people talk about digital technology as being either good or bad. But neither the technology itself nor the type of work being done dictates its impact. What I'm showing is that organizations need an experimentalist governance process in place to make digital technology beneficial for both managers and workers."

Credit: 
Massachusetts Institute of Technology

CO2 emissions are rebounding, but clean energy revolutions are emerging

image: In certain areas, adoption rates for solar and wind turbines, as well as electric vehicles are very high and increasing every year.

Image: 
UC San Diego

At the upcoming Conference of the Parties (COP26) in November, ample discussion is likely to focus on how the world is not on track to meet the Paris Agreement's goals of stopping warming at well below 2°C. According to a new University of California San Diego article published in Nature Energy, world diplomats will, however, find encouraging signs in emerging clean energy technology "niches"--countries, states or corporations--that are pioneering decarbonization.

"In certain areas, adoption rates for solar and wind turbines, as well as electric vehicles are very high and increasing every year," write the authors of the opinion piece Ryan Hanna, assistant research scientist at UC San Diego's Center for Energy Research and David G. Victor, professor of industrial innovation at UC San Diego's School of Global Policy and Strategy. "It's important to look to niches because this is where the real leg work of decarbonization is happening. In fact, one can think about the entire challenge of decarbonizing as one of opening and growing niches--for new technologies, policies and practices, which are all needed to address the climate crisis."

The glimmers of hope for decarbonization will be critical for diplomats at COP26, who may be set for disappointment as they begin an official "stocktaking" process of reviewing past emissions. This year, each country will report on emissions accounting for the last five years and issue new, bolder pledges to cut greenhouse gasses.

Prior to the COVID-19 pandemic, global fossil fuel emissions had been rising at about one percent per year over the previous decade. Over that same time, U.S. fossil fuel emissions fell by about one percent per year; however, that slight dip is nowhere near the decline written into the U.S.'s original pledge to the Paris Agreement.

"The abundant talk in recent years about 'the energy transition' has barely nudged dependence on conventional fossil fuels, nor has it much altered the trajectory of CO2 emissions or put the world on track to meet Paris goals," write Hanna and Victor. "Instead, policymakers should measure the real engine rooms of technological change--to niche markets."

They point to a growing number of markets where clean technology is being deployed at rates far above global and regional averages.

"Norway and California are leading on electric vehicles, Ireland on wind power and China on electric buses and new nuclear," write Hanna and Victor.

Today's pioneers in energy are those who are actually doing the hard work of creating low-carbon technologies and getting them out into the world. These leaders developing and trialing new technologies are typically small groups; however, they are critical to the climate change crisis because they take on risk and reveal what is possible, thereby lowering the risk for global markets to follow.

For example, beginning in 2010, Germany launched a massive investment in solar photovoltaics, which pushed down costs, making photovoltaics more politically and economically viable around the globe. Hanna and Victor credit Germany's leadership for the expansive photovoltaic market the world has today.

Still, eliminating at least one-third of global emissions will require technologies that are, at this time, prototypes. To get more clean energy technologies deployed on a global scale requires new investments in research and development (R&D), notably in novel technologies related to electric power grids.

These investments are strong in China, mixed in Europe and are lagging along with other R&D expenditures in the U.S.

"In the U.S., a new administration serious about climate change may allow for renewed pulses of spending on R&D-for example, through a prospective infrastructure bill that could gain legislative approval this autumn-as innovation is one of the few areas of energy policy bipartisan consensus," the authors write.

The need is dire, as limiting warming to 1.5°C, per the Paris Agreement, has become increasingly out of reach. Meeting the goal now requires continuous reductions in emissions of about 6 percent annually across the whole globe.

"That is a speed and scope on par with what was delivered by global pandemic lockdowns, but previously unprecedented in history and far outside the realm of what's practical," write the authors.

They conclude that COP26, if handled by diplomats thinking like revolutionaries and not a diplomatic committee, is an opportunity to start taking stock of the industrial and agricultural revolutions that will be needed for decarbonization.

Credit: 
University of California - San Diego

Shoot better drone videos with a single word

image: Researchers from CMU, the University of Sao Paulo and Facebook AI Research developed a model that enables a drone to shoot a video based on a desired emotion or viewer reaction.

Image: 
Carnegie Mellon University

The pros make it look easy, but making a movie with a drone can be anything but.

First, it takes skill to fly the often expensive pieces of equipment smoothly and without crashing. And once you've mastered flying, there are camera angles, panning speeds, trajectories and flight paths to plan.

With all the sensors and processing power onboard a drone and embedded in its camera, there must be a better way to capture the perfect shot.

"Sometimes you just want to tell the drone to make an exciting video," said Rogerio Bonatti, a Ph.D. candidate in Carnegie Mellon University's Robotics Institute.

Bonatti was part of a team from CMU, the University of Sao Paulo and Facebook AI Research that developed a model that enables a drone to shoot a video based on a desired emotion or viewer reaction. The drone uses camera angles, speeds and flight paths to generate a video that could be exciting, calm, enjoyable or nerve-wracking -- depending on what the filmmaker tells it.

The team presented their paper on the work at the 2021 International Conference on Robotics and Automation this month.

"We are learning how to map semantics, like a word or emotion, to the motion of the camera," Bonatti said.

But before "Lights! Camera! Action!" the researchers needed hundreds of videos and thousands of viewers to capture data on what makes a video evoke a certain emotion or feeling. Bonatti and the team collected a few hundred diverse videos. A few thousand viewers then watched 12 pairs of videos and gave them scores based on how the videos made them feel.

The researchers then used the data to train a model that directed the drone to mimic the cinematography corresponding to a particular emotion. If fast moving, tight shots created excitement, the drone would use those elements to make an exciting video when the user requested it. The drone could also create videos that were calm, revealing, interesting, nervous and enjoyable, among other emotions and their combinations, like an interesting and calm video.
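The team's actual model is trained on thousands of viewer ratings of real drone footage; the hypothetical sketch below only illustrates the basic recipe of mapping camera-motion features to an emotion score and then choosing shot parameters that best match a request. The features, ratings and fitted model here are all invented:

```python
# Toy sketch of the idea (not the CMU/FAIR model): learn a mapping from simple
# camera-motion features to a viewer "excitement" score, then pick the shot
# parameters whose predicted score best matches the requested emotion.
import numpy as np

# features: [flight speed (m/s), shot tightness (0 = wide, 1 = tight)]
X = np.array([[2.0, 0.2], [4.0, 0.5], [8.0, 0.8], [1.0, 0.1], [6.0, 0.9]])
y = np.array([0.2, 0.5, 0.9, 0.1, 0.8])      # invented viewer excitement ratings

# least-squares fit: score ~ w . [speed, tightness, 1]
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_excitement(speed, tightness):
    return np.array([speed, tightness, 1.0]) @ w

# pick drone settings for an "exciting" request from a small candidate grid
candidates = [(s, t) for s in (1, 3, 5, 7, 9) for t in (0.1, 0.5, 0.9)]
best = max(candidates, key=lambda c: predicted_excitement(*c))
print(f"requested: exciting -> fly at {best[0]} m/s with shot tightness {best[1]}")
```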

"I was surprised that this worked," said Bonatti. "We were trying to learn something incredibly subjective, and I was surprised that we obtained good quality data."

The team tested their model by creating sample videos, like a chase scene or someone dribbling a soccer ball, and asked viewers for feedback on how the videos felt. Bonatti said that not only did the team create videos intended to be exciting or calming that actually felt that way, but they also achieved different degrees of those emotions.

The team's work aims to improve the interface between people and cameras, whether that be helping amateur filmmakers with drone cinematography or providing on-screen directions on a smartphone to capture the perfect shot.

"This opens this door to many other applications, even outside filming or photography," Bonatti said. "We designed a model that maps emotions to robot behavior."

Credit: 
Carnegie Mellon University

Researchers design simulation tool to predict disease, pest spread

North Carolina State University researchers have developed a computer simulation tool to predict when and where pests and diseases will attack crops or forests, and also test when to apply pesticides or other management strategies to contain them.

"It's like having a bunch of different Earths to experiment on to test how something will work before spending the time, money and effort to do it," said the study's lead author Chris Jones, research scholar at North Carolina State University's Center for Geospatial Analytics.

In the journal Frontiers in Ecology and the Environment, researchers reported on their efforts to develop and test the tool, which they called "PoPS," for the Pest or Pathogen Spread Forecasting Platform. Working with the U.S. Department of Agriculture's Animal and Plant Health Inspection Service, they created the tool to forecast the spread of any type of pest or pathogen, no matter the location.

Their computer modeling system works by combining information on climate conditions suitable for spread of a certain disease or pest with data on where cases have been recorded, the reproductive rate of the pathogen or pest and how it moves in the environment. Over time, the model improves as natural resource managers add data they gather from the field. This repeated feedback with new data helps the forecasting system get better at predicting future spread, the researchers said.
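The PoPS platform itself is far more detailed, but the basic ingredients the researchers describe (a climate-suitability map, recorded cases, a reproductive rate and local movement) can be caricatured in a short grid simulation. Everything below is an invented toy, not PoPS code:

```python
# Minimal sketch of the general idea behind such forecasts (not the PoPS code):
# a pest spreads on a grid, with new infestations seeded around existing ones
# at a rate scaled by a climate-suitability map. All parameters are invented.
import numpy as np

rng = np.random.default_rng(42)
size = 50
suitability = rng.random((size, size))           # stand-in for climate suitability
infested = np.zeros((size, size), dtype=bool)
infested[size // 2, size // 2] = True            # first recorded case

REPRO_RATE = 0.4                                 # invented reproduction parameter

def step(infested):
    """One time step: each infested cell may infest its four neighbours."""
    new = infested.copy()
    ys, xs = np.nonzero(infested)
    for y, x in zip(ys, xs):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < size and 0 <= nx < size:
                # spread probability scales with local climate suitability
                if rng.random() < REPRO_RATE * suitability[ny, nx]:
                    new[ny, nx] = True
    return new

for week in range(20):
    infested = step(infested)
print(f"cells infested after 20 steps: {int(infested.sum())}")
```

In a real forecasting loop, newly recorded field observations would be compared against such simulated maps to recalibrate the spread parameters, which is the repeated feedback the researchers describe.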

"We have a tool that can be put into the hands of a non-technical user to learn about disease dynamics and management, and how management decisions will affect spread in the future," Jones said.

The tool is needed as state and federal agencies charged with controlling pests and crop diseases face an increasing number of threats to crops, trees and other important natural resources. These pests threaten food supplies and biodiversity in forests and ecosystems.

"The biggest problem is the sheer number of new pests and pathogens that are coming in," Jones said. "State and federal agencies charged with managing them have an ever-decreasing budget to spend on an ever-increasing number of pests. They have to figure out how to spend that money as wisely as possible."

Already, researchers have been using PoPS to track the spread of eight different emerging pests and diseases. In the study, they described honing the model to track sudden oak death, a disease that has killed millions of trees in California since the 1990s. A new, more aggressive strain of the disease has been detected in Oregon.

They are also improving the model to track spotted lanternfly, an invasive pest in the United States that primarily infests a certain invasive type of tree known as "tree of heaven." Spotted lanternfly has been infesting fruit crops in Pennsylvania and neighboring states since 2014. It can attack grape, apple and cherry crops, as well as almonds and walnuts.

The researchers said that just as meteorologists incorporate data into models to forecast weather, ecological scientists are using data to improve forecasting of environmental events - including pest or pathogen spread.

"There's a movement in ecology to forecast environmental conditions," said Megan Skrip, a study co-author and science communicator at the Center for Geospatial Analytics. "If we can forecast the weather, can we forecast where there will be an algal bloom, or what species will be in certain areas at certain times? This paper is one of the first demonstrations of doing this for the spread of pests and pathogens."

Credit: 
North Carolina State University

Scientists discover new approach to stabilize cathode materials

image: Brookhaven chemist Ruoqian Lin, first author of the study.

Image: 
Brookhaven National Laboratory

UPTON, NY--A team of researchers led by chemists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory has studied an elusive property in cathode materials, called a valence gradient, to understand its effect on battery performance. The findings, published in Nature Communications, demonstrated that the valence gradient can serve as a new approach for stabilizing the structure of high-nickel-content cathodes against degradation and safety issues.

High-nickel-content cathodes have captured the attention of scientists for their high capacity, an electrochemical property that could power electric vehicles over much longer distances than current batteries support. Unfortunately, the high nickel content also causes these cathode materials to degrade more quickly, creating cracks and stability issues as the battery cycles.

In search of solutions to these structural problems, scientists have synthesized materials made with a nickel concentration gradient, in which the concentration of nickel gradually changes from the surface of the material to its center, or the bulk. These materials have exhibited greatly enhanced stability, but scientists have not been able to determine if the concentration gradient alone was responsible for the improvements. The concentration gradient has traditionally been inseparable from another effect called the valence gradient, or a gradual change in nickel's oxidation state from the surface of the material to the bulk.

In the new study led by Brookhaven Lab, chemists at DOE's Argonne National Laboratory synthesized a unique material that isolated the valence gradient from the concentration gradient.

"We used a very unique material that included a nickel valence gradient without a nickel concentration gradient," said Brookhaven chemist Ruoqian Lin, first author of the study. "The concentration of all three transition metals in the cathode material was the same from the surface to the bulk, but the oxidation state of nickel changed. We obtained these properties by controlling the material's atmosphere and calcination time during synthesis. With sufficient calcination time, the stronger bond strength between manganese and oxygen promotes the movement of oxygen into the material's core while maintaining a Ni2+ oxidation state for nickel at the surface, forming the valence gradient."

Once the chemists successfully synthesized a material with an isolated valence gradient, the Brookhaven researchers then studied its performance using two DOE Office of Science user facilities at Brookhaven Lab--the National Synchrotron Light Source II (NSLS-II) and the Center for Functional Nanomaterials (CFN).

At NSLS-II, an ultrabright x-ray light source, the team leveraged two cutting-edge experimental stations, the Hard X-ray Nanoprobe (HXN) beamline and the Full Field X-ray Imaging (FXI) beamline. By combining the capabilities of both beamlines, the researchers were able to visualize the atomic-scale structure and chemical makeup of their sample in 3-D after the battery operated over multiple cycles.

"Both beamlines have world-leading capabilities. You can't do this research anywhere else," said Yong Chu, leader of the imaging and microscopy program at NSLS-II and lead beamline scientist at HXN. "FXI is the fastest nanoscale beamline in the world; it's about ten times faster than any other competitor. HXN is much slower, but it's much more sensitive--it's the highest resolution x-ray imaging beamline in the world."

HXN beamline scientist Xiaojing Huang added, "At HXN, we routinely run measurements in multimodality mode, which means we collect multiple signals simultaneously. In this study, we used a fluorescence signal and a ptychography signal to reconstruct a 3-D model of the sample at the nanoscale. The fluorescence channel provided the elemental distribution, confirming the sample's composition and uniformity. The ptychography channel provided high-resolution structural information, revealing any microcracks in the sample."

Meanwhile at FXI, "the beamline showed how the valence gradient existed in this material. And because we conducted full-frame imaging at a very high data acquisition rate, we were able to study many regions and increase the statistical reliability of the study," Lin said.

At the CFN Electron Microscopy Facility, the researchers used an advanced transmission electron microscope (TEM) to visualize the sample with ultrahigh resolution. Compared to the x-ray studies, the TEM can only probe a much smaller area of the sample and is therefore less statistically reliable across the whole sample, but in turn, the data are far more detailed and visually intuitive.

By combining the data collected across all of the different facilities, the researchers were able to confirm that the valence gradient played a critical role in battery performance. The valence gradient "hid" the higher-capacity but less stable nickel in the center of the material, exposing only the more structurally sound nickel at the surface. This arrangement suppressed the formation of cracks.

The researchers say this work highlights the positive impact concentration gradient materials can have on battery performance while offering a new, complementary approach to stabilize high-nickel-content cathode materials through the valence gradient.

"These findings give us very important guidance for future novel material synthesis and design of cathode materials, which we will apply in our studies going forward," Lin said.

Credit: 
DOE/Brookhaven National Laboratory

South Pole and East Antarctica warmer than previously thought during last ice age, two studies show

image: Emma Kahle holds ice from 1,500 meters (0.93 miles) depth, the original goal of the South Pole drilling project, in January 2016. New research uses this ice core to calculate temperature history back 54,000 years

Image: 
Eric Steig/University of Washington

The South Pole and the rest of East Antarctica are cold now and were even more frigid during the most recent ice age around 20,000 years ago -- but not quite as cold as previously believed.

University of Washington glaciologists are co-authors on two papers that analyzed Antarctic ice cores to understand the continent's air temperatures during the most recent glacial period. The results help scientists understand how the region behaves during a major climate transition.

In one paper, an international team of researchers, including three at the UW, analyzed seven ice cores from across West and East Antarctica. The results, published June 3 in Science, show that ice age temperatures in the eastern part of the continent were warmer than previously estimated.

The team included authors from the U.S., Japan, the U.K., France, Switzerland, Denmark, Italy, South Korea and Russia.

"The international collaboration was critical to answering this question because it involved so many different measurements and methods from ice cores all across Antarctica," said second author T.J. Fudge, a UW assistant research professor of Earth and space sciences.

Antarctica, the coldest place on Earth today, was even colder during the last ice age. For decades, the leading science suggested ice age temperatures in Antarctica were on average as much as 9 degrees Celsius cooler than the modern era. By comparison, temperatures globally at that time averaged 5 to 6 degrees cooler than today.

Previous work showed that West Antarctica was as much as 11 degrees Celsius colder than current temperatures. The new paper in Science shows that temperatures at some locations in East Antarctica were only 4 to 5 degrees cooler, about half of previous estimates.

"This is the first conclusive and consistent answer we have for all of Antarctica," said lead author Christo Buizert, an assistant professor at Oregon State University. "The surprising finding is that the amount of cooling is very different depending on where you are in Antarctica. This pattern of cooling is likely due to changes in the ice sheet elevation that happened between the ice age and today."

The findings are important because they better match results of global climate models, supporting the models' ability to reproduce major shifts in the Earth's climate.

Another paper, accepted in June in the Journal of Geophysical Research: Atmospheres and led by the UW, focuses on data from the recently completed South Pole ice core, whose drilling finished in 2016. The Science paper also incorporates these results.

"With its distinct high and dry climate, East Antarctica was certainty colder than West Antarctica, but the key question was: How much did the temperature change in each region as the climate warmed?" said lead author Emma Kahle, who recently completed a UW doctorate in Earth and space sciences.

That paper, focused on the South Pole ice core, found that ice age temperatures at the pole, near the Antarctic continental divide, were about 6.7 degrees Celsius colder than today. The Science paper finds that across East Antarctica, ice age temperatures were on average 6.1 degrees Celsius colder than today, showing that the South Pole is representative of the region.

"Both studies show much warmer temperatures for East Antarctica during the last ice age than previous work -- the most recent 'textbook' number was 9 degrees Celsius colder than present," said Eric Steig, a UW professor of Earth and space sciences who is a co-author on both papers. "This is important because climate models tend to get warmer temperatures, so the data and models are now in better agreement."

"The findings agree well with climate model results for that time period, and thus strengthen our confidence in the ability of models to simulate Earth's climate," Kahle said.

Previous studies used water molecules contained in the layers of ice, which essentially act like a thermometer, to reconstruct past temperatures. But this method needs independent calibration against other techniques.

The new papers employ two techniques that provide the necessary calibration. The first method, borehole thermometry, takes temperatures at various depths inside the hole left by the ice drill, measuring changes through the thickness of the ice sheet. The Antarctic ice sheet is so thick that it keeps a memory of earlier, colder ice age temperatures that can be measured and reconstructed, Fudge said.
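
To see why a thick ice sheet can "remember" past temperatures, consider a minimal one-dimensional heat-diffusion sketch of an ice column. The thickness, diffusivity, geothermal gradient and toy temperature history below are assumed values chosen for illustration, not numbers from the studies; the point is simply that a cold surface signal imposed during the ice age is still present at depth today, where a borehole thermometer can detect it.

    # Minimal 1-D heat-diffusion sketch of an ice column (illustrative values only).
    import numpy as np

    THICKNESS_M = 2700.0          # assumed ice thickness
    ALPHA = 35.0                  # thermal diffusivity of ice, roughly in m^2 per year
    NZ = 271
    dz = THICKNESS_M / (NZ - 1)
    dt = 0.4 * dz * dz / ALPHA    # keeps the explicit scheme stable

    depth = np.linspace(0.0, THICKNESS_M, NZ)
    T = np.full(NZ, -55.0)        # start the whole column at an ice-age temperature

    def surface_temp(years_before_present):
        # toy climate history: -55 C during the ice age, -50 C for the last 15,000 years
        return -50.0 if years_before_present < 15_000 else -55.0

    t = 30_000.0                  # simulate the last 30,000 years
    while t > 0:
        T[0] = surface_temp(t)                # surface follows the climate history
        T[-1] = T[-2] + 0.025 * dz            # rough geothermal gradient at the bed
        T[1:-1] += ALPHA * dt * (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
        t -= dt

    # Mid-depth ice is still colder than the modern surface -- a "memory" of the
    # ice age that borehole thermometry can measure.
    for d in (0, 900, 1800, 2600):
        i = int(d / dz)
        print(f"depth {depth[i]:6.0f} m: {T[i]:6.2f} C")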

The second method examines the properties of the snowpack as it builds up and slowly transforms into ice. In East Antarctica, the snowpack can range from 50 to 120 meters (165 to 400 feet) thick, including snow accumulated over thousands of years, which gradually compacts in a process that is very sensitive to temperature.
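
As a rough illustration of that temperature sensitivity, the snippet below compares Arrhenius-type compaction rates at two surface temperatures. The activation energy is an assumed, illustrative value, not one taken from the papers.

    # How strongly an Arrhenius-type compaction rate depends on temperature
    # (the activation energy is an assumed, illustrative value).
    import math

    Q = 60_000.0   # assumed activation energy, J/mol
    R = 8.314      # gas constant, J/(mol K)

    def relative_rate(t_celsius):
        return math.exp(-Q / (R * (t_celsius + 273.15)))

    ratio = relative_rate(-30.0) / relative_rate(-50.0)
    print(f"compaction runs roughly {ratio:.0f}x faster at -30 C than at -50 C")

Because colder snow compacts so much more slowly, the thickness and structure of the old snowpack preserve a record of the temperature at which it was buried.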

"As we drill more Antarctic ice cores and do more research, the picture of past environmental change comes into sharper focus, which helps us better understand the whole of Earth's climate system," Fudge said.

Credit: 
University of Washington

Scientists make powerful underwater glue inspired by barnacles and mussels

image: Model airplane assembled with silk-based glue

Image: 
Marco Lo Presti, Tufts University

If you have ever tried to chip a mussel off a seawall or a barnacle off the bottom of a boat, you will understand that we could learn a great deal from nature about how to make powerful adhesives. Engineers at Tufts University have taken note, and today report a new type of glue inspired by these stubbornly adherent sea creatures in the journal Advanced Science.

Starting with the fibrous silk protein harvested from silkworms, they were able to replicate key features of barnacle and mussel glue, including protein filaments, chemical crosslinking and iron bonding. The result is a powerful non-toxic glue that sets and works as well underwater as it does in dry conditions and is stronger than most synthetic glue products now on the market.

"The composite we created works not only better underwater than most adhesives available today, it achieves that strength with much smaller quantities of material," said Fiorenzo Omenetto, Frank C. Doble Professor of Engineering at Tufts School of Engineering, director of the Tufts Silklab where the material was created, and corresponding author of the study. "And because the material is made from extracted biological sources, and the chemistries are benign - drawn from nature and largely avoiding synthetic steps or the use of volatile solvents - it could have advantages in manufacturing as well."

The Silklab "glue crew" focused on several key elements to replicate in aquatic adhesives. Mussels secrete long sticky filaments called byssus. These secretions form polymers, which embed into surfaces, and chemically cross-link to strengthen the bond. The protein polymers are made up of long chains of amino acids including one, dihydroxyphenylalanine (DOPA), a catechol-bearing amino acid that can cross-link with the other chains. The mussels add another special ingredient - iron complexes - that reinforce the cohesive strength of the byssus.

Barnacles secrete a strong cement made of proteins that form into polymers which anchor onto surfaces. The proteins in barnacle cement polymers fold their amino acid chains into beta sheets - a zig-zag arrangement that presents flat surfaces and plenty of opportunities to form strong hydrogen bonds to the next protein in the polymer, or to the surface to which the polymer filament is attaching.

Inspired by all of these molecular bonding tricks used by nature, Omenetto's team set to work replicating them, drawing on their expertise with the chemistry of silk fibroin, a protein extracted from the cocoons of silkworms. Silk fibroin shares many of the shape and bonding characteristics of the barnacle cement proteins, including the ability to assemble large beta sheet surfaces. The researchers added polydopamine -- a random polymer of dopamine that presents cross-linking catechols along its length, much like those the mussels use to cross-link their bonding filaments. Finally, the adhesion strength is significantly enhanced by curing the adhesive with iron chloride, which secures bonds across the catechols, just as iron complexes do in natural mussel adhesives.

"The combination of silk fibroin, polydopamine and iron brings together the same hierarchy of bonding and cross-linking that makes these barnacle and mussel adhesives so strong," said Marco Lo Presti, post-doctoral scholar in Omenetto's lab and first author of the study. "We ended up with an adhesive that even looks like its natural counterpart under the microscope."

Getting the right blend of silk fibroin and polydopamine, along with the right acidic conditions for curing with iron ions, was critical to enabling the adhesive to set and work underwater, reaching strengths of 2.4 MPa (megapascals; about 350 pounds per square inch) when resisting shear forces. That's better than most existing experimental and commercial adhesives, and only slightly lower than the strongest underwater adhesive at 2.8 MPa. Yet this adhesive has the added advantage of being non-toxic and composed of all-natural materials, and it requires only 1-2 mg per square inch to achieve that bond -- just a few drops.
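
For readers who want to check the figures quoted above, the quick arithmetic below converts the reported shear strength and coverage into other common units; the MPa and mg-per-square-inch numbers come from the study, while the conversion factors are standard.

    # Quick unit check of the figures quoted above.
    PSI_PER_PASCAL = 1.0 / 6894.76      # standard pascal-to-psi conversion
    shear_mpa = 2.4
    print(f"{shear_mpa} MPa is about {shear_mpa * 1e6 * PSI_PER_PASCAL:.0f} psi")   # ~348 psi

    SQ_CM_PER_SQ_INCH = 6.4516
    for mg in (1.0, 2.0):
        print(f"{mg:.0f} mg per square inch is {mg / SQ_CM_PER_SQ_INCH:.2f} mg per square cm")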

"The combination of likely safety, conservative use of material, and superior strength suggests potential utility for many industrial and marine applications and could even be suitable for consumer-oriented such as model building and household use," said Prof. Gianluca Farinola, a collaborator on the study from the University of Bari Aldo Moro, and an adjunct Professor of Biomedical Engineering at Tufts. "The fact that we have already used silk fibroin as a biocompatible material for medical use is leading us to explore those applications as well," added Omenetto.

Credit: 
Tufts University