Brain

Laughter acts as a stress buffer -- and even smiling helps

People who laugh frequently in their everyday lives may be better equipped to deal with stressful events - although this does not seem to apply to the intensity of laughter. These are the findings reported by a research team from the University of Basel in the journal PLOS ONE.

It is estimated that people typically laugh 18 times a day - generally during interactions with other people and depending on the degree of pleasure they experience. Researchers have also reported differences related to time of day, age, and gender - for example, it is known that women smile more than men on average. A team from the Division of Clinical Psychology and Epidemiology of the Department of Psychology at the University of Basel has now conducted a study on the relationship between stressful events and laughter in terms of perceived stress in everyday life.

Questions asked by app

In the intensive longitudinal study, an acoustic signal from a mobile phone app prompted participants to answer questions eight times a day at irregular intervals for a period of 14 days. The questions related to the frequency and intensity of laughter and the reason for laughing - as well as any stressful events or stress symptoms experienced - in the time since the last signal.
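
As an illustration of this kind of signal-contingent experience sampling, the sketch below generates one possible random prompt schedule in Python. It is a hypothetical example only: the waking window, the minimum spacing between prompts and all function names are assumptions of this sketch, not details of the app used in the Basel study.

    import random
    from datetime import date, datetime, time, timedelta

    def daily_prompt_times(day, n_prompts=8, start_hour=8, end_hour=22, min_gap_min=30):
        """Draw n_prompts random times within a waking window, at least min_gap_min apart."""
        window_start = datetime.combine(day, time(start_hour))
        window_minutes = (end_hour - start_hour) * 60
        while True:  # resample until the minimum spacing between prompts is respected
            offsets = sorted(random.sample(range(window_minutes), n_prompts))
            if all(b - a >= min_gap_min for a, b in zip(offsets, offsets[1:])):
                return [window_start + timedelta(minutes=m) for m in offsets]

    # Build a 14-day schedule and print the first day's eight prompt times.
    first_day = date(2020, 7, 1)
    schedule = {first_day + timedelta(days=d): daily_prompt_times(first_day + timedelta(days=d))
                for d in range(14)}
    print([t.strftime("%H:%M") for t in schedule[first_day]])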

Using this method, the researchers working with the lead authors, Dr. Thea Zander-Schellenberg and Dr. Isabella Collins, were able to study the relationships between laughter, stressful events, and physical and psychological symptoms of stress ("I had a headache" or "I felt restless") as part of everyday life. The newly published analysis was based on data from 41 psychology students, 33 of whom were women, with an average age of just under 22.

Intensity of laughter has less influence

The first result of the observational study was expected based on the specialist literature: in phases in which the subjects laughed frequently, stressful events were associated with milder symptoms of subjective stress. The second finding, however, was unexpected: when it came to the interplay between stressful events and the intensity of laughter (strong, medium or weak), there was no statistical correlation with stress symptoms. "This could be because people are better at estimating the frequency of their laughter, rather than its intensity, over the last few hours," says the research team.

Credit: 
University of Basel

TU Graz researchers synthesize nanoparticles tailored for special applications

image: The graph illustrates the stepwise synthesis of Silver-Zinc Oxide core-shell clusters.

Image: 
© IEP - TU Graz

Whether in innovative high-tech materials, more powerful computer chips, pharmaceuticals or in the field of renewable energies, nanoparticles - the smallest portions of bulk material - form the basis for a whole range of new technological developments. Due to the laws of quantum mechanics, such particles measuring only a few millionths of a millimetre can behave completely differently from the same material on a macroscopic scale in terms of conductivity, optics or robustness. In addition, nanoparticles or nanoclusters have a very large catalytically effective surface area compared to their volume. For many applications this allows material savings while maintaining the same performance.
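
The surface-to-volume argument can be made concrete with elementary geometry; the comparison below is a generic illustration, not a figure from the TU Graz work. For a spherical particle of radius r,

    \[
    \frac{S}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r},
    \qquad\text{so}\qquad
    \frac{(S/V)_{r=5\,\mathrm{nm}}}{(S/V)_{r=5\,\mu\mathrm{m}}} = 1000,
    \]

meaning a nanoparticle exposes roughly a thousand times more surface per unit of material than a micrometre-sized particle of the same shape - which is where the material savings for catalysis come from.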

Further development of top-level research in Graz in the field of nanomaterials

Researchers at the Institute of Experimental Physics (IEP) at Graz University of Technology have developed a method for assembling nanomaterials as desired. They let superfluid helium droplets with an internal temperature of 0.4 Kelvin (about minus 273 degrees Celsius) fly through a vacuum chamber and selectively introduce individual atoms or molecules into these droplets. "There, they coalesce into a new aggregate and can be deposited on different substrates," explains experimental physicist Wolfgang Ernst from TU Graz. He has been working on this so-called helium-droplet synthesis for twenty-five years now, has successively developed it further during this time, and has continuously produced research at the highest international level, mostly in "Cluster Lab 3", which was set up specifically for this purpose at the IEP.

Reinforcement of catalytic properties

In Nano Research, Ernst and his team now report on the targeted formation of so-called core-shell clusters using helium-droplet synthesis. The clusters have a 3-nanometer core of silver and a 1.5-nanometer-thick shell of zinc oxide. Zinc oxide is a semiconductor that is used, for example, in radiation detectors for measuring electromagnetic radiation or in photocatalysts for breaking down organic pollutants. What makes this material combination special is that the silver core provides a plasmonic resonance, i.e. it absorbs light and thus causes a strong amplification of the light field. This puts electrons in the surrounding zinc oxide into an excited state, thereby forming electron-hole pairs - small portions of energy that can be used elsewhere for chemical reactions, such as catalysis processes directly on the cluster surface. "The combination of the two material properties increases the efficiency of photocatalysts immensely. In addition, it would be conceivable to use such a material in water splitting for hydrogen production," says Ernst, naming a field of application.
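
For orientation, photon energy and wavelength are related by the standard expression below; the band-gap figure is the commonly quoted value for bulk zinc oxide, not a number taken from the Nano Research paper.

    \[
    \lambda = \frac{hc}{E} \approx \frac{1240\ \mathrm{eV\,nm}}{E},
    \qquad
    \lambda\!\left(E_{g} \approx 3.3\ \mathrm{eV}\right) \approx 375\ \mathrm{nm},
    \]

so bare zinc oxide absorbs mainly in the near-ultraviolet; concentrating additional light at the oxide shell through the silver core's plasmonic resonance is what boosts the generation of the electron-hole pairs described above.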

Nanoparticles for laser and magnetic sensors

In addition to the silver-zinc oxide combination, the researchers produced other interesting core-shell clusters with a magnetic core of iron, cobalt or nickel and a shell of gold. Gold likewise has a plasmonic effect and protects the magnetic core from unwanted oxidation. These nanoclusters can be influenced and controlled both by lasers and by external magnetic fields and are suitable for sensor technologies, for example. For these material combinations, temperature-dependent stability measurements and theoretical calculations were carried out in collaboration with the IEP theory group led by Andreas Hauser and the team of Maria Pilar de Lara Castells (Institute of Fundamental Physics at the Spanish National Research Council CSIC, Madrid); these explain behaviour at phase transitions, such as alloy formation, that deviates from that of macroscopic material samples. The results were published in the Journal of Physical Chemistry.

Ernst now hopes that the findings from the experiments will be transferred into new catalysts "as soon as possible".

Credit: 
Graz University of Technology

Is the Earth's transition zone deforming like the upper mantle?

image: Illustration of the dominant intracrystalline deformation mechanisms predicted in wadsleyite (Wd), ringwoodite (Rw) and majorite garnet (Mj) across the mantle transition zone compared to those of olivine in the upper mantle.

Image: 
Dr. S. Ritterbex (Ehime University)

Despite being composed of solid rock, the Earth's mantle, which extends to a depth of ~2890 km below the crust, undergoes convective flow by removing heat from the Earth's interior. This process involves mass transfer by the subduction of cold tectonic plates from, and the ascent of hot plumes towards, the Earth's surface, and it is responsible for many large-scale geological features, such as earthquakes and volcanism. Through a combination of previous seismological and mineral physics studies, it is well known that the Earth's mantle is divided (mineralogically) into two major regimes: the upper and the lower mantle, separated by the "transition zone", a boundary layer between ~410 and ~660 km depth. This transition zone influences the extent of whole mantle convection by controlling mass transfer between the upper and lower mantle. Seismic tomography studies (CT-scan-like imaging of the Earth's interior using seismic waves) have previously revealed that while some slabs penetrate through the transition zone, others seem to stagnate either within or just below it. The reason is unclear, and the dynamics of the Earth's mantle across the transition zone remain poorly constrained due to the lack of understanding of its mechanical properties.

These mechanical properties depend on the ability of minerals to undergo slow plastic deformation in response to a low mechanical stress, called "creep", typically described by a parameter known as "viscosity". The dynamics of the upper mantle relies on plastic deformation of its main constituent, Mg2SiO4 olivine. The first ~300 km of the upper mantle is characterized by a strong directional dependence of the velocity of seismic waves, known as "seismic anisotropy". Therefore, it is generally believed that "dislocation creep" - a deformation mechanism inducing lattice rotation and crystallographic preferred orientations (CPO) in elastically anisotropic minerals such as olivine - contributes to the overall deformation of the upper mantle. Dislocation creep is an intracrystalline deformation mechanism responsible for the transport of crystal shear, mediated by linear defects called "dislocations". It is a composite deformation mechanism that may involve both glide of dislocations along some specific crystal directions and planes and diffusion-mediated climb out of their glide planes. Indeed, recent numerical simulations by Boioli et al. (2015) have shown that deformation of Mg2SiO4 olivine crystals is accommodated by the Weertman type of dislocation creep under relevant upper mantle conditions, where climb of dislocations enables the recovery of dislocation junctions, allowing plastic strain to be efficiently produced by dislocation glide.
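
For context, the quantities discussed here are usually related through standard textbook creep laws (given here as background, not as results of the papers cited):

    \[
    \eta = \frac{\sigma}{2\dot{\varepsilon}},
    \qquad
    \dot{\varepsilon}_{\mathrm{dislocation}} \propto \sigma^{\,n}\exp\!\left(-\frac{Q}{RT}\right)\ (n>1),
    \qquad
    \dot{\varepsilon}_{\mathrm{diffusion}} \propto \frac{\sigma}{d^{\,m}}\exp\!\left(-\frac{Q}{RT}\right)\ (m\approx 2\text{--}3),
    \]

with stress σ, strain rate ε̇, effective viscosity η, grain size d, activation energy Q, gas constant R and temperature T. Dislocation creep is insensitive to grain size and non-Newtonian, whereas diffusion creep is Newtonian and strongly grain-size dependent - which is why the grain-size threshold near 0.1 mm discussed below matters for the viscosity of the transition zone.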

Entering the mantle transition zone beyond ~410 km depth, with increasing pressure (P) and temperature (T), olivine transforms first into its high-P polymorph wadsleyite and, at ~520 km, into ringwoodite. It remains unclear whether the deformation processes of these more compact high-P polymorphs are similar to those of olivine itself (Ritterbex et al. 2015; Ritterbex et al. 2016). To address this question, researchers from the plasticity group at the University of Lille and the Geodynamics Research Center of Ehime University combined numerical simulations of thermally activated dislocation glide mobilities with experimental diffusion data and demonstrated that, in contrast to olivine at upper mantle conditions, dislocation climb velocities exceed those of glide in the high-P polymorphs of olivine, inducing a transition within the dislocation creep regime from Weertman creep to pure climb creep at geologically relevant stresses (Image 1). Based on plasticity modeling and constrained by experimental diffusion data, the investigation quantifies steady-state deformation of the main transition zone minerals wadsleyite, ringwoodite and majorite garnet as a function of grain size (Image 2).

These models explain a number of key features associated with the mantle transition zone. It is shown that intracrystalline plasticity of wadsleyite, ringwoodite and majorite garnet by pure climb creep at geological stresses leads to an equiviscous transition zone of 10^(21±1) Pa·s if the grain size is ~0.1 mm or larger (Image 3), in good agreement with the surface geophysical data typically inverted to constrain the rheological properties of the Earth's mantle. Since pure climb creep does not induce lattice rotation and cannot produce CPO, deformation of the transition zone by this mechanism is compatible with its relative seismic isotropy compared to the upper mantle. The researchers also found that CPO can develop where stresses concentrate and Weertman creep is activated (Image 3), for example in corner flows around cold subducting slabs; this could increase subduction resistance, explaining why some slabs stall at the base of the transition zone. On the other hand, viscosity reductions are predicted if grains are smaller than ~0.1 mm and the transition zone silicates deform by pure atomic diffusion, commonly referred to as "diffusion creep", which might influence flow dynamics in the interior of cold subducting slabs or across phase transitions (Image 3).

Future incorporation of these deformation mechanisms as a function of grain size in geodynamic convection models should enhance our understanding of the interaction between the upper and lower mantle and is expected to be helpful in constraining the geochemical evolution of the Earth.

Credit: 
Ehime University

COVID-19: Social media users more likely to believe false information

A new study led by researchers at McGill University finds that people who get their news from social media are more likely to have misperceptions about COVID-19. Those who consume more traditional news media have fewer misperceptions and are more likely to follow public health recommendations like social distancing.

In a study published in Misinformation Review, researchers looked at the behavioural effects of exposure to misinformation by combining social media analysis, news analysis, and survey research. They combed through millions of tweets, thousands of news articles, and the results of a nationally representative survey of Canadians to answer three questions: How prevalent is COVID-19 misinformation on social media and in traditional news media? Does it contribute to misperceptions about COVID-19? And does it affect behaviour?

"Platforms like Twitter and Facebook are increasingly becoming the primary sources of news and misinformation for Canadians and people around the world. In the context of a crisis like COVID-19, however, there is good reason to be concerned about the role that the consumption of social media is playing in boosting misperceptions," says co-author Aengus Bridgman, a PhD Candidate in Political Science at McGill University under the supervision of Dietlind Stolle.

Results showed that, compared to traditional news media, false or inaccurate information about COVID-19 is circulated more on social media platforms like Twitter. The researchers point to a big difference in the behaviours and attitudes of people who get their news from social media versus news media - even after taking into account demographics as well as factors like scientific literacy and socio-economic differences. Canadians who regularly consume social media are less likely to observe social distancing and to perceive COVID-19 as a threat, while the opposite is true for people who get their information from news media.

"There is growing evidence that misinformation circulating on social media poses public health risks," says co-author Taylor Owen, an Associate Professor at the Max Bell School of Public Policy at McGill University. "This makes it even more important for policy makers and social media platforms to flatten the curve of misinformation."

Credit: 
McGill University

How a crystalline sponge sheds water molecules

image: A microscope image showing a porous, crystalline material called a metal-organic framework, or MOF (the material in purple). This MOF is made from cobalt(II) sulfate heptahydrate, 5-aminoisophthalic acid and 4,4'-bipyridine, and it is shown in its hydrated state.

Image: 
Travis Mitchell

BUFFALO, N.Y. -- How does water leave a sponge?

In a new study, scientists answer this question in detail for a porous, crystalline material made from metal and organic building blocks -- specifically, cobalt(II) sulfate heptahydrate, 5-aminoisophthalic acid and 4,4'-bipyridine.

Using advanced techniques, researchers studied how this crystalline sponge changed shape as it went from a hydrated state to a dehydrated state. The observations were highly detailed, allowing the team to "see" when and how three individual water molecules left the material as it dried out.

Crystalline sponges of this kind belong to a class of materials called metal-organic frameworks (MOFs), which hold potential for applications such as trapping pollutants or storing fuel at low pressures.

"This was a really nice, detailed example of using dynamic in-situ x-ray diffraction to study the transformation of a MOF crystal," says Jason Benedict, PhD, associate professor of chemistry in the University at Buffalo College of Arts and Sciences. "We initiate a reaction -- a dehydration. Then we monitor it with x-rays, solving crystal structures, and we can actually watch how this material transforms from the fully hydrated phase to the fully dehydrated phase.

"In this case, the hydrated crystal holds three independent water molecules, and the question was basically, how do you go from three to zero? Do these water molecules leave one at time? Do they all leave at once?

"And we discovered that what happens is that one water molecule leaves really quickly, which causes the crystal lattice to compress and twist, and the other two molecules wind up leaving together. They leak out at the same time, and that causes the lattice to untwist but stay compressed. All of that motion that I'm describing -- you wouldn't have any insight into that kind of motion in the absence of these sort of experiments that we are performing."

The research was published online on June 23 in the journal Structural Dynamics. Benedict led the study with first authors Ian M. Walton and Jordan M. Cox, UB chemistry PhD graduates. Other scientists from UB and the University of Chicago also contributed to the project.

Understanding how the structures of MOFs morph -- step by step -- during processes like dehydration is interesting from the standpoint of basic science, Benedict says. But such knowledge could also aid efforts to design new crystalline sponges. As Benedict explains, the more researchers can learn about the properties of such materials, the easier it will be to tailor-make novel MOFs geared toward specific tasks.

The technique the team developed and employed to study the crystal's transformation provides scientists with a powerful tool to advance research of this kind.

"Scientists often study dynamic crystals in an environment that is static," says co-author Travis Mitchell, a chemistry PhD student in Benedict's lab. "This greatly limits the scope of their observations to before and after a particular process takes place. Our findings show that observing dynamic crystals in an environment that is also dynamic allows scientists to make observations while a particular process is taking place. Our group developed a device that allows us to control the environment relative to the crystal: We are able to continuously flow fluid around the crystal as we are collecting data, which provides us with information about how and why these dynamic crystals transform."

The study was supported by the National Science Foundation (NSF) and U.S. Department of Energy, including through the NSF's ChemMatCARS facility, where much of the experimental work took place.

"These types of experiments often take days to perform on a laboratory diffractometer," Mitchell says. "Fortunately, our group was able to perform these experiments using synchrotron radiation at NSF's ChemMatCARS. With synchrotron radiation, we were able to make measurements in a matter of hours."

Credit: 
University at Buffalo

New current that transports water to major 'waterfall' discovered in deep ocean

image: The high-seas ferry MS Norröna, cited in the Nature Communications paper, measures upper ocean currents with an ADCP (acoustic Doppler current profiler) installed in its hull as it makes a weekly roundtrip between Denmark, the Faroe Islands and Iceland.

Image: 
Erik Christensen - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=45707529

An international team discovered a previously unrecognized ocean current that transports water to one of the world's largest "waterfalls" in the North Atlantic Ocean: the Faroe Bank Channel Overflow into the deep North Atlantic. While investigating the pathways that water takes to feed this major waterfall, the research team identified a surprising path of the cold and dense water flowing at depth, which led to the discovery of this new ocean current.

"This new ocean current and the path it takes toward the Faroe Bank Channel are exciting findings," said Léon Chafik, the lead author of the paper published in Nature Communications and a research scientist at Stockholm University, Sweden.

"The two discoveries reported here, in one of the best studied areas of the world ocean, is a stark reminder that we still have much to learn about the Nordic Seas," said co-author Thomas Rossby, emeritus professor at the URI Graduate School of Oceanography. "This is crucial given the absolutely fundamental role they play in the major glacial-interglacial climate swings."

Previous studies dealing with this deep flow have long assumed that these cold waters, which flow along the northern slope of the Faroes, turn directly into the Faroe-Shetland Channel (the region the water flows through before reaching the Faroe Bank Channel). Instead, Chafik and the paper's co-authors show that there exists another path into the Faroe-Shetland Channel. They show that water can take a longer path all the way to the continental margin outside Norway before turning south toward this major waterfall. "Revealing this newly identified path from available observations was not a straightforward process and took us a good deal of time to piece together," said Chafik.

The researchers also found this new path depends on prevailing wind conditions. "It seems that the atmospheric circulation plays a major role in orchestrating the identified flow regimes," added Chafik.

The study further reveals that much of the water that will end up in the Faroe Bank Channel is not in fact transported along the western side of the Faroe-Shetland Channel, as previously thought. Instead, most of this water comes from the eastern side of the Faroe-Shetland Channel, where it is transported by a jet-like and deep-reaching ocean current. "This was a curious but very exciting finding, especially since we are aware that a very similar flow structure exists in the Denmark Strait. We are pleased that we were able to identify this new ocean current both in observations and in a high-resolution ocean general circulation model," said Chafik.

"Because this newly discovered flow path and ocean current play an important part in the ocean circulation at higher latitudes, its discovery adds to our limited understanding of the overturning circulation in the Atlantic Ocean," said Chafik. "This discovery would not have been possible without many institutional efforts over the years."

Credit: 
University of Rhode Island

Phillips group exactly solves experimental puzzle in high temperature superconductivity

image: Pictured left to right, Illinois Physics Professor Philip Phillips, ICMT postdoctoral fellow Edwin Huang, and Illinois Physics graduate student Luke Yeo pose for a photo on the Bardeen Quad at the University of Illinois at Urbana-Champaign.

Image: 
L. Brian Stauffer, University of Illinois at Urbana-Champaign

Urbana, IL--Forty-five years after superconductivity was first discovered in metals, the physics giving rise to it was finally explained in 1957 at the University of Illinois at Urbana-Champaign, in the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity.

Thirty years after that benchmark achievement, a new mystery confronted condensed matter physicists: the discovery in 1987 of copper-oxide or high-temperature superconductors. Now commonly known as the cuprates, this new class of materials demonstrated physics that fell squarely outside of BCS theory. The cuprates are insulators at room temperature, but transition to a superconducting phase at a much higher critical temperature than traditional BCS superconductors. (The cuprates' critical temperature can be as high as 170 Kelvin--that's -153.67°F--as opposed to the much lower critical temperature of 4 Kelvin--or -452.47°F--for mercury, a BCS superconductor.)
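
For readers who want to check the conversions, Kelvin translates to Fahrenheit through the standard formula

    \[
    T_{\mathrm{F}} = \left(T_{\mathrm{K}} - 273.15\right)\times\tfrac{9}{5} + 32,
    \qquad
    170\ \mathrm{K} \approx -153.7\,^{\circ}\mathrm{F},
    \qquad
    4\ \mathrm{K} \approx -452.5\,^{\circ}\mathrm{F}.
    \]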

The discovery of high-temperature superconductors, now more than 30 years ago, seemed to promise that a host of new technologies were on the horizon. After all, the cuprates' superconducting phase can be reached using liquid nitrogen as a coolant, instead of the far costlier and rarer liquid helium required to cool BCS superconductors. But until the unusual and unexpected superconducting behavior of these insulators can be theoretically explained, that promise remains largely unfulfilled.

An outpouring of both experimental and theoretical physics research has sought to uncover a satisfactory explanation for superconductivity in the cuprates. But today, this remains perhaps the most pressing unsolved question in condensed matter physics.

Now a team of theoretical physicists at the Institute for Condensed Matter Theory (ICMT) in the Department of Physics at the University of Illinois at Urbana-Champaign, led by Illinois Physics Professor Philip Phillips, has for the first time exactly solved a representative model of the cuprate problem, the 1992 Hatsugai-Kohmoto (HK) model of a doped Mott insulator.

The team published its findings online in the journal Nature Physics on July 27, 2020.

"Aside from the obvious difference in superconducting temperatures, the cuprates start off their lives as Mott insulators, in which the electrons do not move independently as in a metal, but rather are strongly interacting," explains Phillips. "It is the strong interactions that make them insulate so well."

In their research, Phillips' team solves exactly the analog of the "Cooper pairing" problem from BCS theory, but now for a doped Mott insulator.

What is "Cooper pairing"? Leon Cooper demonstrated this key element of BCS theory: the normal state of a traditional superconducting metal is unstable to an attractive interaction between pairs of electrons. At a BCS superconductor's critical temperature, Cooper pairs of electrons travel without resistance through the metal--this is superconductivity!

"This is the first paper to show exactly that a Cooper instability exists in even a toy model of a doped Mott insulator," notes Phillips. "From this we show that superconductivity exists and that the properties differ drastically from the standard BCS theory. This problem had proven so difficult, only numerical or suggestive phenomenology was possible before our work."

Phillips credits ICMT postdoctoral fellow Edwin Huang with writing the analogue of the BCS wave function for the superconducting state in the Mott problem.

"The wave function is the key thing that you have to have to say a problem is solved," Phillips says. "John Robert Schrieffer's wave function turned out to be the computational workhorse of the whole BCS theory. All the calculations were done with it. For interacting electron problems, it is notoriously difficult to write a wave function. In fact, so far only two wave functions have been computed that describe interacting states of matter, one by Robert Laughlin in the fractional quantum Hall effect, and the other by Schrieffer in the context of BCS theory. So the fact that Edwin was able to do this for this problem is quite a feat."

Asked why the cuprates have proven such a mystery to physicists, Phillips explains, "In fact, it is the strong interactions in the Mott state that has prevented a solution to the problem of superconductivity in the cuprates. It has been difficult even to demonstrate the analogue of Cooper's pairing problem in any model of a doped Mott insulator."

Huang's Mott insulator wave function further enabled Phillips, Huang, and physics graduate student Luke Yeo to solve a key experimental puzzle in the cuprates, known as "the color change." Unlike metals, the cuprates exhibit an enhanced absorption of radiation at low energies with a concomitant decrease in absorption at high energies. Phillips' team has shown that this behavior arises from the remnants of what Phillips calls "Mott physics" or "Mottness" in the superconducting state.

Mottness is a term coined by Phillips to encapsulate certain collective properties of Mott insulators, first predicted shortly after World War II by British physicist and Nobel laureate Nevill Francis Mott.

In addition, the researchers have shown that the superfluid density, which has been observed to be suppressed in the cuprates relative to its value in metals, is also a direct consequence of the material's Mottness.

Further, Phillips' team has gone beyond the Cooper problem to demonstrate that the model has superconducting properties that lie outside that of BCS theory.

"For example," Phillips explains, "the ratio of the transition temperature to the energy gap in the superconducting state vastly exceeds that in the BCS theory. In addition, our work shows that the elementary excitations in the superconducting state also lie outside the BCS paradigm as they arise from the wide range of energy scales intrinsic to the Mott state."

Credit: 
University of Illinois Grainger College of Engineering

Study: A plunge in incoming sunlight may have triggered 'snowball earths'

At least twice in Earth's history, nearly the entire planet was encased in a sheet of snow and ice. These dramatic "Snowball Earth" events occurred in quick succession, somewhere around 700 million years ago, and evidence suggests that the consecutive global ice ages set the stage for the subsequent explosion of complex, multicellular life on Earth.

Scientists have considered multiple scenarios for what may have tipped the planet into each ice age. While no single driving process has been identified, it's assumed that whatever triggered the temporary freeze-overs must have done so in a way that pushed the planet past a critical threshold, such as reducing incoming sunlight or atmospheric carbon dioxide to levels low enough to set off a global expansion of ice.

But MIT scientists now say that Snowball Earths were likely the product of "rate-induced glaciations." That is, they found the Earth can be tipped into a global ice age when the level of solar radiation it receives changes quickly over a geologically short period of time. The amount of solar radiation doesn't have to drop to a particular threshold point; as long as the decrease in incoming sunlight occurs faster than a critical rate, a temporary glaciation, or Snowball Earth, will follow.

These findings, published in the Proceedings of the Royal Society A, suggest that whatever triggered the Earth's ice ages most likely involved processes that quickly reduced the amount of solar radiation coming to the surface, such as widespread volcanic eruptions or biologically induced cloud formation that could have significantly blocked out the sun's rays.

The findings may also apply to the search for life on other planets. Researchers have been keen on finding exoplanets within the habitable zone -- a distance from their star that would be within a temperature range that could support life. The new study suggests that these planets, like Earth, could also ice over temporarily if their climate changes abruptly. Even if they lie within a habitable zone, Earth-like planets may be more susceptible to global ice ages than previously thought.

"You could have a planet that stays well within the classical habitable zone, but if incoming sunlight changes too fast, you could get a Snowball Earth," says lead author Constantin Arnscheidt, a graduate student in MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS). "What this highlights is the notion that there's so much more nuance in the concept of habitability."

Arnscheidt has co-authored the paper with Daniel Rothman, EAPS professor of geophysics, and co-founder and co-director of the Lorenz Center.

A runaway snowball

Regardless of the particular processes that triggered past glaciations, scientists generally agree that Snowball Earths arose from a "runaway" effect involving an ice-albedo feedback: As incoming sunlight is reduced, ice expands from the poles to the equator. As more ice covers the globe, the planet becomes more reflective, or higher in albedo, which further cools the surface for more ice to expand. Eventually, if the ice reaches a certain extent, this becomes a runaway process, resulting in a global glaciation.

Global ice ages on Earth are temporary in nature, due to the planet's carbon cycle. When the planet is not covered in ice, levels of carbon dioxide in the atmosphere are somewhat controlled by the weathering of rocks and minerals. When the planet is covered in ice, weathering is vastly reduced, so that carbon dioxide builds up in the atmosphere, creating a greenhouse effect that eventually thaws the planet out of its ice age.

Scientists generally agree that the formation of Snowball Earths has something to do with the balance between incoming sunlight, the ice-albedo feedback, and the global carbon cycle.

"There are lots of ideas for what caused these global glaciations, but they all really boil down to some implicit modification of solar radiation coming in," Arnscheidt says. "But generally it's been studied in the context of crossing a threshold."

He and Rothman had previously studied other periods in Earth's history in which the speed, or rate, at which certain changes in climate occurred had a role in triggering events, such as past mass extinctions.

"In the course of this exercise, we realized there was an immediate way to make a serious point by applying such ideas of rate-induced tipping, to Snowball Earth and habitability," Rothman says.

"Be wary of speed"

The researchers developed a simple mathematical model of the Earth's climate system that includes equations to represent relations between incoming and outgoing solar radiation, the surface temperature of the Earth, the concentration of carbon dioxide in the atmosphere, and the effects of weathering in taking up and storing atmospheric carbon dioxide. The researchers were able to tune each of these parameters to observe which conditions generated a Snowball Earth.
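
The press release does not give the model equations, but the ingredients described above - incoming and outgoing radiation, an ice-albedo feedback and a slowly responding weathering/CO2 term - can be caricatured in a few lines of Python. The sketch below is a hypothetical zero-dimensional illustration of rate-induced tipping, not the model of Arnscheidt and Rothman; every functional form, parameter value and name in it is an assumption chosen only to make the qualitative point.

    import numpy as np

    # Zero-dimensional caricature: a fast surface temperature T (K) coupled to a slowly
    # adjusting greenhouse forcing G (W/m^2) that stands in for the weathering/CO2 feedback.
    # All numbers are illustrative placeholders, not values from the study.
    S0 = 1360.0          # solar constant (W/m^2)
    A, B = 206.0, 2.0    # linearized outgoing longwave radiation: OLR = A + B*(T - 273)
    C_HEAT = 1.0e10      # effective heat capacity (J m^-2 K^-1)
    TAU_CARBON = 1.0e5   # assumed CO2/weathering adjustment timescale (years)
    SECONDS_PER_YEAR = 3.15e7

    def albedo(temp_k):
        """Smooth ice-albedo feedback: albedo rises from 0.3 to 0.6 as the planet cools."""
        return 0.3 + 0.3 / (1.0 + np.exp((temp_k - 280.0) / 2.0))

    def run(ramp_years, drop_frac=0.02, settle_years=100_000, dt_years=10.0):
        """Ramp incoming sunlight down by drop_frac over ramp_years, then hold it fixed."""
        temp, ghg = 288.0, 0.0
        for step in range(int((ramp_years + settle_years) / dt_years)):
            t = step * dt_years
            solar = S0 * (1.0 - drop_frac * min(t / ramp_years, 1.0))
            absorbed = solar * (1.0 - albedo(temp)) / 4.0
            olr = A + B * (temp - 273.0)
            temp += (absorbed + ghg - olr) / C_HEAT * dt_years * SECONDS_PER_YEAR
            # Slow negative feedback: cooling suppresses weathering, so CO2 forcing builds up.
            ghg += (288.0 - temp) / TAU_CARBON * dt_years
        return temp

    # The same total reduction in sunlight, applied quickly versus slowly (rates are arbitrary).
    print("fast ramp (10 kyr): final T =", round(run(ramp_years=1.0e4), 1), "K")
    print("slow ramp (1 Myr):  final T =", round(run(ramp_years=1.0e6), 1), "K")

The intended behaviour is that the same total reduction leaves the planet warm when applied slowly, because the sluggish carbon-cycle term has time to compensate, but tips it into a frozen state when applied quickly - mirroring the rate dependence described below.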

Ultimately, they found that a planet was more likely to freeze over if incoming solar radiation decreased quickly, at a rate that was faster than a critical rate, rather than to a critical threshold, or particular level of sunlight. There is some uncertainty in exactly what that critical rate would be, as the model is a simplified representation of the Earth's climate. Nevertheless, Arnscheidt estimates that the Earth would have to experience about a 2 percent drop in incoming sunlight over a period of about 10,000 years to tip into a global ice age.

"It's reasonable to assume past glaciations were induced by geologically quick changes to solar radiation," Arnscheidt says.

The particular mechanisms that may have quickly darkened the skies over tens of thousands of years are still up for debate. One possibility is that widespread volcanoes may have spewed aerosols into the atmosphere, blocking incoming sunlight around the world. Another is that primitive algae may have evolved mechanisms that facilitated the formation of light-reflecting clouds. The results from this new study suggest scientists may consider processes such as these, which quickly reduce incoming solar radiation, as more likely triggers for Earth's ice ages.

"Even though humanity will not trigger a snowball glaciation on our current climate trajectory, the existence of such a 'rate-induced tipping point' at the global scale may still remain a cause for concern," Arnscheidt points out. "For example, it teaches us that we should be wary of the speed at which we are modifying Earth's climate, not just the magnitude of the change. There could be other such rate-induced tipping points that might be triggered by anthropogenic warming. Identifying these and constraining their critical rates is a worthwhile goal for further research."

Credit: 
Massachusetts Institute of Technology

New technique enables mineral ID of precious Antarctic micrometeorites

video: Small rock samples (0.2-0.8 mm) that contain minerals important for identifying rocky meteorites are tested using newly developed technology.

Image: 
Naoya Imae, NIPR

The composition of Antarctic micrometeorites -- and other tiny but precious rocks, such as those from space missions -- is really hard to analyze without some sample loss. But a new technique should make it easier, cheaper and faster to characterize them while preserving more of the sample. The findings were published in the peer-reviewed journal Meteoritics & Planetary Science on May 21.

Some 40,000 tons of micrometeorites, less than a millimeter in diameter, bombard the earth every year. Analyzing the composition of this type of cosmic dust can potentially reveal many secrets about the evolution of our solar system. They land everywhere on the planet, but we can't tell them apart from regular dust. Antarctic micrometeorites (AMMs) are special because this cleaner environment makes them easier to distinguish--but because Antarctica is such a remote and challenging place, AMM samples are very precious.

X-ray diffraction, one of the main techniques used to identify the composition of a material, mainly depends upon X-rays produced at laboratories with synchrotrons, a type of particle accelerator, which is expensive and not always convenient.

This method is also challenging if, as is common in the case of AMMs, researchers only have a very small sample of the material to be investigated and want to avoid significant sample loss.
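
Whether the X-rays come from a synchrotron or a laboratory source, phase identification by diffraction rests on Bragg's law, quoted here as general background rather than as anything specific to the new paper:

    \[
    n\lambda = 2d\sin\theta,
    \]

where λ is the X-ray wavelength, d the spacing between crystal lattice planes, θ the diffraction angle and n an integer. The set of d-spacings measured from a sample acts as a fingerprint of the minerals it contains, such as the olivine and pyroxene used in the tests described below.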

However, researchers with Japan's National Institute of Polar Research have now applied a different--and actually quite old--technique to such objects, which opens up the opportunity of much more convenient and cheaper identification of them than has previously been available, while also conserving more of the sample.

In the late 1960s, a Gandolfi X-ray diffraction camera that could rotate on two axes began to be used in X-ray crystallography, the experimental science of investigating materials by determining the molecular structure of the crystals that many materials are made of.

"There are a handful of different X-ray diffraction techniques, including using a vacuum tube that converts electrical energy into X-rays," says Naoya Imae Ph.D., a researcher who worked on applying the Gandolfi x-ray diffraction method to micro-samples, "but a Gandolfi set-up is just much easier to use and much faster."

Until now, the Gandolfi set-up had not been widely used for identification of micrometeorites.

The researchers attached a Gandolfi system to an X-ray diffractometer that had recently been delivered to the National Institute of Polar Research, and tested their set-up on very small rock samples (0.2-0.8 mm) that contained olivine and pyroxene, two minerals that are important for the identification of rocky meteorites.

The set-up worked best with rock samples in the form of powders rather than "bulk" agglomerations of grains of mineral crystals.

With the test on known rock samples proving successful, the researchers now want to apply the technique to actual AMMs and to samples collected from the near-Earth asteroid 162173 Ryugu by the Hayabusa 2 mission, which is expected to return to Earth later this year.

Credit: 
Research Organization of Information and Systems

Iron deficiency during infancy reduces vaccine efficacy

Despite the fact that global immunisation programmes are now reaching more people than ever, about 1.5 million children still die every year from diseases that vaccination could have prevented. Vaccination is also less effective in low-income countries than in high-income countries, although it is not yet clear why.

Babies have smaller iron reserves

Findings from two clinical studies with children in Kenya now suggest that iron deficiency during infancy may reduce the protection that vaccinations provide. In their first study, the research group led by Michael Zimmermann from the Department of Health Sciences and Technology worked in collaboration with scientists from Kenya, the UK, the Netherlands and the US. Their aim was to determine the levels of body iron and antibodies against antigens from the administered vaccines in blood samples of 303 Kenyan children followed from birth to age 18 months.

"In Switzerland, babies are born with iron stores that are normally sufficient for their first six months of life," says Zimmermann, Professor of Human Nutrition. "But in Kenya and other sub-?Saharan countries, iron reserves in babies are much lower, especially in those born to anaemic mothers or with a low birth weight." This is aggravated by infections and bloody diarrhoea, and their iron reserves are often exhausted after two to three months, he explains.

More than twice the risk

The study showed that more than half the children were already suffering from anaemia at the age of 10 weeks, and by 24 weeks, more than 90 percent had low haemoglobin and red blood cell counts. Using statistical analyses, the researchers, led by Zimmermann, were able to show the following: despite several vaccinations, the risk of finding a lack of protective antibodies against diphtheria, pneumococci and other pathogens in the blood of 18-month-olds was more than twice as high in anaemic infants compared to those who were not anaemic.

In a second study, the researchers administered a powder containing micronutrients to 127 infants slightly over six months old on a daily basis for four months. In 85 of these children, the powder also contained iron; the other 42 children received no iron supplement. When the children were vaccinated against measles at the age of nine months - as stipulated by the Kenyan vaccination schedule - those children who also received iron as a dietary supplement developed a stronger immune response in two respects: not only did they have more measles antibodies in their blood at the age of 12 months, but their antibodies were also better at recognising the pathogens.

Iron as a dietary supplement to stave off anaemia

The World Health Organization (WHO) recommends feeding infants exclusively with breastmilk for the first six months to avoid infection with diseases transmitted in contaminated water. For that reason, Zimmermann and his research team did not give the children the dietary supplement powder until they were seven months old, although most of the vaccinations had generally been administered by this point; the measles vaccination was the only exception.

However, Zimmermann says that many places have made great progress with their water supply and health systems in recent years, which is why discussions in professional circles about updating the WHO's recommendation are becoming ever more important. He believes that adapting the recommendation would be a good move because preventing anaemia in young children by supplementing the iron in their diet would improve the protection provided by other vaccinations. In turn, this may help to prevent many of the 1.5 million annual deaths due to vaccine-preventable diseases.

Credit: 
ETH Zurich

Probing the properties of magnetic quasi-particles

Researchers have for the first time measured a fundamental property of magnets called magnon polarisation -- and in the process, are making progress towards building low-energy devices.

Magnon polarisation has been a theoretical idea in physics for almost 100 years, but no one had proved its existence.

Scientists at the University of Leeds and the Tohoku University in Japan set out to try and show it exists by measuring it. Their findings have just been published in the journal Physical Review Letters (https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.027201).

Magnons are quasi-particles inside magnetic materials which are in a continuous process of creation and destruction. They are polarised, which allows them to be distinguished as clockwise or anticlockwise (circular polarisation), up or down, and left or right (linear polarisation).

There is intense interest in the polarisation properties of magnons because physicists believe it could be exploited for transporting information in low-energy electrical devices, a field of study called spintronics.

The scientists aimed to measure magnon polarisation in one of the most frequently used magnets in spintronics research, the compound yttrium iron garnet. In many magnets, only anticlockwise magnons exist. But in yttrium iron garnet, both anticlockwise and clockwise polarised magnons were predicted, making it a particularly exciting material to measure.

The team set out to make this measurement using polarised neutron scattering. This involves preparing neutrons in a specific quantum spin state ("up" or "down") and firing them at a magnet in a focused beam.

In the experiment, most neutrons passed straight through the magnet, not interacting at all - making measurements particularly difficult. But, a small number of the neutrons collided with magnons and scattered out of the magnet in all directions. A detector measured the neutrons as they flew out of the sample. By analysing the location, energy and final spin state of the neutrons, the magnon properties were revealed.

Crucially in this work, by comparing the spin state of the neutrons before and after the scattering, the clockwise or anti-clockwise polarisation of the magnons was determined.
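
The kinematics behind such an analysis are the standard conservation laws of inelastic neutron scattering, stated here as general background rather than as the paper's specific procedure:

    \[
    \hbar\omega = E_{i} - E_{f},
    \qquad
    \mathbf{Q} = \mathbf{k}_{i} - \mathbf{k}_{f},
    \]

so the energy and momentum carried away by a created magnon (or gained from an annihilated one) follow from the measured change in the neutron's energy and direction, while a flip, or not, of the prepared neutron spin reflects the angular momentum exchanged and hence the magnon's polarisation.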

Dr Joseph Barker, from the School of Physics and Astronomy at Leeds, said: "In Physics, theories remain as predictions until experimental measurements confirm if they are correct or not. A famous example is the search for the Higgs Boson, but there are many untested theories across the sciences.

"Magnon polarisation has recently become an important topic in spintronics so it was the perfect time to try and measure it and verify that it exists."

Dr Barker added: "The experiments and analysis were difficult and complex. In fact, it took two attempts, once in the United States and then in France, to perfect the experimental method.

"We also had to create a precise computer model to ensure we understood what we were seeing correctly because the neutron scattering measurements come from a series of physical processes which cannot be untangled into individual parts."

Researchers can now focus their studies on how to exploit the polarisation of magnons for making new types of spintronic devices for low energy technology.

The research was funded by The Royal Society, Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research, JST ERATO, Tohoku University GP-Spin Program, US Department of Energy and the US-Japan Cooperative Program on neutron scattering.

Credit: 
University of Leeds

Increased attention to sad faces predicts depression risk in teenagers

BINGHAMTON, NY -- Teenagers who tend to pay more attention to sad faces are more likely to develop depression, but specifically within the context of stress, according to new research from Binghamton University, State University of New York.

Researchers at Binghamton University, led by graduate student Cope Feurer and Professor of Psychology Brandon Gibb, aimed to examine whether attentional biases to emotional stimuli, assessed via eye tracking, serve as a marker of risk for depression for teenagers.

"Although previous studies from the lab have examined who is most likely to show biased attention to sad faces and whether attention to sad faces is associated with risk for depression, the current study is the first to look at whether these attention biases impact how teenagers respond to stress, both in the lab and in the real world," said Feurer.

Biased attention to sad faces is associated with depression in adults and is hypothesized to increase depression risk specifically in the presence, but not absence, of stress by modulating stress reactivity. However, few studies have tested this hypothesis, and no studies have examined the relation between attentional biases and stress reactivity during adolescence, despite evidence that this developmental window is marked by significant increases in stress and depression risk.

Seeking to address these limitations, the new study examined the impact of adolescents' sustained attention to facial displays of emotion on individual differences in both mood reactivity to real-world stress and physiological reactivity to a laboratory-based stressor. Consistent with vulnerability-stress models of attention, greater sustained attention to sad faces was associated with greater depressive reactions to real-world stress.

"If a teenager has a tendency to pay more attention to negative stimuli, then when they experience something stressful they are likely to have a less adaptive response to this stress and show greater increases in depressive symptoms," said Feurer. "For example, if two teenagers both have a fight with a friend and one teenager spends more time paying attention to negative stimuli (i.e., sad faces) than the other, then that teenager may show greater increases in depressive symptoms in response to the stressor, potentially because they are paying more attention to the stressor and how the stressor makes them feel."

The researchers believe that the biological mechanism behind this finding lies in the brain's ability to control emotional reactivity.

"Basically, if the brain has difficulty controlling how strongly a teenager responds to emotions, this makes it harder for them to look away from negative stimuli and their attention gets 'stuck,'" said Feurer. "So, when teenagers who tend to pay more attention to sad faces experience stress, they may respond more strongly to this stress, as they have difficulty disengaging their attention from negative emotions, leaving these teens at increased risk for depression."

"This is also why we believe that findings were stronger for older than younger adolescents. Specifically, the brain becomes more effective at controlling emotional reactivity as teens get older, so it may be that being able to look away from negative stimuli doesn't protect against the impact of stress until later adolescence."

There is increasing research showing that the way teenagers pay attention to emotional information can be modified through intervention, and that changing attention biases can reduce risk for depression. The current study highlights attention toward sad faces as a potential target for intervention, particularly among older teenagers, said Feurer.

The researchers recently submitted a grant that would let them look at how these attention biases change across childhood and adolescence.

"This will help us better understand how this risk factor develops and how it increases risk for depression in youth," said Gibb. "Hopefully, this will help us to develop interventions to identify risk for these types of biases so that they can be mitigated before they lead to depression."

Credit: 
Binghamton University

Text messaging: The next gen of therapy in mental health

In the U.S., it is estimated that approximately 19 percent of all adults have a diagnosable mental illness. Clinic-based services for mental health may fall short of meeting patient needs for many reasons including limited hours, difficulty accessing care and cost. In the first randomized controlled trial of its kind, a research team investigated the impact of a texting intervention as an add-on to a mental health treatment program versus one without texting. A text-messaging-based intervention can be a safe, clinically promising and feasible tool to augment care for people with serious mental illness, according to a new study published in Psychiatric Services.

Ninety-one percent of participants found the text-messaging acceptable, 94 percent indicated that it made them feel better and 87 percent said they would recommend it to a friend.

"This study is very exciting because we saw real improvement in those who utilized the text messaging-based intervention on top of normal care. This was true for individuals with some of the most serious forms of mental illness," explained co-author, William J. Hudenko, a research assistant professor in the department of psychological and brain sciences at Dartmouth, and an adjunct assistant professor of clinical psychology in Dartmouth's Geisel School of Medicine. "The results are promising, and we anticipate that people with less severe psychopathology may even do better with this type of mobile intervention."

With the COVID-19 pandemic, many people's schedules have been upended, which may prevent individuals with mental illness from having routine access to a therapist, such as parents who have children at home. "Texting can help bridge this gap, by providing a means for mental health services to be continuously delivered. A text-messaging psychotherapy is an excellent match for the current environment, as it provides asynchronous contact with a mental health therapist while increasing the amount of contact that an individual can have," explained Hudenko.

For the study, the research team examined the impact of text-messaging as an add-on to an assertive community treatment program versus the latter alone. Through an assertive community treatment program, those with serious mental illness have a designated team who helps them with life skills, such as finding a job and housing, and managing medications, as well as providing daily, in-person clinic-based services. People with serious mental illness are nevertheless likely to experience symptoms each day for which they may need additional therapy. The study was a three-month, assessor-blind pilot. There were 49 participants: 62 percent had schizophrenia/schizoaffective disorder, 24 percent had bipolar disorder and 14 percent had depression. Assessments were conducted at baseline, post-trial (three months later) and at a follow-up six months later.

Licensed mental health clinicians served as the mobile interventionists. They received a standard training program on how to engage effectively and in a personal way with participants. The mobile interventionists were monitored on a weekly basis to ensure that they were adhering to the treatment protocol. Throughout the trial, over 12,000 messages were sent, and every message was encoded, monitored and discussed with a clinician.

The results demonstrated that 95 percent of participants initiated the intervention and texted on 69 percent of possible days, with an average of four texts per day. On average, participants sent roughly 165 or more text messages and received 158 or more messages. The intervention was found to be safe, as there were zero adverse events reported.

Today, there are more than 575,000 mental health therapists in the U.S. By 2025, the U.S. Department of Health and Human Services estimates that the country will be over 250,000 therapists short. "A messaging-based intervention is an incredibly scalable, cost-effective way to help manage the enormous shortage of mental health capability in the U.S.," added Hudenko.

The researchers are planning to study the impact of a messaging intervention in mental health on a much larger scale.

Credit: 
Dartmouth College

Scientists prove bird ovary tissue can be preserved in fossils

image: Photograph of the fossil bird with ovarian follicles from the Jehol Biota of China and comparison with a chicken.

Image: 
Alida Bailleul

A research team led by Dr. Alida Bailleul from the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) of the Chinese Academy of Sciences has put one controversy to rest: whether or not remnants of bird ovaries can be preserved in the fossil record.

According to the team's study published in Communications Biology on July 28, the answer to the question is "yes, they can."

The Early Cretaceous Jehol Biota is known for its exceptional avian fossils, which include thousands of nearly complete skeletons preserved fully articulated and often with associated soft tissues. Most commonly, feathers are preserved, but rare traces of organs are also sometimes fossilized.

In 2013, a group of IVPP scientists described several early bird specimens, which they interpreted as preserving maturing ovarian follicles (the egg yolk contained within a thin membrane prior to ovulation and eggshell formation).

The traces consisted of a single cluster of circular objects preserved on the left side of all specimens, just below the last thoracic vertebrae. This finding was particularly interesting from an evolutionary point of view, because modern birds only have one functional ovary, the left, whereas most other extant vertebrates have - in normal circumstances - two functional ovaries.

Available fossil evidence indicates oviraptorosaurs – dinosaurs fairly closely related to birds – had two functional ovaries. This means birds lost the function of their right ovary at some point in their evolution. But when? If the interpretation is correct, it would mean that the function of the right ovary was lost very early in the evolution of birds, i.e., more than 120 million years ago.

However, these discoveries are controversial. In fact, several authors expressed their doubts regarding the validity of the original interpretation, proposing instead that these circular traces are actually the ingested remains of plants. Understanding the identity of these controversial traces is thus important for understanding reproductive evolution in birds, diet in enantiornithines and confuciusornithiforms (two groups of early birds) and the preservation potential of the Jehol Biota.

In order to explore the identity of the controversial traces, the team led by Dr. Bailleul extracted remains of the purported follicles from one enantiornithine and studied them using an arsenal of analytical methods including scanning electron microscopy, energy dispersive spectroscopy, traditional ground-sectioning techniques and histochemical stains applied to demineralized fossil tissues and extant hen follicles for comparison.

The results show that the tissues preserved in the fossils are virtually identical to the tissues surrounding developing egg yolks in extant birds. More precisely, Dr. Bailleul demonstrates that the fossil traces partially consist of a contractile, muscular and vascularized structure that expels follicles during ovulation (the chordae). Remnants of smooth muscle fibers, collagen fibers, and blood vessels were identified, features all consistent with the original interpretation and incompatible with the ingested seed hypothesis.

Although Dr. Bailleul has only tested one specimen with purported ovarian traces so far, these results prove that ovarian follicles can be preserved in fossils more than 120 million years old and confirm that at least some enantiornithines had a single functional ovary and oviduct. However, unlike those of modern birds, their follicles developed slowly, in keeping with their lower metabolic rate.

This research sets a new standard for studies on fossilized soft tissues in the Jehol, demonstrating that such traces can be studied on a level similar to that of extant tissues if they are exceptionally well preserved.

Credit: 
Chinese Academy of Sciences Headquarters

Government urgently needs to gauge public perception of new track and trace app

video: New research suggests that the Government urgently needs to assess public priorities and attitudes towards the track and trace app before it is rolled out.

Image: 
Lancaster University Management School

Governments urgently need to understand public priorities before they roll out track and trace, according to new research.

One of the earliest studies to look at mass acceptance of tracing apps suggests that privacy - the factor generally considered most important by governments in their approach to the new technology - is the top consideration for only a certain group of people. Others would place greater weight on other considerations, such as how convenient the app would be to use. Promoting the wider societal benefits of an app has also proven more effective than focusing on benefits to a user's own health - suggesting governments should appeal to citizens' altruistic motives.

These differing approaches and perceptions mean that the UK as a whole may not engage with the track and trace system being introduced in autumn in the same way, so the government may quickly need to gauge what we are all thinking before the design is finalised.

Professor Monideepa Tarafdar from Lancaster University Management School is one of the authors of the new study, published in the European Journal of Information Systems. She said: "More than half of the population must install and actively use the app in order for it to be effective. In light of the urgency of the situation, and the fact the government will roll it out voluntarily, getting a true understanding of how to get the masses to accept - and crucially, use - one single app, is the most important consideration for developers.

"Our study reveals that one app simply cannot fit all - so the government needs to understand what the majority of us think about the system in order for it to be successful."

The researchers led an experimental study in Germany with 518 participants* in April 2020, when a track and trace app had been announced by government, but before it was released. They presented different versions of a fictitious app to the individual participants, gauging responses to functionality and design. Results revealed that participants could be divided into three groups - Critics, the Undecided and Advocates** - each with differing propensities to use the app, and each valuing the app features very differently.

Professor Simon Trang from the University of Goettingen said: "For the critics and undecided amongst participants, privacy was a top consideration - but did not sway the advocates of a tracing app. Crucially, we found messaging around the app protecting the user's own health was either ineffective or, in some cases, counterproductive. To achieve mass acceptance, our results suggest that communication strategies should solely focus on societal benefits such as 'download the app and help to keep the population safe.'"

Participants who were undecided were more inclined to download and use the app if it was presented as something convenient to use, whereas this was not a strong consideration for the other groups.

Credit: 
Lancaster University