Tech

New design principles for spin-based quantum materials

As our lives become increasingly intertwined with technology -- whether supporting communication while working remotely or streaming our favorite show -- so too does our reliance on the data these devices create. Data centers supporting these technology ecosystems have a significant carbon footprint, consuming 200 terawatt-hours of energy each year -- more than the annual energy consumption of Iran. To balance ecological concerns yet meet growing demand, advances in microelectronic processors -- the backbone of many Internet of Things (IoT) devices and data hubs -- must be efficient and environmentally friendly.

Northwestern University materials scientists have developed new design principles that could help spur development of future quantum materials used to advance IoT devices and other resource-intensive technologies while limiting ecological damage.

"New path-breaking materials and computing paradigms are required to make data centers more energy-lean in the future," said James Rondinelli, professor of materials science and engineering and the Morris E. Fine Professor in Materials and Manufacturing at the McCormick School of Engineering, who led the research.

The study marks an important step in Rondinelli's efforts to create new materials that are non-volatile, energy efficient, and generate less heat -- important aspects of future ultrafast, low-power electronics and quantum computers that can help meet the world's growing demand for data.

Whereas conventional semiconductor transistors compute by manipulating the electron's charge, solid-state spin-based materials utilize the electron's spin and have the potential to support low-energy memory devices. In particular, materials with a high-quality persistent spin texture (PST) can exhibit a long-lived persistent spin helix (PSH), which can be used to track or control the spin-based information in a transistor.

Although many spin-based materials already encode information using spins, that information can be corrupted as the spins propagate through the active portion of the transistor. The researchers' novel PST protects that spin information in helix form, making it a potential platform for ultralow-energy, ultrafast spin-based logic and memory devices.

The research team used quantum-mechanical models and computational methods to develop a framework to identify and assess the spin textures in a group of non-centrosymmetric crystalline materials. The ability to control and optimize the spin lifetimes and transport properties in these materials is vital to realizing the future of quantum microelectronic devices that operate with low energy consumption.

"The limiting characteristic of spin-based computing is the difficulty in attaining both long-lived and fully controllable spins from conventional semiconductor and magnetic materials," Rondinelli said. "Our study will help future theoretical and experimental efforts aimed at controlling spins in otherwise non-magnetic materials to meet future scaling and economic demands."

Rondinelli's framework used microscopic effective models and group theory to identify three materials design criteria that would produce useful spin textures: carrier density (the number of electrons propagating through an effective magnetic field), Rashba anisotropy (the ratio between the intrinsic spin-orbit coupling parameters of the material), and momentum space occupation (the PST region active in the electronic band structure). These features were then assessed using quantum-mechanical simulations to discover high-performing PSHs in a range of oxide-based materials.
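For readers who want the underlying physics, the textbook starting point for such spin textures is a two-band model with linear Rashba and Dresselhaus spin-orbit terms. This is a generic sketch, not necessarily the exact Hamiltonian used in the study:

```latex
% Generic Rashba-Dresselhaus model for a two-dimensional electron system
H(\mathbf{k}) = \frac{\hbar^{2}k^{2}}{2m^{*}}
  + \alpha\,(k_{x}\sigma_{y} - k_{y}\sigma_{x})   % Rashba term
  + \beta\,(k_{x}\sigma_{x} - k_{y}\sigma_{y})    % linear Dresselhaus term
% When |\alpha| = |\beta|, the effective spin-orbit field points along a
% single fixed axis for every momentum k, so spins precess about one
% direction only: this is the persistent spin texture, and the "Rashba
% anisotropy" ratio \alpha/\beta measures how close a material sits to
% that ideal limit.
```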

The researchers used these principles and numerical solutions to a series of differential spin-diffusion equations to assess the spin texture of each material and predict the spin lifetimes for the helix in the strong spin-orbit coupling limit. They also found they could adjust and improve the PST performance using atomic distortions at the picoscale. The group determined an optimal PST material, Sr3Hf2O7, which showed a substantially longer spin lifetime for the helix than in any previously reported material.

"Our approach provides a unique chemistry-agnostic strategy to discover, identify, and assess symmetry-protected persistent spin textures in quantum materials using intrinsic and extrinsic criteria," Rondinelli said. "We proposed a way to expand the number of space groups hosting a PST, which may serve as a reservoir from which to design future PST materials, and found yet another use for ferroelectric oxides -- compounds with a spontaneous electrical polarization. Our work also will help guide experimental efforts aimed at implementing the materials in real device structures."

Credit: 
Northwestern University

Study quantifies Saharan dust reaching Amazon

image: A huge dust cloud spans the tropical North Atlantic on 20 June 2020. Over the following week, the plume affected northern South America, the Caribbean Basin, and the southern United States. A second dust outbreak can be seen in this image emerging from the west coast of North Africa.

Image: 
NASA Worldview

MIAMI--A new study by researchers at the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science and ATMO Guyane quantified the amount of Saharan dust reaching the Amazon to better understand how dust could impact soil fertility in the region. Intense tropical weathering and local biomass burning have both contributed to nutrient-poor soil in the Amazon Basin.

The research team analyzed 15 years of daily measurements of African dust transported in trade winds and collected at a coastal research station in Cayenne, French Guiana. The results showed that significant quantities of dust reach the heart of the Amazon Basin and are deposited there.

"African dust provides an important source of nutrients to enhance Amazonian soil fertility," said Joseph Prospero, professor emeritus at the UM Rosenstiel School and lead author of the study.

Every year, mineral-rich dust from North Africa's Sahara Desert is lifted into the atmosphere by winds and carried on a 5,000-mile journey across the North Atlantic to the Americas. African dust contains phosphorus and other important plant nutrients that help offset soil losses and increase Amazonian soil fertility.

This study, the first to quantify African dust transport to South America, showed that significant amounts of dust are deposited in the Amazon. The analysis also found that previous studies, which were based on limited measurements of dust, may have greatly overestimated the impact.

The Amazon Basin plays a major role in global climate. Trees and plants in the Amazon remove huge quantities of carbon dioxide from the atmosphere and store the carbon in vegetation. This removal offsets some of the man-made CO2 emitted into the atmosphere and mitigates the impact of CO2 on global climate.

The scientists also found that quantities of dust transported to South America are inversely linked to rainfall in North Africa and concluded that climate change will affect dust transport to South America.

"Changes in dust transport could affect plant growth in the Amazon and the amount of CO2 drawn from the atmosphere. This, in turn, would further impact climate," said Prospero. "Our results highlight the need for long-term monitoring to identify changes that might occur to Africa dust transport from climate change."

Credit: 
University of Miami Rosenstiel School of Marine, Atmospheric, and Earth Science

Sea ice triggered the Little Ice Age, finds a new study

image: INSTAAR Research Associate Martin Miles in a modern subarctic fjord setting.

Image: 
Martin Miles

A new study finds a trigger for the Little Ice Age that cooled Europe from the 1300s through mid-1800s, and supports surprising model results suggesting that under the right conditions sudden climate changes can occur spontaneously, without external forcing.

The study, published in Science Advances, reports a comprehensive reconstruction of sea ice transported from the Arctic Ocean through the Fram Strait, by Greenland, and into the North Atlantic Ocean over the last 1400 years. The reconstruction suggests that the Little Ice Age--which was not a true ice age but a regional cooling centered on Europe--was triggered by an exceptionally large outflow of sea ice from the Arctic Ocean into the North Atlantic in the 1300s.

While previous experiments using numerical climate models showed that increased sea ice was necessary to explain long-lasting climate anomalies like the Little Ice Age, physical evidence was missing. This study digs into the geological record for confirmation of model results.

Researchers pulled together records from marine sediment cores drilled from the ocean floor from the Arctic Ocean to the North Atlantic to get a detailed look at sea ice throughout the region over the last 1400 years.

"We decided to put together different strands of evidence to try to reconstruct spatially and temporally what the sea ice was during the past one and a half thousand years, and then just see what we found," said Martin Miles, an INSTAAR researcher who also holds an appointment with NORCE Norwegian Research Centre and Bjerknes Centre for Climate Research in Norway.

The cores included compounds produced by algae that live in sea ice, the shells of single-celled organisms that live in different water temperatures, and debris that sea ice picks up and transports over long distances. The cores were detailed enough to detect abrupt (decadal scale) changes in sea ice and ocean conditions over time.

The records indicate an abrupt increase in Arctic sea ice exported to the North Atlantic starting around 1300, peaking in midcentury, and ending abruptly in the late 1300s.

"I've always been fascinated by not just looking at sea ice as a passive indicator of climate change, but how it interacts with or could actually lead to changes in the climate system on long timescales," said Miles. "And the perfect example of that could be the Little Ice Age."

"This specific investigation was inspired by an INSTAAR colleague, Giff Miller, as well as by some of the paleoclimate reconstructions of my INSTAAR colleagues Anne Jennings, John Andrews, and Astrid Ogilvie," added Miles. Miller authored the first paper to suggest that sea ice played an essential role in sustaining the Little Ice Age.

Scientists have argued about the causes of the Little Ice Age for decades, with many suggesting that explosive volcanic eruptions must be essential for initiating the cooling period and allowing it to persist over centuries. On the one hand, the new reconstruction provides robust evidence of a massive sea-ice anomaly that could have been triggered by increased explosive volcanism. On the other hand, the same evidence supports an intriguing alternate explanation.

Climate models called "control models" are run to understand how the climate system works through time without being influenced by outside forces like volcanic activity or greenhouse gas emissions. A set of recent control model experiments included results that portrayed sudden cold events that lasted several decades. The model results seemed too extreme to be realistic--so-called Ugly Duckling simulations--and researchers were concerned that they were showing problems with the models.

Miles' study found that there may be nothing wrong with those models at all.

"We actually find that number one, we do have physical, geological evidence that these several decade-long cold sea ice excursions in the same region can, in fact do, occur," he said. In the case of the Little Ice Age, "what we reconstructed in space and time was strikingly similar to the development in an Ugly Duckling model simulation, in which a spontaneous cold event lasted about a century. It involved unusual winds, sea ice export, and a lot more ice east of Greenland, just as we found in here." The provocative results show that external forcing from volcanoes or other causes may not be necessary for large swings in climate to occur. Miles continued, "These results strongly suggest...that these things can occur out of the blue due to internal variability in the climate system."

The marine cores also show a sustained, far-flung pulse of sea ice near the Norse colonies on Greenland coincident with their disappearance in the 15th century. Debate has raged over why the colonies vanished, with scholars agreeing only that a cooling climate strained their resilience. Miles and his colleagues would like to factor in the oceanic changes nearby: very large amounts of sea ice and cold polar waters, year after year for nearly a century.

"This massive belt of ice that comes streaming out of the Arctic--in the past and even today--goes all the way around Cape Farewell to around where these colonies were," Miles said. He would like to look more closely into oceanic conditions along with researchers who study the social sciences in relation to climate.

Credit: 
University of Colorado at Boulder

Comparing virtual and actual pants

image: Actual and virtual pants in the five fabric types

Image: 
Copyright © 2020, Emerald Publishing Limited

In the apparel industry, designing, pattern-making and prototype-making are necessary steps to confirm the desired 3D shape of a garment before manufacturing. However, this is a time-consuming and costly process involving designers, pattern makers and sewers, and the industry hopes to make improvements in this area. 3D garment simulation technology can visually show the design of a garment from its patterns without making a prototype, slimming down the laborious and oftentimes waste-producing design and production process. Many studies are under way to improve 3D garment simulations, but few compare virtual garments with real garments to demonstrate the effectiveness and validity of the simulations. To utilize 3D garment simulation more effectively, it is necessary to clarify the criteria for evaluating the difference between virtual and real garments. This study establishes criteria for the subjective evaluation of similarities and differences between virtual and actual pants, and quantifies those similarities and differences based on geometrical features alongside the subjective evaluation.

The study took place at Shinshu University, which is home to the only Faculty of Textile Science and Technology in Japan. The research group made five pairs of actual pants for a dummy using five fabrics of different mass and thickness. The virtual pants were simulated with a 3D simulator. The researchers constructed criteria for comparing virtual and real trousers in terms of the frontal silhouette, hem and waist widths, length, and large wrinkles. The structure of the evaluation was examined through analysis of variance and principal component analysis. The pants were compared geometrically using 3D scanning data. The sensory evaluation was conducted by 20 participants who compared images of the virtual and actual pants in a questionnaire.
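As a rough illustration of the kind of principal component analysis described above (with invented scores, not the study's data), questionnaire ratings of virtual-versus-actual similarity can be reduced to a few interpretable axes like this:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical sensory-evaluation matrix: 20 participants x 6 criteria
# (frontal silhouette, hem width, waist width, length, large wrinkles,
# overall similarity), each rated on a 5-point similarity scale.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(20, 6)).astype(float)

# Standardise each criterion, then extract the leading components.
scores -= scores.mean(axis=0)
scores /= scores.std(axis=0)
pca = PCA(n_components=2).fit(scores)

print("variance explained:", pca.explained_variance_ratio_)
print("criterion loadings on PC1:", pca.components_[0])
```

The component loadings indicate which evaluation criteria vary together, which is the kind of structure the study extracted from its real questionnaire data.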

Corresponding author of the study, Associate Professor KyoungOk Kim states, "It was difficult to set the clothing design reference for performing the simulation that is practical, reflect the physical properties of the fabric, and set up the experiment while considering the restrictions of the simulator. It was also necessary to devise a method to quantitatively evaluate the difference between the 3D simulation and the actual product." The differences between the virtual and actual pants were found to be due to the accuracy and limitation of the simulator.

The study focused on trousers, and the criteria may differ depending on the type of garment. The researchers hope to continue these studies to investigate which criteria are common across items and which are item-specific. The goal of future research is to clarify the requirements for making 3D simulations useful for cutting down on waste in apparel design and production.

Credit: 
Shinshu University

What happens between the sheets?

image: X-ray photoelectron spectroscopy (XPS) measurements at the Australian Synchrotron were able to pinpoint the location of the calcium near to the silicon carbide surface

Image: 
FLEET

Adding calcium to graphene creates an extremely-promising superconductor, but where does the calcium go?

Adding calcium to a composite graphene-substrate structure creates a high transition-temperature (Tc) superconductor.

In a new study, an Australian-led team has for the first time confirmed what actually happens to those calcium atoms: surprising everyone, the calcium goes underneath both the upper graphene sheet and a lower 'buffer' sheet, 'floating' the graphene on a bed of calcium atoms.

Superconducting calcium-injected graphene holds great promise for energy-efficient electronics and transparent electronics.

STUDYING CALCIUM-DOPED GRAPHENE: THROWING OFF THE DUVET

Graphene's properties can be fine-tuned by injection of another material (a process known as 'intercalation') either underneath the graphene, or between two graphene sheets.

This injection of foreign atoms or molecules alters the electronic properties of the graphene by either increasing its conductance, decreasing interactions with the substrate, or both.

Injecting calcium into graphite creates a composite material (calcium-intercalated graphite, CaC6) with a relatively 'high' superconducting transition temperature (Tc). In this case, the calcium atoms ultimately reside between graphene sheets.

Injecting calcium into graphene on a silicon-carbide substrate also creates a high-Tc superconductor, and we always thought we knew where the calcium went in this case too...

Graphene on silicon carbide has two layers of carbon atoms: a graphene layer on top of a 'buffer layer' -- a carbon layer (graphene-like in structure) that forms between the graphene and the silicon-carbide substrate during synthesis and is non-conducting because it is partially bonded to the substrate surface.

"Imagine the silicon carbide is like a mattress with a fitted sheet (the buffer layer bonded to it) and a flat sheet (the graphene)," explains lead author Jimmy Kotsakidis.

Conventional wisdom held that calcium should inject between the two carbon layers (between the two sheets), similar to injection between the graphene layers in graphite. Surprisingly, the Monash University-led team found that when injected, the calcium atoms' final destination instead lies between the buffer layer and the underlying silicon-carbide substrate (between the fitted sheet and the mattress!).

"It was quite a surprise to us when we realised that the calcium was bonding to the silicon surface of the substrate, it really went against what we thought would happen", explains Kotsakidis.

Upon injection, the calcium breaks the bonds between the buffer layer and the substrate surface, causing the buffer layer to 'float' above the substrate and creating a new, quasi-freestanding bilayer graphene structure (Ca-QFSBLG).

This result was unanticipated, with extensive previous studies not considering calcium intercalation underneath the buffer layer. The study thus resolves long-standing confusion and controversy regarding the position of the intercalated calcium.

Credit: 
ARC Centre of Excellence in Future Low-Energy Electronics Technologies

Biomarker predicts who will have severe COVID-19

image: Low glucocorticoid receptor (GR) expression led to excessive inflammation and lung damage by neutrophils through enhancing the expression of CXCL8 and other cytokines.

Image: 
Professor Heung Kyu Lee, KAIST. Created with Biorender.com.

- Airway cell analyses showing an activated immune axis could pinpoint the COVID-19 patients who will most benefit from targeted therapies. -

KAIST researchers have identified key markers that could help pinpoint patients who are bound to get a severe reaction to COVID-19 infection. This would help doctors provide the right treatments at the right time, potentially saving lives. The findings were published in the journal Frontiers in Immunology on August 28.

People's immune systems react differently to infection with SARS-CoV-2, the virus that causes COVID-19, ranging from mild to severe, life-threatening responses.

To understand the differences in responses, Professor Heung Kyu Lee and PhD candidate Jang Hyun Park from the Graduate School of Medical Science and Engineering at KAIST analysed ribonucleic acid (RNA) sequencing data extracted from individual airway cells of healthy controls and of mildly and severely ill patients with COVID-19. The data was available in a public database previously published by a group of Chinese researchers.

"Our analyses identified an association between immune cells called neutrophils and special cell receptors that bind to the steroid hormone glucocorticoid," Professor Lee explained. "This finding could be used as a biomarker for predicting disease severity in patients and thus selecting a targeted therapy that can help treat them at an appropriate time," he added.

Severe illness in COVID-19 is associated with an exaggerated immune response that leads to excessive airway-damaging inflammation. This condition, known as acute respiratory distress syndrome (ARDS), accounts for 70% of deaths in fatal COVID-19 infections.

Scientists already know that this excessive inflammation involves heightened neutrophil recruitment to the airways, but the detailed mechanisms of this reaction are still unclear.

Lee and Park's analyses found that a group of immune cells called myeloid cells produced excess amounts of neutrophil-recruiting chemicals in severely ill patients, including a cytokine called tumour necrosis factor (TNF) and a chemokine called CXCL8.

Further RNA analyses of neutrophils in severely ill patients showed they were less able to recruit very important T cells needed for attacking the virus. At the same time, the neutrophils produced too many extracellular molecules that normally trap pathogens, but damage airway cells when produced in excess.

The researchers additionally found that the airway cells in severely ill patients were not expressing enough glucocorticoid receptors. This was correlated with increased CXCL8 expression and neutrophil recruitment.
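A minimal sketch of the kind of correlation behind that statement, using illustrative numbers only (NR3C1 is the gene encoding the glucocorticoid receptor; the values below are simulated, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-patient averages of airway-cell expression
# (e.g. mean log-normalised counts across epithelial cells).
rng = np.random.default_rng(1)
nr3c1 = rng.normal(2.0, 0.5, size=30)               # GR expression
cxcl8 = 5.0 - 1.2 * nr3c1 + rng.normal(0, 0.4, 30)  # inversely related

rho, p = spearmanr(nr3c1, cxcl8)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
# A strongly negative rho would mirror the reported pattern: lower
# glucocorticoid-receptor expression accompanying higher CXCL8.
```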

Glucocorticoids, like the well-known drug dexamethasone, are anti-inflammatory agents that could play a role in treating COVID-19. However, using them in early or mild stages of the infection could suppress the immune reactions needed to combat the virus, while giving them after airway damage has already occurred in the most severe cases would be ineffective.

Knowing who to give this treatment to and when is really important. COVID-19 patients showing reduced glucocorticoid receptor expression, increased CXCL8 expression, and excess neutrophil recruitment to the airways could benefit from treatment with glucocorticoids to prevent airway damage. Further research is needed, however, to confirm the relationship between glucocorticoids and neutrophil inflammation at the protein level.

"Our study could serve as a springboard towards more accurate and reliable COVID-19 treatments," Professor Lee said.

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Unknown details identified in the Lions' Courtyard at the Alhambra

image: A novel methodology was followed based on three complementary graphic analyses: first, outstanding images from the seventeenth to the twentieth centuries were reviewed; then new computer drawings were made of their muqarnas, following the theoretical principles of their geometrical grouping; and finally, a three-dimensional scan was made to ascertain their precise current state from the point cloud obtained.

Image: 
Universidad de Sevilla

Through drawings, researchers from the University of Seville, the École Polytechnique Fédérale de Lausanne (Switzerland) and the University of Granada have identified details hitherto unknown in the muqarnas of the temples of the Lions' Courtyard at the Alhambra in Granada, a UNESCO World Heritage Site.

In order to better understand and facilitate the conservation of these fourteenth-century architectural elements, following a review of numerous repairs performed over the intervening centuries, a novel methodology was followed based on three complementary graphic analyses: first, outstanding images from the seventeenth to the twentieth centuries were reviewed; then new computer drawings were made of their muqarnas, following the theoretical principles of their geometrical grouping; and finally, a three-dimensional scan was made to ascertain their precise current state from the point cloud obtained.
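To make the third step concrete, here is a minimal sketch of how a scanned point cloud might be compared against an idealised geometric model of the same muqarnas surface. The file names, the point-sampled model, and the nearest-neighbour deviation metric are assumptions for illustration, not the authors' workflow:

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative only: compare a laser-scan point cloud of a muqarnas vault
# against points sampled from the theoretical geometric drawing.
scan = np.loadtxt("muqarnas_scan.xyz")    # N x 3 scanned points (assumed file)
model = np.loadtxt("muqarnas_model.xyz")  # M x 3 points from the ideal model

# For every scanned point, the distance to the nearest model point gives a
# map of deformations accumulated through construction and later repairs.
tree = cKDTree(model)
dist, _ = tree.query(scan)

print("mean deviation (same units as input):", dist.mean())
print("95th-percentile deviation:", np.percentile(dist, 95))
```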

The comparison of drawings allowed the researchers to verify for the first time that the muqarnas of the two temples have different configurations and different numbers of pieces. In addition, geometric deformations from the original Nasrid design have been detected, identifying hitherto unknown pieces, as well as other deformations resulting from the various repairs carried out after the major threats that the temples and their muqarnas have survived over the centuries, despite their fragile construction.

"For the first time, this article documents and analyses details that were hitherto practically absent from the scientific literature", says Antonio Gámiz, professor at the University of Seville and co-author of this work.

The muqarnas are among the most distinctive architectural features of the Nasrid Alhambra and of medieval Islamic art because of their sophisticated three-dimensional geometrical construction. They are small prisms that are grouped together to create a great diversity of spatial configurations, adapting their composition to very diverse architectural situations in cornices, arches, capitals and vaults. They reached a zenith of virtuosity during the reign of Muhammad V (1354-1359 and 1362-1391), when crucial works were undertaken in the palaces of the Alhambra.

This research was supported by the Patronato de la Alhambra and Generalife.

Credit: 
University of Seville

Humans develop more slowly than mice because our chemistry is different

Scientists from the RIKEN Center for Biosystems Dynamics Research, European Molecular Biology Laboratory (EMBL) Barcelona, Universitat Pompeu Fabra, and Kyoto University have found that the "segmentation clock"--a genetic network that governs the body pattern formation of embryos--progresses more slowly in humans than in mice because the biochemical reactions are slower in human cells. The differences in the speeds of biochemical reactions may underlie differences between species in the tempo of development.

In the early phase of vertebrate development, the embryo forms a series of "segments" that eventually differentiate into different types of tissues, such as muscles or the ribs. This process is governed by an oscillating biochemical process, known as the segmentation clock, whose period varies between species: it is about two hours in mice and about five hours in humans. Why the length of this cycle varies between species has remained a mystery, however.

To solve this mystery, the group began experiments using mouse embryonic stem cells and human induced pluripotent stem (iPS) cells, which they transformed into presomitic mesoderm (PSM) cells, the cells that take part in the segmentation clock.

They began by examining whether something different was happening in the network of cells or whether there was a difference in the process within cells. They found, using experiments that either blocked important signals or put cells in isolation, that the latter is true.

With the understanding that processes within cells were key, they suspected that the difference might be within the master gene--HES7--which controls the process by repressing its own promoter. They performed a number of complex experiments in which they swapped the human and mouse versions of the gene, but this did not change the cycle.

According to corresponding author Miki Ebisuya, who performed the work both at RIKEN BDR and EMBL Barcelona, "Failing to show a difference in the genes left us with the possibility that the difference was driven by different biochemical reactions within the cells." The team examined factors such as the degradation rate of the HES7 protein, an important parameter in the cycle, measuring how quickly the mouse and human proteins were degraded. Confirming the hypothesis, both proteins were degraded more slowly in human cells than in mouse cells. There were also differences in the time it took to transcribe and translate HES7 into protein, and in the time it took for HES7 introns to be spliced. "We could thus show," says Ebisuya, "that it was indeed the cellular environment in human and mouse cells that is the key to the differential biochemical reaction speeds and thus differential time scales."

She continues, "Through this we have come up with a concept that we call developmental allochrony, and the present study will help us to understand the complicated process through which vertebrates develop. One of the key remaining mysteries is exactly what is different between the human and mouse cells that drives the difference in reaction times, and we plan to do further studies to shed light on this."
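The tempo argument can be illustrated with a toy delayed negative-feedback oscillator of the kind often used to model HES genes. This is a minimal sketch with arbitrary parameters, not the authors' model: uniformly slowing the degradation rates and the delays stretches the oscillation period by the same factor.

```python
import numpy as np

def simulate(scale=1.0, t_max=3000.0, dt=0.1):
    """Euler integration of a delayed HES7-like negative-feedback loop.
    scale > 1 slows degradation rates and delays uniformly, a crude
    stand-in for the slower biochemistry reported in human cells."""
    a, k = 4.5, 33.0            # translation rate, max transcription rate
    b = 0.23 / scale            # protein degradation rate
    c = 0.23 / scale            # mRNA degradation rate
    p0 = 40.0                   # repression threshold
    tau_m = 12.0 * scale        # transcription/splicing delay
    tau_p = 2.0 * scale         # translation delay

    n = int(t_max / dt)
    m = np.zeros(n); p = np.zeros(n)
    dm = int(tau_m / dt); dp = int(tau_p / dt)
    for i in range(1, n):
        p_del = p[i - dm] if i >= dm else 0.0   # delayed protein represses mRNA
        m_del = m[i - dp] if i >= dp else 0.0   # delayed mRNA makes protein
        m[i] = m[i-1] + dt * (k / (1.0 + (p_del / p0) ** 2) - c * m[i-1])
        p[i] = p[i-1] + dt * (a * m_del - b * p[i-1])
    return p

def period(p, dt=0.1, discard=10000):
    """Estimate the period from the spacing of local maxima after a transient."""
    x = p[discard:]
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0]
    return np.mean(np.diff(peaks)) * dt if len(peaks) > 1 else np.nan

print("mouse-like period :", period(simulate(scale=1.0)))
print("human-like period :", period(simulate(scale=2.5)))
```

Because rescaling all rates by 1/s and all delays by s simply rescales time by s, the simulated period grows in proportion, which is the qualitative point the study makes about slower human biochemistry.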

Credit: 
RIKEN

Ocean acidification puts deep-sea coral reefs at risk of collapse

image: Lophelia pertusa coral in corrosive waters off the Southern California Bight. Live coral is growing on bare, exposed rock with no dead coral framework.

Image: 
National Oceanic and Atmospheric Administration

Deep-sea coral reefs face challenges as changes to ocean chemistry triggered by climate change may cause their foundations to become brittle, a study suggests.

The underlying structures of the reefs - which are home to a multitude of aquatic life - could fracture as a result of increasing ocean acidity caused by rising levels of carbon dioxide.

Hundreds of metres below the surface of the ocean in Southern California, researchers measured the lowest - therefore the most acidic - pH level ever recorded on living coral reefs. The corals were then raised in the lab for one year under the same conditions.

Scientists observed that the skeletons of dead corals, which support and hold up living corals, had become porous due to ocean acidification and had rapidly become too fragile to bear the weight of the reef above them.

Previous research has shown that ocean acidification can impact coral growth, but the new study demonstrates that porosity in corals - known as coralporosis - leads to weakening of their structure at critical locations.

This causes early breakage and crumbling, experts say, which may cause whole coral ecosystems to shrink dramatically in the future, leaving them able to support only a small fraction of the marine life they are home to today.

The findings complement recent evidence of porosity in tropical corals, but demonstrate that the threat posed by ocean acidification is far greater for deep-sea coral reefs.

Research was led by University of Edinburgh scientists, under the EU-funded ATLAS and iAtlantic projects, with researchers from Heriot-Watt University and the National Oceanic and Atmospheric Administration (NOAA).

The team identified how reefs could become fractured by analysing corals from the longest-running laboratory studies to date, and by diving with submersibles off US Pacific shores to observe how coral habitat is lost as the water becomes more acidic.

Dr Sebastian Hennige, of the University of Edinburgh's School of GeoSciences, said: "This study highlights that a major threat to these wonderful deep-sea ecosystems is structural weakening caused by ocean acidification, driven by the increasing amounts of carbon dioxide we produce. While deep-sea reefs exist out of sight they are certainly not out of mind, and our work highlights how scientists from different disciplines and countries can join together to tackle global challenges."

The corals in Southern California - one of the most acidified reefs studied to date - are already experiencing the effects of climate change and exist in conditions that most deep-sea reefs are expected to encounter by the end of the century, scientists say.

Dr. Peter Etnoyer, of NOAA's National Centers for Coastal Ocean Science, said: "Deep-sea corals growing off Southern California are a window into the future ocean. The region is a natural laboratory to study the effects of ocean acidification."

Submersibles were launched from NOAA ships off Southern California, and were guided by Dr. Peter Etnoyer and graduate student Leslie Wickes.

The US team sampled live corals and returned them to the laboratory for experiments. The UK team applied engineering principles to demonstrate the rapid weakening of the skeletons and discovered a striking similarity to the weakening observed in human bones from osteoporosis.

The team says that the link between osteoporosis and coralporosis opens up a range of methods and concepts that can be adapted in the challenge of monitoring and predicting the fate of such fragile deep-sea ecosystems and the life they support.

Dr. Uwe Wolfram, of Heriot-Watt University, said: "By being able to adapt strategies to coral reefs that are used routinely to monitor osteoporosis and assess bone fracture risk, we may have powerful non-invasive tools at our disposal to monitor these fragile ecosystems."

Tools developed as part of the project will aid understanding of when ocean ecosystems will change and how it will affect marine life.

This will better equip society to deal with how these vulnerable ecosystems can be protected in the future, and will support the UN Decade of Ocean Science - which starts in 2021 - to deliver the science we need, for the ocean we want, the team says.

Professor J Murray Roberts, of the University of Edinburgh's School of GeoSciences, who leads the ATLAS and iAtlantic programmes, said: "Cold-water corals are truly the cities of the deep-sea providing homes to countless other animals. If we lose the corals the city crumbles. This project is a great example of how we can work across the Atlantic and Pacific Oceans to understand the impacts of rapidly changing ocean conditions."

Credit: 
University of Edinburgh

Detaching and uplifting, not bulldozing

image: Central Alps of Switzerland have been lifted to today's height.

Image: 
ETH Zurich

For a long time, geoscientists have assumed that the Alps were formed when the Adriatic plate from the south collided with the Eurasian plate in the north. According to the textbooks, the Adriatic plate behaved like a bulldozer, thrusting rock material up in front of it into piles that formed the mountains. Supposedly, their weight subsequently pushed the underlying continental plate downwards, resulting in the formation of a sedimentary basin in the north adjacent to the mountains - the Swiss Molasse Plateau. Over time, while the mountains grew higher, the basin floor sank deeper and deeper along with the rest of the plate.

A few years ago, however, new geophysical and geological data led ETH geophysicist Edi Kissling and Fritz Schlunegger, a sediment specialist from the University of Bern, to express doubts about this theory. In light of the new information, the researchers postulated an alternative mechanism for the formation of the Alps.

Altitude of the Alps has barely changed

Kissling and Schlunegger pointed out that the topography and altitude of the Alps have barely changed over the past 30 million years, and yet the trench at the site of the Swiss Plateau has continued to sink and the basin has extended further north. This leads the researchers to believe that the formation of the Central Alps and the sinking of the trench are not connected as previously assumed.

They argue that if the Alps and the trench indeed had formed from the impact of two plates pressing together, there would be clear indications that the Alps were steadily growing. That's because, based on the earlier understanding of how the Alps formed, the collision of the plates, the formation of the trench and the height of the mountain range are all linked.

Furthermore, seismicity observed during the past 40 years within the Swiss Alps and their northern foreland clearly documents extension across the mountain ranges rather than the compression expected for the bulldozing Adria model.

The behaviour of the Eurasian plate provides a possible new explanation. Starting about 60 million years ago, the former oceanic part of the Eurasian plate began sinking beneath the continental Adriatic microplate to the south. By about 30 million years ago, this subduction was so far advanced that all the oceanic lithosphere had been consumed and the continental part of the Eurasian plate entered the subduction zone.

This marked the beginning of the so-called continent-continent collision with the Adriatic microplate, in which the European upper, lighter crust separated from the heavier, underlying lithospheric mantle. Because it weighs less, the Earth's crust surged upwards, literally creating the Alps for the first time around 30 million years ago. As this happened, the lithospheric mantle sank further into the Earth's mantle, pulling the adjacent part of the plate downwards.

This theory is plausible because the Alps are mainly made up of gneiss and granite and their sedimentary cover rocks like limestone. These crustal rocks are significantly lighter than the Earth's mantle - into which the lower layer of the plate, the lithospheric mantle, plunges after the detachment of the two layers that form the continental plate. "In turn, this creates strong upward forces that lift the Alps out of the ground," Kissling explains. "It was these upward forces that caused the Alps to form, not the bulldozer effect as a result of two continental plates colliding," he says.

New model confirms lift hypothesis

To investigate the lift hypothesis, Luca Dal Zilio, former doctoral student in ETH geophysics professor Taras Gerya's group, has now teamed up with Kissling and other ETH researchers to develop a new model. Dal Zilio simulated the subduction zone under the Alps: the plate tectonic processes, which took place over millions of years, and the associated earthquakes.

"The big challenge with this model was bridging the time scales. It takes into account lightning-fast shifts that manifest themselves in the form of earthquakes, as well as deformations of the crust and lithospheric mantle over thousands of years," says Dal Zilio, lead author of the study recently published in the journal Geophysical Research Letters.

According to Kissling, the model is an excellent way to simulate the uplifting processes that he and his colleague are postulating. "Our model is dynamic, which gives it a huge advantage," he says, explaining that previous models took a rather rigid or mechanical approach that did not take into account changes in plate behaviour. "All of our previous observations agree with this model," he says.

The model is based on physical laws. For instance, the Eurasian plate would appear to subduct southwards. In contrast to the normal model of subduction, however, it doesn't actually move in this direction because the position of the continent remains stable. This forces the subducting lithosphere to retreat northwards, causing the Eurasian plate to exert a suction effect on the relatively small Adriatic plate.

Kissling likens the action to a sinking ship. The resulting suction effect is very strong, he explains. Strong enough to draw in the smaller Adriatic microplate so that it collides with the crust of the Eurasian plate. "So, the mechanism that sets the plates in motion is not in fact a pushing effect but a pulling one," he says, concluding that the driving force behind it is simply the pull of gravity on the subducting plate.

Rethinking seismicity

In addition, the model simulates the occurrence of earthquakes, or seismicity, in the Central Alps, the Swiss Plateau and below the Po Valley. "Our model is the first earthquake simulator for the Swiss Central Alps," says Dal Zilio. The advantage of this earthquake simulator is that it covers a very long period of time, meaning that it can also simulate very strong earthquakes that occur extremely rarely.

"Current seismic models are based on statistics," Dal Zilio says, "whereas our model uses geophysical laws and therefore also takes into account earthquakes that occur only once every few hundreds of years." Current earthquake statistics tend to underestimate such earthquakes. The new simulations therefore improve the assessment of earthquake risk in Switzerland.

Credit: 
ETH Zurich

Droughts in the Amazon rainforest can be predicted up to 18 months in advance

Droughts impact millions of people and threaten the delicate ecosystems of the Amazon rainforest in South America. Now a study within the TiPES project by Catrin Ciemer of the Potsdam Institute for Climate Impact Research (PIK), Germany, and colleagues in Environmental Research Letters has revealed how surface temperatures in two coupled areas of the tropical Atlantic Ocean can be used to accurately predict these severe climate events. Early warnings of upcoming droughts are imperative for mitigating the impact on millions of people depending on the rainforest ecosystem. TiPES is an EU H2020 project, coordinated by the University of Copenhagen.

"The fact that changes in sea surface temperatures in the Pacific and the Atlantic oceans impact rainfall patterns in the tropical parts of South America is long known. What we did was use a new complex network approach which made it possible to discriminate between areas in the oceans which have negative or positive effects on rainfall in tropical South America, explains Dr. Niklas Boers, Potsdam Institute for Climate Impact Research (PIK), who led the study.

The analysis revealed that when two areas in the Atlantic Ocean, situated north and south of each other, start to go out of phase - that is, when temperatures are rising in one but decreasing in the other - the Amazon is likely to experience a severe drought within 1-1.5 years.

What happens is that sea surface temperature anomalies in the two relevant Atlantic areas shift the so-called trade winds north or south. This changes the overall moisture budget for the Amazon and can cause severe droughts in the rainforest.

"For the first time, we can accurately predict drought in the tropical regions of South America as far as 18 months in advance. The two crucial factors in this research are the selection of the precise, relevant locations in the Atlantic Ocean, and the observation that the correlation between the southern and the northern ocean regions can be used for prediction," says Niklas Boers.

Millions of people depend on the Amazon rainforest. There are numerous indigenous tribes whose way of life is already threatened by land removal. Additionally, farmers with cattle and fisheries throughout the Amazon depend on precipitation and on the levels of lakes and rivers for transport. If informed beforehand, the population can plan and act in a sustainable way.

Unfortunately, though, there is no way to mitigate damage to the forest itself. And droughts are becoming an increasing problem. During two events termed "droughts of a century" in 2005 and 2010, the Amazon rainforest changed temporarily from being a carbon sink to becoming a carbon source, thus contributing to the rising levels of CO2 in the atmosphere. In 2015/16 there was yet another severe drought in the region.

Models predict the Amazon will eventually reach a tipping point due to anthropogenic climate change and deforestation after which it will permanently change into a savanna.

"It is really the combined effect of the two which is the problem. I am not expecting the Amazon to be there more or less like today at the end of my life," says Niklas Boers.

This work is part of the TiPES project, an EU Horizon2020 funded climate science project investigating tipping points in the Earth system. TiPES is coordinated from the University of Copenhagen, Denmark.

Credit: 
University of Copenhagen

Metformin for type 2 diabetes patients or not? Researchers now have the answer

Metformin is the first-line drug that can lower blood sugar levels in type 2 diabetes patients. One third of patients do not respond to metformin treatment and 5 per cent experience serious side effects, which is the reason many choose to stop medicating. Researchers at Lund University in Sweden have now identified biomarkers that can show in advance how the patient will respond to metformin treatment via a simple blood test.

"Our study constitutes an important step towards the goal of personalised care for diabetes patients because it can contribute to ensuring that the right person receives the right care as soon as there is a diagnosis", says Charlotte Ling, professor of epigenetics at Lund University, who led the study.

When diet and exercise are not enough to regulate blood sugar, metformin is the first drug introduced to treat type 2 diabetes, according to international guidelines. If it does not have the intended effect in the form of lowered blood sugar levels, or if the patient experiences serious side effects, patients then go on to trial other drugs.

"If it takes a long time for the patient to receive the correct treatment, there is a risk of complications due to the elevated blood sugar levels. Approximately 30 per cent of all patients with type 2 diabetes do not respond to metformin and should be given another drug right from the start. For this reason, it is important to be able to identify these patients upon diagnosis", says Charlotte Ling.

One third of patients experience side effects usually in the form of gastrointestinal difficulties such as nausea, stomach pain and diarrhoea. Five per cent stop taking the medicine due to severe side effects.

The study is the first pharmacoepigenetic study in diabetes, i.e. the first in which researchers have studied how epigenetic factors, such as DNA methylation, can be used as biomarkers to predict the effect of a drug.

"To a certain extent, pharmacoepigenetics has been used within cancer care to predict how a person will respond to a treatment, however, it has never been done in diabetes care before", says Charlotte Ling.

Researchers in the study have looked at epigenetic modifications, so-called DNA methylations, in blood from individuals diagnosed with diabetes before they started taking metformin. In a follow-up a year later, the researchers could see which patients had benefited from the treatment (with resulting lowered blood sugar levels) and whether or not they had suffered from side effects.

"By compiling the responses, we have found biomarkers that can identify already at diagnosis of diabetes which patients will benefit from and tolerate metformin, which will advance personalised therapy in type 2 diabetes", says the study's first author Sonia García-Calzón.

The study was conducted on 363 participants from three different patient cohorts (All New Diabetics in Skåne, All New Diabetics in Uppsala and Optimed from Latvia). As a next step, the researchers are planning for a new clinical study in which they will repeat the study with a larger patient group - 1000 patients will be invited to participate from all around the world.

Credit: 
Lund University

PPIs may affect responses to atezolizumab in patients with urothelial cancer

Bottom Line: Proton pump inhibitor (PPI) use was associated with worse outcomes in patients with urothelial cancer treated with the immunotherapeutic atezolizumab (Tecentriq), compared with patients who did not use PPIs.

Journal in Which the Study was Published: Clinical Cancer Research, a journal of the American Association for Cancer Research

Author: Ashley Hopkins, PhD, an early-career research fellow at Flinders University in Australia

Background: PPIs are commonly used medications for acid reflux, heartburn, and ulcers. Recent evidence indicates that PPIs cause significant changes to the gut microbiome, which plays an important role in regulating immune function, explained Hopkins. "There is growing concern that an altered gut microbiome could negatively impact the efficacy of immune checkpoint inhibitors," he said. "Given that approximately 30 percent of cancer patients use PPIs, often for extended time periods, there is an urgent need to determine if PPIs influence the efficacy of immune checkpoint inhibitors." Five immune checkpoint inhibitors are currently approved in the United States for the treatment of urothelial cancer.

How the Study was Conducted: In this study, Hopkins and colleagues evaluated how PPI use impacted survival outcomes in patients with urothelial cancer (commonly referred to as bladder cancer) who were treated with the immune checkpoint inhibitor atezolizumab or with chemotherapy. The researchers examined data from the IMvigor210 and IMvigor211 clinical trials, which evaluated atezolizumab in patients with locally advanced or metastatic urothelial cancer. IMvigor210 was a single-arm study evaluating atezolizumab in previously treated or treatment-naïve patients, while IMvigor211 was a randomized control trial that compared atezolizumab with chemotherapy in previously treated patients.

Results: Of the 429 patients enrolled in IMvigor210 who received atezolizumab treatment, 33 percent had used PPIs within the 30 days before or the 30 days after initiating atezolizumab. In IMvigor211, 31 percent of the 467 patients treated with atezolizumab and 40 percent of the 185 patients treated with chemotherapy had used PPIs within the 60-day window.

Hopkins and colleagues found that among patients treated with atezolizumab, those who used PPIs had a 68 percent greater risk of death, a 47 percent greater risk of disease progression, and a 54 percent lower objective response rate than those who did not use PPIs. PPI use was associated with worse outcomes even after adjusting for several patient and tumor characteristics. In contrast, PPI use did not significantly impact overall survival, progression-free survival, or the objective response rate for patients treated with chemotherapy.
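To illustrate what an adjusted survival comparison of this kind involves, here is a minimal sketch using simulated data and the lifelines library. The covariates, effect sizes and cohort are invented; this is not the IMvigor analysis:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated cohort: survival times shortened for PPI users, with age and
# ECOG performance status as the kind of covariates one adjusts for.
rng = np.random.default_rng(4)
n = 400
ppi = rng.integers(0, 2, n)
age = rng.normal(68, 8, n)
ecog = rng.integers(0, 3, n)
hazard = 0.02 * np.exp(0.5 * ppi + 0.02 * (age - 68) + 0.3 * ecog)
time = rng.exponential(1 / hazard)
event = (time < 24).astype(int)          # administrative censoring at 24 months
time = np.minimum(time, 24)

df = pd.DataFrame({"time": time, "event": event,
                   "ppi": ppi, "age": age, "ecog": ecog})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])   # exp(coef) for "ppi" ~ adjusted hazard ratio
```

An exp(coef) of about 1.7 for the PPI indicator in such a model would correspond to the "68 percent greater risk of death" phrasing used above.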

Among PPI non-users, those treated with atezolizumab had a 31 percent lower risk of death than patients treated with chemotherapy. However, among PPI users, there were no significant differences in survival outcomes between patients treated with atezolizumab and chemotherapy, suggesting that PPI use impacted the magnitude of atezolizumab benefit.

Author's Comments: "PPIs are overused, or inappropriately used, in patients with cancer by up to 50 percent, seemingly from a perspective that they will cause no harm," said Hopkins. "The findings from this study suggest that non-critical PPI use needs to be approached very cautiously, particularly when an immune checkpoint inhibitor is being used to treat urothelial cancer."

Study Limitations: A limitation of the study is that it was a retrospective analysis that evaluated a single immune checkpoint inhibitor in one cancer type. Hopkins suggested that future research should evaluate the impact of PPI use on other immune checkpoint inhibitors, additional cancer types, and different combinations of immune checkpoint inhibitors or chemotherapy regimens.

Credit: 
American Association for Cancer Research

Could breadfruit be the next superfood? UBC researchers say yes

image: UBC Okanagan researchers say breadfruit is nutritionally sound and has the potential to improve worldwide food security issues.

Image: 
Jan Vozenilek, Copper Sky Productions, Kelowna.

A fruit used for centuries in countries around the world is getting the nutritional thumbs-up from a team of British Columbia researchers.

Breadfruit, which grows in abundance in tropical and South Pacific countries, has long been a staple in the diet of many people. The fruit can be eaten when ripe, or it can be dried and ground up into a flour and repurposed into many types of meals, explains UBC Okanagan researcher Susan Murch.

"Breadfruit is a traditional staple crop from the Pacific islands with the potential to improve worldwide food security and mitigate diabetes," says Murch, a chemistry professor in the newly-created Irving K. Barber Faculty of Science. "While people have survived on it for thousands of years there was a lack of basic scientific knowledge of the health impacts of a breadfruit-based diet in both humans and animals."

Breadfruit can be harvested, dried and ground into a gluten-free flour. For the project, researchers had four breadfruits from the same tree in Hawaii shipped to the Murch Lab at UBC Okanagan. Doctoral student Ying Liu led the study examining the digestion and health impact of a breadfruit-based diet.

"Detailed and systematic studies of the health impacts of a breadfruit diet had not previously been conducted and we wanted to contribute to the development of breadfruit as a sustainable, environmentally-friendly and high-production crop," Liu says.

The few previous studies of breadfruit have examined its glycemic index; with a low glycemic index, breadfruit is comparable to many common staples such as wheat, cassava, yam and potatoes.

"The objective of our current study was to determine whether a diet containing breadfruit flour poses any serious health concerns," explains Liu, who conducted her research with colleagues from British Columbia Institute of Technology's Natural Health and Food Products Research Group and the Breadfruit Institute of the National Tropical Botanic Garden in Hawaii.

The researchers designed a series of studies--using flour ground from dehydrated breadfruits--that could provide data on the impacts of a breadfruit-based diet fed to mice and also an enzyme digestion model.

The researchers determined that breadfruit protein was found to be easier to digest than wheat protein in the enzyme digestion model. And mice fed the breadfruit diet had a significantly higher growth rate and body weight than standard diet-fed mice.

Liu also noted mice on the breadfruit diet had a significantly higher daily water consumption compared to mice on the wheat diet. And at the end of the three-week trial, the body composition was similar between the breadfruit and wheat diet-fed mice.

"As the first complete, fully-designed breadfruit diet study, our data showed that a breadfruit diet does not impose any toxic impact," says Liu. "Fundamental understanding of the health impact of breadfruit digestion and diets is necessary and imperative to the establishment of breadfruit as a staple or as a functional food in the future."

Breadfruit is nutritious and sustainable and could make inroads in food sustainability for many populations globally, she adds. For example, the average daily consumption of grain in the United States is 189 grams (6.67 ounces). Liu suggests that if a person ate the same amount of cooked breadfruit, they could meet nearly 57 per cent of their daily fibre requirement and more than 34 per cent of their protein requirement, while also consuming vitamin C, potassium, iron, calcium and phosphorus.

"Overall, these studies support the use of breadfruit as part of a healthy, nutritionally balanced diet," says Liu. "Flour produced from breadfruit is a gluten-free, low glycemic index, nutrient-dense and complete protein option for modern foods."

Credit: 
University of British Columbia Okanagan campus

New calculation refines comparison of matter with antimatter

image: A new calculation performed using the world's fastest supercomputers allows scientists to more accurately predict the likelihood of two kaon decay pathways, and compare those predictions with experimental measurements. The comparison tests for tiny differences between matter and antimatter that could, with even more computing power and other refinements, point to physics phenomena not explained by the Standard Model.

Image: 
Brookhaven National Laboratory

UPTON, NY--An international collaboration of theoretical physicists--including scientists from the U.S. Department of Energy's (DOE) Brookhaven National Laboratory (BNL) and the RIKEN-BNL Research Center (RBRC)--has published a new calculation relevant to the search for an explanation of the predominance of matter over antimatter in our universe. The collaboration, known as RBC-UKQCD, also includes scientists from CERN (the European particle physics laboratory), Columbia University, the University of Connecticut, the University of Edinburgh, the Massachusetts Institute of Technology, the University of Regensburg, and the University of Southampton. They describe their result in a paper to be published in the journal Physical Review D, where it has been highlighted as an "editor's suggestion."

Scientists first observed a slight difference in the behavior of matter and antimatter--known as a violation of "CP symmetry"--while studying the decays of subatomic particles called kaons in a Nobel Prize-winning experiment at Brookhaven Lab in 1963 [see: https://www.bnl.gov/bnlweb/history/nobel/1980.php]. While the Standard Model of particle physics was pieced together soon after that, understanding whether the observed CP violation in kaon decays agreed with the Standard Model has proved elusive due to the complexity of the required calculations.

The new calculation gives a more accurate prediction for the likelihood with which kaons decay into a pair of electrically charged pions vs. a pair of neutral pions. Understanding these decays and comparing the prediction with more recent state-of-the-art experimental measurements made at CERN and DOE's Fermi National Accelerator Laboratory gives scientists a way to test for tiny differences between matter and antimatter, and search for effects that cannot be explained by the Standard Model.
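In the standard notation (a schematic, textbook-level form rather than the collaboration's full expression), the two isospin amplitudes of the K → ππ decay control the direct CP-violation parameter roughly as follows:

```latex
% Schematic form of direct CP violation in K -> pi pi, written in terms of
% the isospin-0 and isospin-2 decay amplitudes A_0 and A_2 (their strong
% phases delta_0, delta_2 come from pi-pi scattering):
\mathrm{Re}\!\left(\frac{\varepsilon'}{\varepsilon}\right)
  \;\propto\;
  \frac{\mathrm{Re}\,A_2}{\mathrm{Re}\,A_0}
  \left(
    \frac{\mathrm{Im}\,A_2}{\mathrm{Re}\,A_2}
    - \frac{\mathrm{Im}\,A_0}{\mathrm{Re}\,A_0}
  \right)
% Lattice QCD supplies the CP-violating imaginary parts of A_0 and A_2;
% the real parts are tightly constrained by experiment.
```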

The new calculation represents a significant improvement over the group's previous result, published in Physical Review Letters in 2015. Based on the Standard Model, it gives a range of values for what is called "direct CP symmetry violation" in kaon decays that is consistent with the experimentally measured results. That means the observed CP violation is now, to the best of our knowledge, explained by the Standard Model. The uncertainty in the prediction still needs to be reduced, however, because a sharper comparison offers an opportunity to reveal any sources of matter/antimatter asymmetry lying beyond the current theory's description of our world.

"An even more accurate theoretical calculation of the Standard Model may yet lie outside of the experimentally measured range. It is therefore of great importance that we continue our progress, and refine our calculations, so that we can provide an even stronger test of our fundamental understanding," said Brookhaven Lab theorist Amarjit Soni.

Matter/antimatter imbalance

"The need for a difference between matter and antimatter is built into the modern theory of the cosmos," said Norman Christ of Columbia University. "Our current understanding is that the present universe was created with nearly equal amounts of matter and antimatter. Except for the tiny effects being studied here, matter and antimatter should be identical in every way, beyond conventional choices such as assigning negative charge to one particle and positive charge to its anti-particle. Some difference in how these two types of particles operate must have tipped the balance to favor matter over antimatter," he said.

"Any differences in matter and antimatter that have been observed to date are far too weak to explain the predominance of matter found in our current universe," he continued. "Finding a significant discrepancy between an experimental observation and predictions based on the Standard Model would potentially point the way to new mechanisms of particle interactions that lie beyond our current understanding--and which we hope to find to help to explain this imbalance."

Modeling quark interactions

All of the experiments that show a difference between matter and antimatter involve particles made of quarks, the subatomic building blocks that bind through the strong force to form protons, neutrons, and atomic nuclei--and also less-familiar particles like kaons and pions.

"Each kaon and pion is made of a quark and an antiquark, surrounded by a cloud of virtual quark-antiquark pairs, and bound together by force carriers called gluons," explained Christopher Kelly, of Brookhaven National Laboratory.

The Standard Model-based calculations of how these particles behave must therefore include all the possible interactions of the quarks and gluons, as described by the modern theory of strong interactions, known as quantum chromodynamics (QCD).

In addition, these bound particles move at close to the speed of light. That means the calculations must also include the principles of relativity and quantum theory, which govern such near-light-speed particle interactions.

"Because of the huge number of variables involved, these are some of the most complicated calculations in all of physics," noted Tianle Wang, of Columbia University.

Computational challenge

To conquer the challenge, the theorists used a computing approach called lattice QCD, which "places" the particles on a four-dimensional space-time lattice (three spatial dimensions plus time). This box-like lattice allows them to map out all the possible quantum paths for the initial kaon to decay to the final two pions. The result becomes more accurate as the number of lattice points increases. Wang noted that the "Feynman integral" for the calculation reported here involved integrating 67 million variables!

These complex calculations were done by using cutting-edge supercomputers. The first part of the work, generating samples or snapshots of the most likely quark and gluon fields, was performed on supercomputers located in the US, Japan, and the UK. The second and most complex step of extracting the actual kaon decay amplitudes was performed at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility at DOE's Lawrence Berkeley National Laboratory.

But using the fastest computers is not enough: even on these machines, the calculations are only possible with highly optimized computer codes, developed for the calculation by the authors.

"The precision of our results cannot be increased significantly by simply performing more calculations," Kelly said. "Instead, in order to tighten our test of the Standard Model we must now overcome a number of more fundamental theoretical challenges. Our collaboration has already made significant strides in resolving these issues and coupled with improvements in computational techniques and the power of near-future DOE supercomputers, we expect to achieve much improved results within the next three to five years."

Credit: 
DOE/Brookhaven National Laboratory