Earth

Mysterious glowing coral reefs are fighting to recover

image: Acropora corals. Colourful bleaching in New Caledonia.

Image: 
The Ocean Agency/XL Catlin Seaview Survey

A new study by the University of Southampton has revealed why some corals exhibit a dazzling colourful display, instead of turning white, when they suffer 'coral bleaching' - a condition which can devastate reefs and is caused by ocean warming. The scientists behind the research think this phenomenon is a sign that corals are fighting to survive.

Many coral animals live in a fragile, mutually beneficial relationship, a 'symbiosis', with tiny algae embedded in their cells. The algae gain shelter, carbon dioxide and nutrients, while the corals receive photosynthetic products to fulfil their energy needs. If temperatures rise just 1°C above the usual summer maximum, this symbiosis breaks down: the algae are lost, the coral's white limestone skeleton shines through its transparent tissue, and a damaging process known as 'coral bleaching' occurs.

This condition can be fatal to the coral. Once its live tissue is gone, the skeleton is exposed to the eroding forces of the environment. Within a few years, an entire coral reef can break down and much of the biodiversity that depends on its complex structure is lost - a scenario which currently threatens the future of reefs around the world.

However, some bleaching corals undergo a transformation that until now has been mysterious, emitting a range of bright neon colours. Why this happens has now been explained by a team of scientists from the University of Southampton's Coral Reef Laboratory, who have published their detailed insights in the journal Current Biology.

The researchers conducted a series of controlled laboratory experiments at the coral aquarium facility of the University of Southampton. They found that during colourful bleaching events, corals produce what is effectively a sunscreen layer of their own, showing itself as a colourful display. Furthermore, it's thought this process encourages the coral symbionts to return.

Professor Jörg Wiedenmann, head of the University of Southampton's Coral Reef Laboratory, explains: "Our research shows colourful bleaching involves a self-regulating mechanism, a so-called optical feedback loop, which involves both partners of the symbiosis. In healthy corals, much of the sunlight is taken up by the photosynthetic pigments of the algal symbionts. When corals lose their symbionts, the excess light travels back and forth inside the animal tissue, reflected by the white coral skeleton. This increased internal light level is very stressful for the symbionts and may delay or even prevent their return after conditions return to normal.

"However, if the coral cells can still carry out at least some of their normal functions, despite the environmental stress that caused bleaching, the increased internal light levels will boost the production of colourful, photoprotective pigments. The resulting sunscreen layer will subsequently promote the return of the symbionts. As the recovering algal population starts taking up the light for their photosynthesis again, the light levels inside the coral will drop and the coral cells will lower the production of the colourful pigments to their normal level."

The researchers believe corals which undergo this process are likely to have experienced episodes of mild or brief ocean-warming or disturbances in their nutrient environment - rather than extreme events.

Dr. Cecilia D'Angelo, Lecturer of Molecular Coral Biology at Southampton, comments: "Bleaching is not always a death sentence for corals; the coral animal can still be alive. If the stress event is mild enough, corals can re-establish the symbiosis with their algal partner. Unfortunately, recent episodes of global bleaching caused by unusually warm water have resulted in high coral mortality, leaving the world's coral reefs struggling for survival."

Dr. Elena Bollati, Researcher at the National University of Singapore, who studied this subject during her PhD training at the University of Southampton, adds: "We reconstructed the temperature history of known colourful bleaching events around the globe using satellite imagery. These data are in excellent agreement with the conclusions of our controlled laboratory experiments, suggesting that colourful bleaching occurs in association with brief or mild episodes of heat stress."

The scientists are encouraged by recent reports suggesting colourful bleaching has occurred in some areas of the Great Barrier Reef during the most recent mass bleaching there in March-April 2020. They think this raises the hope that at least some patches of the world's largest reef system may have better recovery prospects than others, but emphasise that only a significant reduction of greenhouse gases at a global scale and sustained improvement in water quality at a regional level can save coral reefs beyond the 21st century.

Credit: 
University of Southampton

World can likely capture and store enough carbon dioxide to meet climate targets

The capture and storage of carbon dioxide (CO2) underground is one of the key components of the Intergovernmental Panel on Climate Change's (IPCC) reports on how to keep global warming to less than 2°C above pre-industrial levels by 2100.

Carbon capture and storage (CCS) would be used alongside other interventions such as renewable energy, energy efficiency, and electrification of the transportation sector.

The IPCC used models to create around 1,200 technology scenarios whereby climate change targets are met using a mix of these interventions, most of which require the use of CCS.

Now a new analysis from Imperial College London, published today in Energy & Environmental Science, suggests that no more than 2,700 gigatonnes (Gt) of carbon dioxide (CO2) storage would be needed to meet the IPCC's global warming targets. This is far less than leading estimates by academic and industry groups of what is available, which suggest there is more than 10,000 Gt of CO2 storage space globally.

It also found that the current rate of growth in the installed capacity of CCS is on track to meet some of the targets identified in IPCC reports, but that research and commercial efforts should focus on maintaining this growth while identifying enough underground space to store this much CO2.

CCS involves trapping CO2 at its emission source, such as fossil-fuel power stations, and storing it underground to keep it from entering the atmosphere. Together with other climate change mitigation strategies, CCS could help the world reach the climate change mitigation goals set out by the IPCC.

However, until now the amount of storage needed has not been specifically quantified.

The study has shown for the first time that the maximum storage space needed is only around 2,700 Gt, but that this amount will grow if CCS deployment is delayed. The researchers worked this out by combining data on the past 20 years of growth in CCS, information on historical rates of growth in energy infrastructure, and models commonly used to monitor the depletion of natural resources.

The research team, led by Dr Christopher Zahasky at Imperial's Department of Earth Science and Engineering, found that worldwide, CCS capacity has grown by 8.6 per cent per year over the past 20 years, putting us on a trajectory to meet many climate change mitigation scenarios that include CCS as part of the mix.

Dr Zahasky, who is now an assistant professor at the University of Wisconsin-Madison but conducted the work at Imperial, said: "Nearly all IPCC pathways to limit warming to 2°C require tens of gigatons of CO2 stored per year by mid-century. However, until now, we didn't know if these targets were achievable given historic data, or how these targets related to subsurface storage space requirements.

"We found that even the most ambitious scenarios are unlikely to need more than 2,700 Gt of CO2 storage resource globally, much less than the 10,000 Gt of storage resource that leading reports suggest is possible.?Our study shows that if climate change targets are not met by 2100, it won't be for a lack of carbon capture and storage space."

Study co-author Dr Samuel Krevor, also from the Department of Earth Science and Engineering, said: "Rather than focus our attention on looking at how much storage space is available, we decided for the first time to evaluate how much subsurface storage resource is actually needed, and how quickly it must be developed, to meet climate change mitigation targets."

The researchers say that the rate at which CO2 is stored is important to its success in climate change mitigation. The faster CO2 is stored, the less total subsurface storage resource is needed to meet storage targets. This is because, as reservoirs fill up, it becomes harder to find new reservoirs or to make further use of existing ones.

They found that storing CO2 faster, and starting sooner, than current deployment trends suggest might be needed to help governments meet the most ambitious climate change mitigation scenarios identified by the IPCC.

The study also demonstrates how using growth models, a common tool in resource assessment, can help industry and governments to monitor short-term CCS deployment progress and long-term resource requirements.
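As a sketch of what such growth modelling involves, the snippet below projects cumulative storage under sustained exponential growth of annual storage capacity, capped by a finite storage resource. The 8.6 per cent growth rate and the 2,700 Gt ceiling come from the article above; the starting rate and time horizon are hypothetical assumptions, and this is not the authors' model.

```python
# Minimal sketch (not the authors' model) of a CCS deployment growth model.
# Assumed: 8.6% annual growth and a 2,700 Gt resource (from the article),
# plus a hypothetical starting rate of 0.04 Gt CO2 stored per year.

def project_ccs(rate=0.04, growth=0.086, resource=2700.0, start=2020, years=80):
    """Yield (year, annual storage rate in Gt/yr, cumulative Gt stored)."""
    stored = 0.0
    for year in range(start, start + years):
        stored = min(stored + rate, resource)  # storage cannot exceed the resource
        yield year, rate, stored
        if stored < resource:
            rate *= 1 + growth                 # sustained exponential growth

for year, rate, stored in project_ccs():
    if year % 10 == 0:
        print(f"{year}: {rate:6.2f} Gt/yr, {stored:8.1f} Gt cumulative")
```

Under these toy assumptions the annual rate compounds smoothly until the resource cap binds, which is the basic bookkeeping such growth models formalise.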

However, the researchers point out that meeting CCS storage requirements will not be sufficient on its own to meet the IPCC climate change mitigation targets.

Dr Krevor said: "Our analysis shows good news for CCS if we keep up with this trajectory - but there are many other factors in mitigating climate change and its catastrophic effects, like using cleaner energy and transport as well as significantly increasing the efficiency of energy use."

Credit: 
Imperial College London

Development of heat-tolerant annual ryegrass germplasm

image: Crossing block of annual ryegrass cycle 3 of selection for germination at 40 degrees C.

Image: 
Courtesy of Eric Billman

Throughout the southeastern U.S., forage production is a critical pillar of agriculture and livestock production, particularly for the cattle industry. Annual ryegrass serves as the primary forage for many late winter and early spring production systems, but grazing time is often limited due to late fall planting to avoid high soil temperatures that cause secondary seed dormancy.

In a recently published Crop Science article, researchers from Mississippi State University used recurrent phenotypic selection to develop annual ryegrass germplasm that could successfully germinate and survive under high temperature stress conditions. Screening and selections were conducted using growth chambers to limit environmental effects.

Following three years of selection, population germination at 40°C increased from 5% to 46%. Realized heritability also increased significantly with each generation, totaling h² = 0.41 across all cycles of selection. Other aspects of plant morphology and maturity were not affected by selecting for high-temperature germination.
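For readers unfamiliar with the statistic, realized heritability can be estimated as the response to selection divided by the selection differential, h² = R/S. A minimal sketch with hypothetical numbers (not the study's data):

```python
# Realized heritability: h2 = R / S, where S is the selection differential
# (mean of selected parents minus population mean) and R is the response
# (mean of the next generation minus population mean). All numbers below
# are hypothetical, for illustration only.

def realized_heritability(pop_mean, selected_mean, next_gen_mean):
    S = selected_mean - pop_mean   # selection differential
    R = next_gen_mean - pop_mean   # response to selection
    return R / S

# e.g., 5% germination in the base population, 30% among selected parents,
# 15% in their offspring:
print(realized_heritability(5.0, 30.0, 15.0))  # -> 0.4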

This germplasm will potentially allow for late summer or early fall planting in the South, allowing producers more grazing days in the winter and spring. Seed increases and variety testing for cultivar release are currently ongoing as part of the forage crop breeding program at Mississippi State.

Credit: 
American Society of Agronomy

Cancer researchers gain valuable insights through a comprehensive review of Clioquinol

Clioquinol (CQ) is an old drug that was commonly used to treat fungal and protozoal infections of the gastrointestinal tract. Patients in Japan taking it developed subacute myelo-optic neuropathy, which resulted in CQ being discontinued for oral use. Recently, extensive research has been done regarding CQ's use for cancer treatment.

Dr. Q. Ping Dou and his team at the Karmanos Cancer Institute reviewed recent research literature and patents involving CQ for cancer therapy. They found that CQ had exhibited anti-tumor activities in a variety of cancer cell lines in preclinical studies. CQ inhibited the growth of cancer cells and was cytotoxic to cancer cells. However, these positive preclinical results did not translate to clinical efficacy. Results of a clinical trial showed that CQ had low therapeutic efficacy, probably because CQ is not effectively absorbed by tumor cells in vivo.

In this review article, Dou's team also summarized recent findings on several CQ analogues, novel combinations of CQ with other drugs, and novel metal complexes containing CQ; all of these preclinical approaches demonstrate strong anti-cancer activities. Promising examples include a gellan gum/glucosamine/CQ film that could be used for treating oral and skin cancers; the CQ analogue nitroxoline, which demonstrated higher potency and lower neurotoxicity than CQ as an anti-cancer agent; combinations of CQ with docosahexaenoic acid or disulfiram showing synergistic cytotoxicity; and CQ complexed with ruthenium or oxovanadium (IV), which potently inhibited cancer cell growth. The authors also summarized applications of CQ-related patents for cancer therapy.

The first author of the review article, Raheel Khan, concluded, "Because of the poor clinical results in patients, future efforts should be focused on discovering and researching different combinations, delivery methods, and analogues of clioquinol for use in cancer therapy. The research summarized in our review article can help guide future cancer research to take advantage of the insights gained from preclinical studies on clioquinol".

Credit: 
Bentham Science Publishers

The ins and outs of sex change in medaka fish

image: The teleost fish medaka (Oryzias latipes) used in this study. Left: male (XY); right: female (XX).
Females turn into males after starvation during a very early larval stage.

Image: 
TANAKA.

Larval nutrition plays a role in determining the sexual characteristics of Japanese rice fish, also called medaka (Oryzias latipes), report a team of researchers led by Nagoya University. The findings, published in the journal Biology Open, could further understanding of a rare condition in humans and other vertebrates, where they genetically belong to one sex but also have characteristics of the other.

Decades ago, scientists found that medaka fish often undergo sex reversal in the wild. This involves genetically female larvae (meaning they have two X chromosomes) going on to develop male characteristics, or vice versa. This has made medaka fish a model organism for studying environmental sex determination and other biological processes they share with other vertebrates.

Now, Nagoya University reproductive biologist Minoru Tanaka and colleagues in Japan have gained further insight into the factors that affect medaka sex reversal, potentially providing direction for future research into similar conditions in other species.

Scientists had already discovered that environmental factors, such as temperature changes in the brackish and fresh waters where medaka fish live, are likely involved in their sex reversal. Tanaka and his team wanted to know if nutrition also played a role.

They starved medaka larvae for five days. This was enough time to affect their metabolism without killing them. Three to four months later, the team examined the fish and found that 20% of the genetically female medaka had developed testes and characteristically male fins. The same did not occur in larvae that were not starved.

Further tests showed that sex reversal in the fish was associated with reduced fatty acid synthesis and lipid levels. Specifically, starvation suppressed a metabolic pathway that synthesizes a cofactor called coenzyme A (CoA) and disrupted a gene called fasn, which encodes a fatty-acid-synthesizing enzyme. These disruptions led to reductions in fatty acid synthesis. The scientists also found that a male gene, called dmrt1, was involved in the female-to-male reversal.

"Overall, our findings showed that the sex of medaka fish is affected by both the external environment and internal metabolism," Tanaka says. "We believe lipids may represent a novel sex regulation system that responds to nutritional conditions."

The team next plans on identifying other internal factors involved in medaka sex reversal. Future research should try to find the tissues or organs that sense changes in the internal environment and then produce key metabolites to regulate sex differentiation.

Credit: 
Nagoya University

Surging numbers of first-generation learners being left behind in global education

'First-generation learners' - a substantial number of pupils around the world who represent the first generation in their families to receive an education - are also significantly more likely to leave school without basic literacy or numeracy skills, a study suggests.

Research by academics at the Faculty of Education, University of Cambridge, Addis Ababa University and the Ethiopian Policy Studies Institute examined the progress of thousands of students in Ethiopia, including a large number of 'first-generation learners': children whose parents never went to school.

The numbers of such pupils have soared in many low and middle-income countries in recent decades, as access to education has widened. Primary school enrolment in Ethiopia, for example, has more than doubled since 2000, thanks to a wave of government education investment and reforms.

But the new study found that first-generation learners are much more likely to underperform in Maths and English, and that many struggle to progress through the school system.

The findings, published in the Oxford Review of Education, suggest that systems like Ethiopia's - which a generation ago catered mainly to the children of an elite minority - urgently need to adapt to prioritise the needs of first-generation learners, who often face greater disadvantages than their contemporaries.

Professor Pauline Rose, Director of the Research for Equitable Access and Learning (REAL) Centre in the Faculty of Education, and one of the paper's authors, said: "The experience of first-generation learners has largely gone under the radar. We know that high levels of parental education often benefit children, but we have considered far less how its absence is a disadvantage."

"Children from these backgrounds may, for example, have grown up without reading materials at home. Our research indicates that being a first-generation learner puts you at a disadvantage over and above being poor. New strategies are needed to prioritise these students if we really want to promote quality education for all."

The study used data from Young Lives, an international project studying childhood poverty, to assess whether there was a measurable relationship between being a first-generation learner and children's learning outcomes.

In particular, they drew on two data sets: one, from 2012/13, covered the progress of more than 13,700 Grade 4 and 5 students in various Ethiopian regions; the other, from 2016/17, covered roughly the same number and mix at Grades 7 and 8. They also drew on a sub-set of those who participated in both surveys, comprising around 3,000 students in total.

Around 12% of the in-school pupils in the dataset were first-generation learners. The researchers found that first-generation learners often come from more disadvantaged backgrounds than other pupils: for example, they are more likely to live further from school, come from poorer families, or lack access to a home computer. Regardless of their wider circumstances, however, first-generation learners were also consistently more likely to underperform at school.

For example, the researchers compiled the start-of-year test scores of students in Grades 7 and 8. These were standardised (or 'scaled') so that 500 represented the mean test score. Using this measure, the average test score of first-generation learners in Maths was 470, compared with 504 for non-first-generation pupils. In English, first-generation learners averaged 451, compared with 507 for their non-first-generation peers.
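As an illustration of this scaling convention, raw scores can be mapped to a distribution with mean 500; the standard deviation of 100 used below is an assumption for illustration, not a figure from the paper.

```python
# Hypothetical illustration of scaling raw test scores so that 500 is the
# mean; the target standard deviation of 100 is an assumption.
from statistics import mean, stdev

def scale(raw_scores, target_mean=500, target_sd=100):
    m, s = mean(raw_scores), stdev(raw_scores)
    return [target_mean + target_sd * (x - m) / s for x in raw_scores]

raw = [12, 18, 25, 31, 40]             # made-up raw scores
print([round(v) for v in scale(raw)])  # values centred on 500
```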

The attainment gap between first-generation learners and their peers was also shown to widen over time: first-generation learners from the Grade 4/5 cohort in the study, for example, were further behind their peers by the end of Grade 4 than when they began.

The authors argue that a widespread failure to consider the disadvantages faced by first-generation learners may, in part, explain why many low and middle-income countries are experiencing a so-called 'learning crisis' in which attainment in literacy and numeracy remains poor, despite widening access to education.

While this is often blamed on issues such as large class sizes or poor-quality teaching, the researchers say that it may have more to do with a surge of disadvantaged children into systems that, until recently, did not have to teach as many pupils from these backgrounds.

They suggest that many teachers may need extra training to help these pupils, who are often less well-prepared for school than those from more educated (and often wealthier) families. Curricula, assessment systems and attainment strategies may also need to be adapted to account for the fact that, in many parts of the world, the mix of students at primary school is now far more diverse than a generation ago.

Professor Tassew Woldehanna, President of Addis Ababa University and one of the paper's authors, said: "It is already widely acknowledged that when children around the world start to go back to school after the COVID-19 lockdowns, many of those from less-advantaged backgrounds will almost certainly have fallen further behind in their education compared with their peers. This data suggests that in low and middle-income countries, first-generation learners should be the target of urgent attention, given the disadvantages they already face."

"It is likely that, at the very least, a similar situation to the one we have seen in Ethiopia exists in other sub-Saharan African countries, where many of today's parents and caregivers similarly never went to school," Rose added.

"These findings show that schooling in its current form is not helping these children to catch up: if anything, it's making things slightly worse. There are ways to structure education differently, so that all children learn at an appropriate pace. But we start by accepting that as access to education widens, it is inevitable that some children will need more attention than others. That may not be due to a lack of quality in the system, but because their parents never had the same opportunities."

Credit: 
University of Cambridge

Peculiar behavior of the beetle Toramus larvae

image: Living last instar of Toramus quadriguttatus.

Image: 
The Coleopterists Society

The beetles of the family Erotylidae (Coleoptera: Cucujoidea) are morphologically and biologically diversified into six subfamilies, 10 tribes, and over 3,500 species. The tribe Toramini includes 4 genera and is distributed worldwide. Although the biology of this group is poorly investigated, Leschen (2003) reported that larvae of Toramus and Loberoschema retain exuviae on their abdomen throughout larval development.

In this study, the toramine larvae of 3 species of Toramus and 1 species of Loberoschema are fully described (Fig. 1), and the morphological character states of larval Toramini and within Toramus are discussed. We found that larvae of the genus Toramus attach their exuviae to their distal abdomen, with each exuvia from the preceding instar attached to the next. Living larvae retain all of their exuviae, which are piled vertically and directed posteriorly, as seen in Fig. 2. The exuvial attachment is facilitated by modified hook-like setae with flattened shafts on abdominal tergite VIII, which are inserted into the posterior end of the ecdysial line of the exuvia of the previous instar (Fig. 3).

Why do they carry their exuviae? Among insects, debris-cloaking with feces, exuviae and/or other debris gathered from the habitat occurs in immature insects belonging to assassin bugs (Reduviidae: Hemiptera), lacewings (Chrysopidae: Neuroptera), two families of small beetles (Derodontidae and Anamorphidae: Coleoptera), tortoise beetles (Cassidinae: Chrysomelidae: Coleoptera), geometer moths (Geometridae: Lepidoptera) and owlet moths (Noctuidae: Lepidoptera). There have been few studies on the function of debris-cloaking, and the exuvial retention of Toramus and anamorphines is distinctive in forming a vertical pile-like "tail" (Fig. 2). We hypothesized that the exuvial attachment of Toramus serves as a kind of autotomy, whereby the exuviae are removed by a predator, leaving the body of the larva unharmed -- "exuvial autotomy".

The loss of a tail (caudal autotomy) is well known in lizards, and the loss of an appendage in arthropods (appendage autotomy) has evolved independently several times within each group. In general, "true" autotomy occurs along a weakened portion, called the breakage plane, to shed body parts. Although we observed no such specific areas of weakness on the hook-like setae, exuvial autotomy, if it exists in these beetles, may not require breakage planes, because the anchoring setae of such small beetle larvae may be fragile enough to break easily, either through movement through the environment or under attack by potential predators. Indeed, some of the larvae we examined had fractured hook-like setae.

In behavioral tests using Toramus larvae and spiders as potential predators, the preliminary results provide little support for the hypothesis that exuvial retention acts as a predator deterrent. Proper assessment of its defensive function in toramines requires more comprehensive observational studies involving larvae and potential predators.

Credit: 
Ehime University

Oldest connection with Native Americans identified near Lake Baikal in Siberia

image: Excavation in 1976 of the Ust'-Kyakhta-3 site, located on the right bank of the Selenga River in the vicinity of Ust-Kyakhta village in the Kyakhtinski Region of the Republic of Buryatia (Russia).

Image: 
A. P. Okladnikov

Using human population genetics, ancient pathogen genomics and isotope analysis, a team of researchers assessed the population history of the Lake Baikal region, finding the deepest connection to date between the peoples of Siberia and the Americas. The current study, published in the journal Cell, also demonstrates human mobility, and hence connectivity, across Eurasia during the Early Bronze Age.

Modern humans have lived near Lake Baikal since the Upper Paleolithic, and have left behind a rich archaeological record. Ancient genomes from the region have revealed multiple genetic turnovers and admixture events, indicating that the transition from the Neolithic to the Bronze Age was facilitated by human mobility and complex cultural interactions. The nature and timing of these interactions, however, remains largely unknown.

A new study published in the journal Cell reports the findings of 19 newly sequenced ancient human genomes from the region of Lake Baikal, including one of the oldest reported from that region. Led by the Department of Archaeogenetics at the Max Planck Institute for the Science of Human History, the study illuminates the population history of the region, revealing deep connections with the First Peoples of the Americas, dating as far back as the Upper Paleolithic period, as well as connectivity across Eurasia during the Early Bronze Age.

The deepest link between peoples

"This study reveals the deepest link between Upper Paleolithic Siberians and First Americans," says He Yu, first author of the study. "We believe this could shed light on future studies about Native American population history."

Past studies have indicated a connection between Siberian and American populations, but a 14,000-year-old individual analysed in this study is the oldest to carry the mixed ancestry present in Native Americans. Using an extremely fragmented tooth excavated in 1962 at the Ust'-Kyakhta-3 site, researchers generated a shotgun-sequenced genome enabled by cutting-edge techniques in molecular biology.

This individual from southern Siberia, along with a younger Mesolithic individual from northeastern Siberia, shares the same genetic mixture of Ancient North Eurasian (ANE) and Northeast Asian (NEA) ancestry found in Native Americans, suggesting that the ancestry which later gave rise to Native Americans in North and South America was much more widely distributed than previously assumed. Evidence suggests that this population experienced frequent genetic contacts with NEA populations, resulting in varying admixture proportions across time and space.

"The Upper Paleolithic genome will provide a legacy to study human genetic history in the future," says Cosimo Posth, a senior author of the paper. Further genetic evidence from Upper Paleolithic Siberian groups is necessary to determine when and where the ancestral gene pool of Native Ameri-cans came together.

A web of prehistoric connections

In addition to this transcontinental connection, the study presents connectivity within Eurasia as evidenced in both human and pathogen genomes, as well as stable isotope analysis. Combining these lines of evidence, the researchers were able to produce a detailed description of the population history of the Lake Baikal region.

The presence of Eastern European steppe-related ancestry is evidence of contact between southern Siberian and western Eurasian steppe populations in the preamble to the Early Bronze Age, an era characterized by increasing social and technological complexity. The surprising presence of Yersinia pestis, the plague-causing pathogen, points to further wide-ranging contacts.

Although the spread of Y. pestis was postulated to be facilitated by migrations from the steppe, the two individuals identified here with the pathogen were genetically northeastern-Asian-like. Isotope analysis of one of the infected individuals revealed a non-local signal, suggesting origins outside the region of discovery. In addition, the strain of Y. pestis the pair carried is most closely related to a contemporaneous strain identified in an individual from the Baltic region of northeastern Europe, further supporting the high mobility of Bronze Age pathogens and likely also people.

"This easternmost appearance of ancient Y. pestis strains is likely suggestive of long-range mobility during the Bronze Age," says Maria Spyrou, one of the study's coauthors. "In the future, with the generation of additional data we hope to delineate the spreading patterns of plague in more detail." concludes Johannes Krause, senior author of the study.

Credit: 
Max Planck Institute of Geoanthropology

NIST team builds hybrid quantum system by entangling molecule with atom

image: NIST physicist James Chin-wen Chou adjusts one of the laser beams used to manipulate an atom and a molecule in experiments that could help build hybrid quantum information systems.

Image: 
Burrus/NIST

Physicists at the National Institute of Standards and Technology (NIST) have boosted their control of the fundamental properties of molecules at the quantum level by linking or "entangling" an electrically charged atom and an electrically charged molecule, showcasing a way to build hybrid quantum information systems that could manipulate, store and transmit different forms of data.

Described in a Nature paper posted online May 20, the new NIST method could help build large-scale quantum computers and networks by connecting quantum bits (qubits) based on otherwise incompatible hardware designs and operating frequencies. Mixed-platform quantum systems could offer versatility like that of conventional computer systems, which, for example, can exchange data among an electronic processor, an optical disc, and a magnetic hard drive.

The NIST experiments successfully entangled the properties of an electron in the atomic ion with the rotational states of the molecule so that measurements of one particle would control the properties of the other. The research builds on the same group's 2017 demonstration of quantum control of a molecule, which extended techniques long used to manipulate atoms to the more complicated and potentially more fruitful arena offered by molecules, composed of multiple atoms bonded together.

Molecules have various internal energy levels, like atoms, but also rotate and vibrate at many different speeds and angles. Molecules could therefore act as mediators in quantum systems by converting quantum information across a wide range of qubit frequencies ranging from a few thousand to a few trillion cycles per second. With vibration, molecules could offer even higher qubit frequencies.

"We proved the atomic ion and molecular ion are entangled, and we also showed you get a broad selection of qubit frequencies in the molecule," NIST physicist James (Chin-wen) Chou said.

A qubit represents the digital data bits 0 and 1 in terms of two different quantum states, such as low- and high-energy levels in an atom. A qubit can also exist in a "superposition" of both states at once. The NIST researchers entangled two energy levels of a calcium atomic ion with two different pairs of rotational states of a calcium hydride molecular ion, which is a calcium ion bonded to a hydrogen atom. The molecular qubit had a transition frequency -- the rate of cycling between two rotational states -- of either 13.4 kilohertz (kHz, thousands of cycles per second) for the low-energy pair or 855 gigahertz (GHz, billions of cycles per second) for the high-energy pair.

"Molecules provide a selection of transition frequencies and we can choose from many types of molecules, so that is a huge range of qubit frequencies we can bring into quantum information science," Chou said. "We are taking advantage of transitions found in nature so the results will be the same for everyone."

The experiments used a specific formula of blue and infrared laser beams of various intensities, orientations and pulse sequences to cool, entangle and measure the quantum states of the ions.

First, the NIST researchers trapped and cooled the two ions to their lowest-energy states. The pair repelled each other due to their physical proximity and positive electric charges, and the repulsion acted like a spring locking their motion. Laser pulses added energy to the molecule's rotation and created a superposition of low-energy and high-energy rotational states, which also set off a shared motion, so the two ions began rocking or swinging in unison, in this case in opposite directions.

The molecule's rotation was thus entangled with its motion. More laser pulses exploited the two ions' shared motion to induce the atomic ion into a superposition of low and high energy levels. In this way, entanglement was transferred from the motion to encompass the atom. The researchers determined the state of the atomic ion by shining a laser on it and measuring its fluorescence, or how much light it scattered.

The NIST researchers demonstrated the technique with two sets of the molecule's rotational properties, successfully achieving entanglement 87% of the time with a low-energy pair (qubit) and 76% of the time with a higher-energy pair. In the low-energy case, the molecule rotated at two slightly different angles, like a top, but in both states at once. In the high-energy case, the molecule was spinning at two rates simultaneously, separated by a large difference in speed.

The new work was made possible by the quantum logic techniques shown in the 2017 experiment. The researchers applied pulses of infrared laser light to drive switching between two of more than 100 possible rotational states of the molecule. The researchers knew this transition occurred because a certain amount of energy was added to the two ions' shared motion. The researchers knew the ions were entangled based on the light signals given off by the atomic ion.

The new methods could be used with a wide range of molecular ions composed of different elements, offering a broad selection of qubit properties.

The approach could connect different types of qubits operating at different frequencies, such as atoms and superconducting systems or light particles, including those in telecommunications and microwave components. In addition to applications in quantum information, the new techniques may also be useful in making quantum sensors or performing quantum-enhanced chemistry.

Credit: 
National Institute of Standards and Technology (NIST)

Elucidating the mechanism of a light-driven sodium pump

image: Petr Skopintsev (left), Jörg Standfuss (centre) and Christopher Milne (right) at the Alvra experimental station at the X-ray free-electron laser SwissFEL

Image: 
Paul Scherrer Institute/Mahir Dzambegovic

Researchers at the Paul Scherrer Institute PSI have succeeded for the first time in recording, in action, a light-driven sodium pump from bacterial cells. The findings promise progress in the development of new methods in neurobiology. The researchers used the new X-ray free-electron laser SwissFEL for their investigations. They have published their findings today in the journal Nature.

Sodium, which is contained in ordinary table salt, plays an essential role in the vital processes of most biological cells. Many cells build up a concentration gradient between their interior and the environment. For this purpose, special pumps in the cell membrane transport sodium out of the cell. With the help of such a concentration gradient, cells of the small intestine or the kidneys, for example, absorb certain sugars.

Such sodium pumps are also found in the membranes of bacteria. They belong to the family of so-called rhodopsins, special proteins that are activated by light. In ocean-dwelling bacteria such as Krokinobacter eikastus, for example, rhodopsins transport sodium out of the cell. The crucial component of rhodopsin is so-called retinal, a form of vitamin A that is of central importance for humans, animals, certain algae and many bacteria. In the retina of the human eye, for example, retinal initiates the visual process when it changes shape under the influence of light.

Lightning-fast movie making

Researchers at the Paul Scherrer Institute PSI have now succeeded in capturing images of the sodium pump of Krokinobacter eikastus in action and documenting the molecular changes necessary for sodium transport. To do this, they used a technique called serial femtosecond crystallography. A femtosecond is one-quadrillionth of a second; a millisecond is one-thousandth of a second. The sample to be examined - in this case a crystallised sodium pump - is struck first by a laser and then by an X-ray beam. In the case of bacterial rhodopsin, the laser activates the retinal, and the subsequent X-ray beam provides data on structural changes within the entire protein molecule. Since SwissFEL produces 100 of these femtosecond X-ray pulses per second, recordings can be made with high temporal resolution. "We can only achieve temporal resolution in the femtosecond range at PSI with the help of SwissFEL", says Christopher Milne, who helped to develop the Alvra experimental station where the recordings were made. "One of the challenges is to inject the crystals into the setup so that they meet the pulses of the laser and the X-ray beam with pinpoint accuracy."

Pump in action

In the current experiment, the time intervals between the laser and X-ray pulses were between 800 femtoseconds and 20 milliseconds. Each X-ray pulse creates a single image of a protein crystal. And just as a cinema film ultimately consists of a large number of individual photographs that are strung together in a series and played back rapidly, the individual pictures obtained with the help of SwissFEL can be put together to form a kind of film.

"The process that we were able to observe in our experiment, and which roughly corresponds to the transport of a sodium ion through a cell membrane, takes a total of 20 milliseconds", explains Jörg Standfuss, who heads the group for time-resolved crystallography in the Biology and Chemistry Division at PSI . "Besides elucidating the transport process, we were also able to show how the sodium pump achieves its specificity for sodium through small changes in its structure." This ensures that only sodium ions, and no other positively charged ions, are transported. With these investigations, the researchers also revealed the molecular changes through which the pump prevents sodium ions that have been transported out of the cell from flowing back into it.

Advances in optogenetics and neurobiology

Since sodium concentration differences also play a special role in the way nerve cells conduct stimuli, neurons have powerful sodium pumps in their membranes. If more sodium flows into the cell's interior, a stimulus is transmitted. These pumps then transport the excess sodium in the cell to the outside again.

Since the sodium pump of Krokinobacter eikastus is driven by light, researchers can now use it for so-called optogenetics. With this technology, cells, in this case nerve cells, are genetically modified in such a way that they can be controlled by light. The pump is installed in nerve cells using methods of molecular genetics. If it is then activated by light, a neuron can no longer transmit stimuli, for example, since this would require an increase in the sodium concentration in the nerve cell. However, bacterial rhodopsin prevents this by continuously transporting sodium out of the cell. Thus active sodium pumps render a neuron inactive.

"If we understand exactly what is going on in the sodium pump of the bacterium, it can help to improve experiments in optogenetics", says Petr Skopintsev, a PhD candidate in the time-resolved crystallography group. "For example, it can be used to identify variants of bacterial rhodopsin that work more effectively than the form that is usually found in Krokinobacter." In addition, the researchers hope to gain insights into how individual mutations can change the ion pumps so that they then transport ions other than sodium.

Credit: 
Paul Scherrer Institute

Release of a new soil moisture product (2002-2011) for mainland China

image: Spatial variation of (a) precipitation, (b) evaporation, (c) near-surface soil moisture, and (d) soil porosity in mainland China. The values in (a-c) are averaged over July of 2002-2011, and (d) is from Shangguan et al. (2013). Grids covering large lakes and near-coastal areas are removed. The low soil moisture content in part of the southwestern region (oval) reflects the region's predominance of coarse-grained purple soils with low soil porosity (d).

Image: 
©Science China Press

As one of the so-called essential climate variables (ECVs), soil moisture plays an important role in the water-energy cycle and land-atmosphere interactions. Although a number of microwave-based satellite missions have taken on soil moisture retrieval on top of their other objectives, obtaining high-quality soil moisture products at regional scales remains difficult, mainly due to the impacts of vegetation and surface roughness. Land data assimilation, which constrains model predictions with passive microwave signals, can be an effective way to reduce uncertainties in soil moisture estimation, but it faces its own challenges, such as biased meteorological forcing, uncertain model parameters, and inadequate in-situ soil moisture observations for validation.

Considering the above, over the past decade, researchers from Tsinghua University have established a dual-pass land data assimilation system (LDAS) that auto-calibrates key model parameters, developed a high-spatiotemporal-resolution China Meteorological Forcing Dataset to drive the LDAS, established (with other teams) four soil moisture observation networks consisting of more than 100 stations to provide ground truth, and eventually produced a gridded soil moisture dataset by assimilating signals from a passive microwave satellite. This new product provides daily soil moisture content for 2002-2011 in the surface, root and deep layers, at a spatial resolution of 0.25° over China. Evaluation against the aforementioned soil moisture networks suggests this dataset is more consistent with ground observations than remote sensing retrievals and land surface simulations. The product is now officially released at the National Tibetan Plateau Data Center, and has already contributed to studies involving land-atmosphere interaction, permafrost degradation, and alpine ecology.

Based on the newly released product, the authors further reveal that: (1) the spatial pattern of soil moisture is basically consistent with that of precipitation and evaporation, i.e., wetter in southeastern China and drier in northwestern China (Figure 1a-c); (2) parts of southwestern China have relatively low soil moisture content, reflecting a unique feature of this specific region, which is dominated by coarse-grained purple soils characterized by low soil porosity (Figure 1c-d); (3) soil moisture exhibits stronger seasonal variability in China's climatic transition zone than in its humid and arid regions, and larger interannual variability in the relatively dry north than in the relatively wet south, most pronounced in spring and autumn. These findings shall contribute to the understanding of climate-, hydrology- and ecology-related spatial patterns and interannual variabilities in China.
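To make the notions of seasonal and interannual variability concrete, the sketch below computes both from a synthetic gridded daily product with dimensions (time, lat, lon); the array sizes and data are made up for illustration and do not come from the released dataset.

```python
# Sketch of computing seasonal and interannual variability from a gridded
# daily soil moisture product shaped (time, lat, lon). Data are synthetic;
# the real product covers 2002-2011 at 0.25 degrees over China.
import numpy as np

rng = np.random.default_rng(0)
years, days = 10, 365
sm = rng.uniform(0.1, 0.4, size=(years * days, 40, 60))  # fake soil moisture grid

# Crude monthly means: drop the last 5 days of each year, use 30-day "months".
monthly = (sm.reshape(years, days, 40, 60)[:, :360]
             .reshape(years, 12, 30, 40, 60)
             .mean(axis=2))                               # (years, months, lat, lon)

seasonal_var = monthly.mean(axis=0).std(axis=0)    # spread of the mean annual cycle
interannual_var = monthly.mean(axis=1).std(axis=0) # year-to-year spread of annual means
print(seasonal_var.shape, interannual_var.shape)   # (40, 60) (40, 60)
```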

Credit: 
Science China Press

At the crossroads

image: Hematopoietic stem cells in the bone marrow (left) give rise to a variety of blood cell types. This includes white blood cells such as T cells and dendritic cells responsible for our immune defense (upper right panel) or red blood cells responsible for oxygen transport (lower right panel). The epigenetic regulator MOF pushes hematopoietic stem cells to mature into the disc-shaped red blood cells.

Image: 
© MPI of Immunobiology and Epigenetics, C.Pessoa Rodrigues, Freiburg

On average, the human body contains 35 trillion red blood cells (RBCs), and approximately three million of these small disc-shaped cells die every second. In that same second, the same number is produced to maintain the level of active RBCs. Interestingly, all of these cells arise through a multi-step differentiation process called erythropoiesis. They start from hematopoietic stem cells (HSCs), the precursors of every blood cell type including all types of immune cells, differentiate first into multipotent progenitor cells (MPPs), and then specialize gradually into mature red blood cells.

If this differentiation process fails, it can be detrimental to our health. For instance, if fewer HSCs follow the RBC roadmap, the individual will be prone to developing anemia. Abnormalities in the immune cell roadmap, on the other hand, have been associated with the onset of leukemia.

Epigenetic modulation in early hematopoiesis

The lab of Asifa Akhtar at the MPI of Immunobiology and Epigenetics in Freiburg investigates what governs the differentiation process of blood cells. Now, the team has identified how the enzyme MOF, an epigenetic regulator, orchestrates HSC fate in erythropoiesis.

"One of the most important intrinsic cues governing cell developmental processes is the modulation of the chromatin landscape," says Asifa Akhtar. In our cells, DNA is packaged around histone proteins to make the chromatin structure. This packaging plays a crucial role in cell type-specific gene regulation and, of course, also in erythroid differentiation. In its default state chromatin is not "permissive", meaning genes are switched off. But shifting histones opens the chromatin and promotes gene expression.

Epigenetic regulator guides HSCs on the right path

The enzyme MOF is known to directly trigger the "opening" of chromatin by acetylating histone H4 at one specific site (H4K16ac). When the lab tracked MOF occupancy during erythropoiesis in mice, they found that the enzyme dynamically orchestrates erythropoiesis by regulating the chromatin accessibility of HSCs and RBC progenitors. "Our data show that the correct dosage and timing of Mof during blood cell development is essential to prime the chromatin for activation of the erythroid development program. This process ensures the correct transcription factor network that will be pivotal for the erythroid branch," says first author Cecilia Pessoa Rodrigues.

The Max Planck researchers are convinced that these findings could mean considerable progress in our understanding of the erythroid lineage commitment and may give rise to new therapeutic approaches in diseases such as leukemia or anemia. Although the precise consequence of MOF depletion in humans remains unanswered, it is already known that a balanced and controlled activity of epigenetic regulators is essential for the normal development of hematopoietic cells. "It is not surprising that low levels of MOF are linked to acute myeloid leukemia (AML). We anticipate this could be explained by chromatin acetylation imbalance that is critical for relevant factors required for normal hematopoiesis. Revealing the right levels of chromatin accessibility and, subsequently, the gene regulatory mechanisms that fine-tune the differentiation trajectories will be insightful for further understanding of hematopoiesis in healthy and diseased states," says Asifa Akhtar.

Credit: 
Max Planck Institute of Immunobiology and Epigenetics

Great potential in regulating plant greenhouse gas emissions

You cannot see them with the naked eye, but most plants emit volatile gases - isoprenoids - into the atmosphere when they breathe and grow. Some plants emit close to nothing; others emit kilograms annually.

Why are plant isoprenoid emissions interesting? Well, isoprenoids contribute immensely to the amounts of hydrocarbons released into the atmosphere, where they can be converted into potent greenhouse gases, affecting climate change. Actually, it has been estimated that short-chain isoprenoids account for more than 80% of all volatile organic compounds emitted from all living organisms, totaling about 650 million tons of carbon per year.

"We discovered a new way that plants regulate how much volatile isoprenoids they emit into the atmosphere, which had long been unknown. Some plants emit a lot, while very similar species don't emit them at all. This is interesting from a basic research point of view to better understand these emissions and how growing different plants might affect carbon cycling and impact greenhouse gases," says first-author behind a new study recently published in eLife, Senior Researcher Mareike Bongers from The Novo Nordisk Foundation Center for Biosustainability and Australian Institute for Bioengineering and Nanotechnology, The University of Queensland.

Crops that emit a lot of isoprene include, for example, oil palms; spruce, which is grown for timber; and aspen, which is grown for timber and biofuel. With this knowledge, farmers could in principle optimise forest land and farming area by planting fewer high-emitting plants and more zero-emitters.

"It should be said, though, that we do not know for sure that all effects of these emissions are bad, more research is needed on that. But what is clear is that many of the harmful effects of isoprenoid emissions happen when they react with common air pollutants, which affects greenhouse gas formation and air quality. Therefore, large plantations with high emissions are particularly troublesome in the vicinity of industrial or municipal air pollution. So, reducing pollution is another way to address the problem," says Mareike Bongers.

The researchers behind this study are now looking into the possibility of using this new knowledge in applied biotech. The researchers actually discovered the new regulatory mechanism, because they tried to engineer the bacterium E. coli to produce sought-after isoprenoids, which could replace many fossil fuel chemicals if they could be produced more cheaply.

So, while engineering plant genes into E. coli to improve isoprenoid production, the researchers became aware of the plant-based regulation mechanism. When E. coli was engineered with plant genes for an enzyme known as HDR, the cells produced two important chemicals in different ratios, and this ratio influenced how much isoprene could be produced.

This revelation is very useful in applied biotech, because isoprenoids can be turned into products like rubber. Goodyear has already produced car tires made from bio-produced isoprene. Furthermore, the findings could also improve the production of monoterpene isoprenoids, which are excellent jet fuels because they are very energy dense.

"This is particularly interesting from a sustainability perspective, because it is not anticipated that airplanes can be fuelled from anything else than liquid fuels, as opposed to ground transportation, which could be electric," she says.

Finally, isoprenoids are also used as flavours and fragrances in perfumes and cosmetics, and they are very important as active compounds in some drugs, for instance the anti-malarial drug artemisinin or taxadiene, from which the cancer drug Taxol is made.

Today, most labs and biotech companies that make isoprenoids use a pathway from yeast, since the yields achieved have been much higher than with E. coli. But the pathway used by E. coli and plants has a higher theoretical yield, meaning that more isoprenoids could in principle be made from the same amount of sugar in E. coli than in yeast. Therefore, trying to optimise E. coli for isoprenoid production makes good commercial sense.

The team compared eight different plant HDR genes and one cyanobacterial HDR gene in E. coli. The best results were obtained with genes from peach, poplar and castor bean. Since this was a proof of concept, the team only produced 2 mg of isoprene per litre of cell broth, but with further engineering and fermentation optimization efforts, the researchers expect to improve isoprene production in E. coli using this system.

"We saw that choosing the right plant enzyme made a big difference for isoprene production in E. coli. So, our 'learning from nature' approach on how some plants became so good at emitting isoprenoids really helped us to design more efficient cell factories," she concludes.

Credit: 
Technical University of Denmark

Fire aerosols decrease global terrestrial ecosystem productivity through changing climate

image: Great Xing'an larch forest of northeast China in 2011 after the 2010 high-severity fire

Image: 
Zhihua Liu

Fire is the primary form of terrestrial ecosystem disturbance on a global scale, and a major source of aerosols from the terrestrial biosphere to the atmosphere.

Most previous studies have quantified the effects of fire aerosols on climate and atmospheric circulation, or on regional and site-scale terrestrial ecosystem productivity. So far, only one study has quantified the global impacts of fire aerosols on terrestrial ecosystem productivity, and it was based on offline simulations driven by prescribed atmospheric forcing, so it did not consider the impacts fire aerosols exert by changing the climate (e.g., cloud-aerosol interactions or climate feedbacks).

In a paper recently published in Atmospheric and Oceanic Science Letters, Dr. Fang Li from the Institute of Atmospheric Physics, Chinese Academy of Sciences, provided the first quantitative assessment of the impacts of fire aerosols on global ecosystem productivity that takes into account the aerosols' climatic effects. The study was based on fully coupled (atmosphere-land-ocean-sea-ice) simulations of the global Earth system model CESM1.2.

According to this study, fire aerosols generally decreased terrestrial gross primary productivity (GPP; the carbon input of terrestrial ecosystems, i.e., carbon uptake through photosynthesis) in vegetated areas, by a global total of about 1.6 Pg C per year, mainly because fire aerosols cooled and dried the land surface and weakened direct photosynthetically active radiation (PAR). Exceptions were the Amazon and some regions in North America, where the effect was mainly due to a fire-aerosol-induced wetter land surface and increased diffuse PAR.

"Cooling, drying, and light attenuation are major impacts of fire aerosols on the global terrestrial ecosystem productivity," concludes Dr. Li.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Researchers develop material capable of being invisible or reflective

image: Ivan Sinev, a senior researcher at ITMO University's Department of Physics and Engineering.

Image: 
ITMO.News (news.itmo.ru)

Modern optical devices require constant tuning of their light-interaction settings. For that purpose, there exist various mechanical apparatuses that shift lenses, rotate reflectors, and move emitters. An international research team that includes staff members of ITMO University and the University of Exeter has proposed a new metamaterial capable of changing its optical properties without any mechanical input. This development could significantly improve the reliability of complex optical devices while making them cheaper to manufacture. The study was featured on the cover of the May 2020 issue of Optica.

The rapid development of physics and materials science in recent decades has brought humanity a wide selection of materials. Now, those who design complex devices are less bound by the limitations of traditional materials such as metals, wood, glass, or minerals. In this regard, the so-called metamaterials, which are studied at ITMO University among other places, open up incredible opportunities. Thanks to their complex periodic structure, their properties are relatively independent of those of their components. Such structures can be volumetric or flat - in the latter case, they are referred to as metasurfaces.

"Metasurfaces allow us to achieve many interesting effects in the manipulation of light," says Ivan Sinev, a senior researcher at ITMO University's Department of Physics and Engineering. "But these metasurfaces have one issue: how they interact with light is decided right in the moment when we design their structure. When creating devices for practical use, we would like to be able to control these properties not only at the outset, but during use, as well."

In their search for materials for adaptive optical devices, researchers from ITMO University, who possess great experience in working with silicon metasurfaces, have joined forces with their colleagues from the University of Exeter in the UK, who have a lot of experience in working with phase-change materials. Among such materials is, for instance, the germanium antimony telluride (GeSbTe) compound, often used in DVDs.

"We've made calculations to see what this new composite material would look like," says Pavel Trofimov, PhD student at the Department of Physics and Engineering. "We have an inclusion of GeSbTe embedded as a thin layer between two layers of silicon. It's a sort of sandwich: first we coat a blank substrate with silicon, then put on a layer of phase-change material, and then some more silicon."

Then, using e-beam lithography, the scientists converted the layered structure into a metasurface: an array of microscopic disks, which was then tested in the laboratory for its ability to manipulate light. As the researchers expected, combining the two materials into a complex periodic structure produced an important effect: the surface's transparency could be changed during the experiment. The reason is that a silicon disk has two optical resonances in the near-infrared region, allowing it to strongly reflect IR beams directed onto its surface. The layer of GeSbTe makes it possible to "switch off" one of the two resonances, rendering the disk nearly transparent to near-infrared light.

Phase-change materials have two states: a crystalline state, in which their molecules are arranged in an ordered structure, and an amorphous state. If the layer of GeSbTe at the centre of the metamaterial is in the crystalline state, the second resonance disappears; if it is in the amorphous state, the disk continues to reflect IR beams.
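As a rough intuition for this switching behaviour, one can caricature the disk's reflection spectrum as two resonance peaks, one of which is removed when the GeSbTe layer crystallises. The toy model below is not from the paper, and all values in it are arbitrary:

```python
# Toy model (not from the paper): caricature the disk's reflectance as the sum
# of two Lorentzian resonance peaks. Crystallising the GeSbTe layer is modelled
# as switching the second resonance off. All numbers are arbitrary.
import numpy as np

def lorentzian(freq, center, width):
    """Unit-height Lorentzian line shape."""
    return (width / 2) ** 2 / ((freq - center) ** 2 + (width / 2) ** 2)

freq = np.linspace(180.0, 260.0, 500)   # frequency axis (arbitrary units)
res1 = lorentzian(freq, 200.0, 8.0)     # first optical resonance
res2 = lorentzian(freq, 235.0, 8.0)     # second optical resonance

reflect_amorphous = res1 + res2         # both resonances: strong IR reflection
reflect_crystalline = res1              # second resonance switched off

# Near the second resonance the crystalline state barely reflects:
mask = freq > 220
print(reflect_amorphous[mask].max(), reflect_crystalline[mask].max())
```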

"To switch between the two metasurface states, we've used a sufficiently powerful pulse laser," explains Pavel Trofimov. "By focusing the laser on our disk, we're able to perform the switch relatively quickly. A short laser pulse heats up the GeSbTe layer nearly to the melting point, after which it quickly cools down and becomes amorphous. If we subject it to a series of short pulses, it cools down more slowly, settling into a crystalline state."

The properties of this new metasurface could be used in various applications. These include, first and foremost, lidars - devices that scan spaces by emitting infrared pulses and receiving the reflected beams. The same fabrication principle could also serve as the basis for ultra-thin photographic lenses, such as those used in phone cameras.

Credit: 
ITMO University