
Development of cost-efficient electrocatalyst for hydrogen production

image: Schematic diagram of the step-by-step synthesis process for the preparation of Ti-MoP.

Image: 
Korea Institute of Science and Technology (KIST)

The key to promoting the hydrogen economy, represented by hydrogen vehicles, is producing hydrogen for electricity generation at an affordable price. Hydrogen production methods include capturing by-product hydrogen, reforming fossil fuels, and electrolyzing water. Water electrolysis in particular is an eco-friendly way to produce hydrogen, and the choice of catalyst is the most important factor determining its efficiency and price competitiveness. However, water electrolysis devices rely on a platinum (Pt) catalyst, which is unmatched at speeding up the hydrogen generation reaction and sustaining long-term durability, but is so expensive that it undermines the method's price competitiveness.

Water electrolysis devices vary in the electrolyte that dissolves in water and allows current to flow. A device that uses a proton exchange membrane (PEM), for instance, exhibits a high rate of hydrogen generation even with a catalyst made of a transition metal instead of an expensive Pt-based one, and for this reason a great deal of research has aimed at commercializing the technology. But while that research has focused on achieving high reaction activity, work on improving the durability of transition metals, which corrode easily in an electrochemical environment, has been relatively neglected.

The Korea Institute of Science and Technology (KIST) announced that a team headed by Dr. Sung-Jong Yoo from the Center for Hydrogen-Fuel Cell Research developed a catalyst made of a transition metal with long-term stability that could improve hydrogen production efficiency without the use of platinum by overcoming the durability issue of non-platinum catalysts.

The research team injected a small amount of titanium (Ti) into molybdenum phosphide (MoP), a low-cost transition metal compound, through a spray pyrolysis process. Because it is inexpensive and relatively easy to handle, molybdenum is used as a catalyst in energy conversion and storage devices, but its main weakness is that it is vulnerable to oxidation and therefore corrodes easily.

The team found that the electronic structure of each material was completely restructured during the synthesis process, giving the catalyst the same level of hydrogen evolution reaction (HER) activity as a platinum catalyst. The restructured electronic structure also addressed the corrosion problem, improving durability 26-fold compared to existing transition metal-based catalysts. This is expected to greatly accelerate the commercialization of non-platinum catalysts.

Dr. Yoo of KIST said, "This study is significant in that it improved the stability of a transition metal catalyst-based water electrolysis system, which had been its biggest limitation. I hope that this study, which boosted the hydrogen evolution reaction efficiency of the transition metal catalyst to the level of platinum catalysts and at the same time improved its stability, will contribute to earlier commercialization of eco-friendly hydrogen energy production technology."

Credit: 
National Research Council of Science & Technology

Groundbreaking study finds activator of magnesium dynamics in the body

image: This illustration depicts the exercise-induced surge of the metabolite lactate (seen in red, black and white) from skeletal muscle (shown as a factory) into liver cells. The lactate surge elicits release of magnesium ions (seen as green balls) from cellular storehouses called the endoplasmic reticulum (depicted as a rock outcrop in the water). The magnesium ions release as waves that are funneled into energy centers called mitochondria (depicted as a boat) through a transporter called Mrs2 (depicted as fishermen).

Image: 
Cover image by Sarah Bussey. Concept by Travis Madaris.

SAN ANTONIO, Texas, USA - Researchers from The University of Texas Health Science Center at San Antonio (UT Health San Antonio) have solved the 100-year-old mystery of what activates magnesium ions in the cell. The discovery is expected to be a springboard for future development of novel drugs to treat cardiovascular disease, metabolic disorders such as diabetes, and other diseases.

Reporting Thursday (Oct. 8) in Cell, scientists in the Joe R. and Teresa Lozano Long School of Medicine at UT Health San Antonio said the magnesium activator is a metabolite called lactate, which is elevated in the blood during intense exercise and in many diseases, including heart disease, diabetes, sepsis and cancer.

"Lactate is a signal that - like a light switch - turns on magnesium ions," said lead author Madesh Muniswamy, PhD, professor of cardiology in the Long School of Medicine. "On lactate's signal, the ions rush out of cellular storehouses called the endoplasmic reticulum."

The team made a second discovery: A protein called Mrs2 transports the released magnesium ions into cell powerhouses known as mitochondria. These power plants generate ATP, which is the energy currency fueling all the processes in the body.

"We believe this loop is essential for health," said study coauthor W. Brian Reeves, MD, chairman of the Department of Medicine at UT Health San Antonio. "If there is a problem with magnesium routing, impairments ensue, such as the diminished mitochondrial function and poor energy production observed in Type 2 diabetes or severe infections."

IP3, the activator for calcium ions, was discovered in 1984. Since that time, the calcium field has grown in monumental fashion, whereas magnesium has remained a riddle, said coauthor Karthik Ramachandran, PhD, postdoctoral fellow in the Muniswamy laboratory.

Coauthor Travis Madaris, a graduate student on the research team, said, "As a student in the lab, this discovery is exciting because it lays out a pathway for multiple publications while I'm in this lab, and most importantly, it can lead to many future discoveries to improve human health."

Madaris is supported by a predoctoral research training fellowship awarded by the National Institutes of Health.

Summing up the discovery, Dr. Muniswamy said: "Magnesium is essential for life. It's in our blood. It's been implicated in and used as a treatment for a variety of diseases, including migraines, cardiovascular diseases, diabetes and preeclampsia. But to take the next step forward, we needed to understand the dynamics of magnesium in our bodies. With this finding, we believe we have laid out one of the pillars of support that the scientific world needed."

Credit: 
University of Texas Health Science Center at San Antonio

New measurements of the solar spectrum verify Einstein's theory of General Relativity

image: Artistic representation of the Sun, the Earth and the Moon (not to scale) with the space-time curvature of Einstein's General Relativity over the spectrum of sunlight reflected from the Moon (in colors from blue to red). The spectrum is taken with the HARPS instrument and calibrated with the LFC.

Image: 
Gabriel Pérez Díaz, SMM (IAC).

This work, which verifies one of the predictions of Einstein's General Relativity, is to be published in the journal Astronomy & Astrophysics.

The General Theory of Relativity, published by Albert Einstein between 1911 and 1916, introduced a new concept of space and time, by showing that massive objects cause a distortion in space-time which is felt as gravity. In this way, Einstein's theory predicts, for example, that light travels in curved paths near massive objects, and one consequence is the observation of the Einstein Cross, four different images of a distant galaxy which lies behind a nearer massive object, and whose light is distorted by it.

Other well known effects of General Relativity are the observed gradual change in Mercury's orbit due to space-time curvature around the "massive" Sun, or the gravitational redshift, the displacement to the red of lines in the spectrum of the Sun due to its gravitational field.

The gravitational redshift is an important effect for satellite navigation systems such as GPS, which would not work if General Relativity were not factored into their equations. The effect depends on the mass and radius of an astronomical object; even though it is larger for the Sun than for the Earth, it is still difficult to measure in the solar spectrum.

In 1920, Einstein wrote: "For the Sun, the theoretical redshift predicted is approximately two millionths of the wavelength. Whether this effect really exists is an open question, and astronomers are currently working hard to resolve it. For the Sun, its existence is difficult to judge because the effect is so small".
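The figure Einstein quotes follows from a one-line estimate: to first order, the fractional wavelength shift of light escaping a body is GM/(Rc²). A quick sketch (the constants are standard reference values, not numbers from the article):

```python
# Order-of-magnitude check of the figure in Einstein's quote, using
# standard physical constants (an illustration, not the paper's
# calibration procedure).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m
c = 2.998e8        # speed of light, m/s

# First-order gravitational redshift for light escaping the Sun:
# Delta(lambda)/lambda ~= G*M / (R*c^2)
z = G * M_sun / (R_sun * c**2)
print(f"z = {z:.2e}")          # ~2.1e-6: about two millionths of the wavelength

# Expressed as an equivalent Doppler velocity, v = c*z
print(f"v = {c * z:.0f} m/s")  # a shift of a few hundred metres per second
```

The result, roughly 2.1 parts per million, is the "two millionths of the wavelength" Einstein mentions; as a velocity it is a shift of about 636 m/s, which is why a precision of "a few metres per second" suffices to verify the prediction.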

To measure it, the scientists used observations of the solar spectrum reflected from the Moon, obtained with the HARPS (High Accuracy Radial-velocity Planet Searcher) instrument and calibrated with the new technology of the laser frequency comb.

"Combining the precision of the HARPS instrument with the laser frequency comb, we have been able to measure with high accuracy the position of the iron lines in the solar spectrum", explains Jonay González Hernández, a Ramón y Cajal researcher at the IAC and first author of the article. "This has enabled us to verify one of the predictions of Einstein's Theory of General Relativity, the gravitational redshift, to a precision of just a few metres per second".

"New measurements with the laser frequency comb attached to the ESPRESSO spectrograph, on the 8.2 m VLT telescopes, would allow us to improve these measurements", adds Rafael Rebolo, a researcher and the Director of the IAC and a coauthor of the article.

Credit: 
Instituto de Astrofísica de Canarias (IAC)

High intensity training best for older people

video: A new study shows that high-intensity interval training provided the most health benefits for people aged 70-79. The five-year-long study randomly divided healthy participants into three different training groups when it started in 2012. The study was published in The BMJ.

Image: 
Cardiac Exercise Research Group, Norwegian University of Science and Technology

"First of all, I have to say that exercise in general seems to be good for the health of the elderly. And our study results show that on top of that, training regularly at high intensity has an extra positive effect," says Dorthe Stensvold.

Stensvold is a professor in the Cardiac Exercise Research Group (CERG) at the Norwegian University of Science and Technology (NTNU) and has been looking forward to sharing the results from the Generation 100 study for a while now.

Researchers, healthcare professionals and individuals around the world are eager to learn the answer to the question: Can exercise really give older people a longer and healthier life? Generation 100 is the first major study that can tell us that, and Stensvold has encouraging news.

"Among 70-77-year-olds in Norway, about 90% can be expected to survive the next five years. In the Generation 100 study, more than 95% of the 1500 participants survived!" she said.

The study results have been published in the leading medical journal The BMJ.

The Generation 100 study is a randomized controlled study, meaning that all participants were divided completely at random into three different training groups when the study started in 2012. This design makes it possible to draw conclusions about cause and effect.

One group was assigned to high-intensity training intervals according to the 4X4 method twice a week, while group two was instructed to train at a steady, moderate intensity for 50 minutes two days a week. The participants could choose whether they wanted to train on their own or participate in group training with instructors.

The third group - the control group - was advised to exercise according to the Norwegian health authorities' recommendations. This group was not offered organized training under the auspices of Generation 100, but was called in for regular health checks and fitness assessments.

"Both physical and mental quality of life were better in the high-intensity group after five years than in the other two groups. High-intensity interval training also had the greatest positive effect on fitness," says Stensvold.

But does this kind of exercise prolong life to a greater extent than moderate exercise?

"In the interval training group, 3% of the participants had died after five years. The percentage was 6% in the moderate group. The difference is not statistically significant, but the trend is so clear that we believe the results give good reason to recommend high-intensity training for the elderly," says Stensvold.

Among the participants in the control group, 4.5% had died after five years.

"One challenge in interpreting our results has been that the participants in the control group trained more than we envisioned in advance. One in five people in this group trained regularly at high intensity and ended up, on average, doing more high-intensity training than the participants in the moderate group," says Stensvold.

This could also explain why this group ended up in between the other two groups in terms of survival.

"You could say that this is a disadvantage, as far as the research goes. But it may tell us that an annual fitness and health check is all that's needed to motivate older people to become more physically active. In that case, it's really good news," says Stensvold.

As to the question of whether this study offers definitive proof that exercise prolongs life, Stensvold says, "I'd like to answer with a clear and unequivocal yes, because we believe that this is true. But training is probably not the only reason why so few of the Generation 100 participants died compared to what's expected in this age group. The people who signed up to participate in Generation 100 probably had high training motivation to begin with. They also started with a relatively high level of activity, and most of them considered themselves to be in good health."

Stensvold points out that the participants in all three of the Generation 100 study groups managed to maintain their fitness levels throughout the five-year period. That's quite unique for people in this age group, according to physician and PhD candidate Jon Magne Letnes.

"Normally we see a drop in fitness of 20% over a ten-year period for people in their 70s. The fact that the participants in Generation 100 have managed to maintain their strong fitness levels from start to finish indicates that all three groups were more physically active than is usual for this age group," he says.

Letnes, who like Stensvold is affiliated with CERG at NTNU, refers to his own study, which was published two weeks ago. It contains information on 1500 healthy men and women who tested their fitness level twice, ten years apart, in connection with the Nord-Trøndelag Health Study (HUNT3).

"We found that age has the least effect on fitness level for people who exercise regularly at high intensity. This group had a drop in fitness of 5% over ten years. By comparison, fitness levels dropped by 9% among individuals who exercised regularly but not at high intensity. Those who were physically inactive lost as much as 16% of their physical conditioning over ten years," says Letnes.

The decline in fitness was greater among the elderly than in younger people. Those who maintained their conditioning best also had the healthiest status when it came to risk factors for lifestyle diseases and poor health.

"Blood pressure, waist measurement, cholesterol and resting heart rate increased less in people who maintained their conditioning than in those who had a larger drop in fitness figures," Letnes says.

Stensvold believes that the results from Letnes' research support the most important findings in the Generation 100 study.

"In Generation 100, the high-intensity training increased participants' conditioning the most after the first, third and fifth years. We know that better fitness is closely linked to lower risk of premature death, so this improvement may explain why the high-intensity group apparently had the best survival rate," she says.

She ends by saying, "By high intensity we mean training that gets you really sweaty and out of breath. Now our hope is that the national recommendations for physical activity will be modified to encourage older people even more strongly to do high intensity training - either as their only form of exercise or to supplement more moderate training."

Credit: 
Norwegian University of Science and Technology

A new species of Darwin wasp from Mexico named in observance of the 2020 quarantine period

image: Holotype specimen of the newly described species of parasitic Darwin wasp Stethantyx covida.

Image: 
Andrey I. Khalaim

Scientists at the Autonomous University of Tamaulipas (UAT) in Mexico recently discovered five new species of parasitoid wasps in Mexico, but the name of one of them sounds a bit weird: covida. Why this name?

In fact, the reason is quite simple: the team of Andrey Khalaim (also a researcher at the Zoological Institute of the Russian Academy of Sciences in Saint Petersburg, Russia) and Enrique Ruíz Cancino discovered the new-to-science species during the 2020 global quarantine period imposed due to the COVID-19 pandemic. Their findings are described in a newly published research article in the peer-reviewed, open-access scientific journal ZooKeys.

"We thought that it was a good idea to remember this extraordinary year through the name of one remarkable species of Darwin wasp found in seven Mexican States (including Tamaulipas, where the UAT campus is located) and also Guatemala," explain the scientists.

The new species, which goes by the official scientific name Stethantyx covida, belongs to the Darwin wasp family Ichneumonidae, one of the most species-rich insect families, which comprises more than 25,000 species worldwide.

"Darwin wasps are abundant and well-known almost everywhere in the world because of their beauty, gracility, and because they are used in biological control of insect pests in orchards and forests. Many Darwin wasp species attack the larvae or pupae of butterflies and moths. Yet, some species are particularly interesting, as their larvae feed on spider eggs and others, even more bizarre, develop on living spiders!" further explain the authors of the new study.

Stethantyx covida is a small wasp that measures merely 3.5 mm in length. It is predominantly dark in colour, whereas parts of its body and legs are yellow or brown. It is highly polished and shining, and the ovipositor of the female is very long and slender.

Along with Stethantyx covida, the authors also described four other Mexican species of Darwin wasps from three different genera (Stethantyx, Meggoleus, Phradis), all belonging to the subfamily Tersilochinae. Some tersilochines are common on flowers in springtime. While the majority of them are parasitoids of larvae of various beetles, some Mexican species attack sawflies, inhabiting the forests.

Credit: 
Pensoft Publishers

An electrical trigger fires single, identical photons

image: A map shows the intensity and locations of photons emitted from a thin film material while a voltage is applied.

Image: 
Berkeley Lab

Secure telecommunications networks and rapid information processing make much of modern life possible. To provide more secure, faster, and higher-performance information sharing than is currently possible, scientists and engineers are designing next-generation devices that harness the rules of quantum physics. Those designs rely on single photons to encode and transmit information across quantum networks and between quantum chips. However, tools for generating single photons do not yet offer the precision and stability required for quantum information technology.

Now, as reported recently in the journal Science Advances, researchers have found a way to generate single, identical photons on demand. By positioning a metallic probe over a designated point in a common 2D semiconductor material, the team, led by researchers at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), triggered photon emission electrically. The photon's properties can be adjusted simply by changing the applied voltage.

"The demonstration of electrically driven single-photon emission at a precise point constitutes a big step in the quest for integrable quantum technologies," said Alex Weber-Bargioni, a staff scientist at Berkeley Lab's Molecular Foundry who led the project. The research is part of the Center for Novel Pathways to Quantum Coherence in Materials (NPQC), an Energy Frontier Research Center sponsored by the Department of Energy, whose overarching goal is to find new approaches to protect and control quantum memory that can provide new insights into novel materials and designs for quantum computing technology.

Photons are one of the most robust carriers of quantum information and can travel long distances without losing their memory, or so-called coherence. To date, most established schemes for secure communication transfer that will power large-scale quantum communications require light sources to generate one photon at a time. Each photon must have a precisely defined wavelength and orientation. The new photon emitter demonstrated at Berkeley Lab achieves that control and precision. It could be used for transferring information between quantum processors on different chips, and ultimately scaled up to larger processors and a future quantum internet that links sophisticated computers around the world.

The photon emitter is based on a common 2D semiconductor material (tungsten disulfide, WS2), which has a sulfur atom removed from its crystal structure. That carefully located atomic imperfection, or defect, serves as a point where the photon can be generated through application of an electric current.

The challenge is not how to generate single photons, but how to make them truly identical and produce them on demand. Photon-emitting devices fabricated by lithography, like the semiconductor nanoparticles or "quantum dots" that light up QLED TVs, are subject to inherent variability, since no pattern-based system can be identical down to a single atom. Researchers working with Weber-Bargioni took a different approach by growing a thin-film material on a sheet of graphene. Any impurities introduced to the thin film's atomic structure are repeated and identical throughout the sample. Through simulations and experiments, the team determined just where to introduce an imperfection to the otherwise uniform structure. Then, by applying an electrical contact to that location, they were able to trigger the material to emit a photon and control its energy with the applied voltage. That photon is then available to carry information to a distant location.

"Single-photon emitters are like a terminal where carefully prepared but fragile quantum information is sent on a journey into a lightning-fast, sturdy box," said Bruno Schuler, a postdoctoral researcher at the Molecular Foundry (now a research scientist at Empa - the Swiss Federal Laboratories for Materials Science and Technology) and lead author of the work.

Key to the experiment is the gold-coated tip of a scanning tunnelling microscope that can be positioned exactly over the defect site in the thin film material. When a voltage is applied between the probe tip and the sample, the tip injects an electron into the defect. When the electron travels or tunnels from the probe tip, a well-defined part of its energy gets transformed into a single photon. Finally, the probe tip acts as an antenna that helps guide the emitted photon to an optical detector which records its wavelength and position.
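The link between the applied voltage and the photon's energy can be illustrated with basic physics: a tunneling electron carries at most the energy eV set by the bias, so hν = eV bounds the emitted photon. A generic sketch (the bias values and wavelengths here are illustrative, not figures from the study):

```python
# Relate an applied bias voltage to the shortest photon wavelength an
# injected electron can emit, via the quantum cutoff h*nu <= e*V.
# (Generic illustration; the numbers are not values from the study.)
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
e = 1.602e-19   # elementary charge, C

def cutoff_wavelength_nm(bias_volts):
    """Shortest wavelength (nm) a photon can have when the emitting
    electron carries energy e*V."""
    return h * c / (e * bias_volts) * 1e9

for v in (1.5, 2.0, 2.5):
    print(f"{v:.1f} V -> photons no shorter than {cutoff_wavelength_nm(v):.0f} nm")
```

Raising the bias shortens the attainable wavelength, which is the sense in which the photon's energy can be tuned with the applied voltage.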

By mapping the photons emitted from thin films made to include various defects, the researchers were able to pinpoint the correlation between the injected electron, local atomic structure, and the emitted photon. Usually, the optical resolution of such a map is limited to a few hundred nanometers. Thanks to extremely localized electron injection, combined with state-of-the-art microscopy tools, the Berkeley Lab team could determine where in the material a photon emerged with a resolution below 1 angstrom, about the diameter of a single atom. The detailed photon maps were crucial to pinpointing and understanding the electron-triggered photon emission mechanism.

"In terms of technique, this work has been a great breakthrough because we can map light emission from a single defect with sub-nanometer resolution. We visualize light emission with atomic resolution," said Katherine Cochrane, a postdoctoral researcher at the Molecular Foundry and a lead author on the paper.

Defining single-photon light sources in two-dimensional materials with atomic precision provides unprecedented insight critical to understanding how those sources work, and provides a strategy for making groups of perfectly identical ones. The work is part of NPQC's focus on exploring novel quantum phenomena in nonhomogenous 2D materials.

Two-dimensional materials are leading the way as a powerful platform for next-generation photon emitters. The thin films are flexible and easily integrated with other structures, and now provide a systematic way for introducing unparalleled control over photon emission. Based on the new results, the researchers plan to work on employing new materials to use as photon sources in quantum networks and quantum simulations.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Making bones is less difficult than was previously thought

The way in which bone formation occurs needs to be redefined. This was revealed by Radboud university medical center researchers and their colleagues in a publication in Nature Communications. It turns out that bone formation does not require complex biomolecules in collagen at all. This means that the production of bone substitutes and biomaterials is less complicated than was previously thought.

Bones are vital. They form the scaffolding material for our muscles, which allow us to move. They protect vulnerable organs like the brain and lungs, and they accommodate the stem cells that produce our blood and immune system. Without bones, people could not exist. This is despite the fact that bones are seemingly very simple things, consisting mainly of a mix of collagen and calcium phosphate.

Orchestrating biomolecules?

Ten years ago, Nico Sommerdijk published an article in Nature Materials in which he and his colleagues demonstrated that nothing was needed other than collagen (long strands of connective tissue) and tiny pieces of calcium phosphate. "Nothing was known about the way in which the crystals in the collagen actually developed into small plates and then arranged themselves in an imbricate formation," says Sommerdijk, who is a biochemical researcher at Radboud university medical center. "The generally accepted idea was that the bioactive molecules in this collagen strung the crystals together to form plates that were three nanometers thick, twenty nanometers wide and sixty-five nanometers long. A nanometer is one billionth of a meter."

Ten years ago, it was not possible to depict these processes on such a tiny scale. But Sommerdijk soon came to doubt the idea that had been embraced by his field. Sommerdijk: "When we replaced the calcium phosphate with other minerals, the same tiny plates were created. Bioactive molecules are very selective, which immediately made us question the prevailing idea." What followed was a scientific Sisyphean task that was carried out against the establishment, which after ten years has now ruled in his favor with a publication in Nature Communications.

Going against the tide

The first thing that Sommerdijk challenged was the idea of crystallized plates being stacked 'like a deck of cards'. Using an electron microscope, he recorded approximately one hundred two-dimensional (2D) images of bone at different angles, which were combined into three-dimensional (3D) images that showed something entirely different. Although the plates all lay in the longitudinal direction of the collagen, they were not neatly stacked against each other like playing cards: newly formed plates lay against each other at varying angles.

Together with the expert who first reported on the structure of collagen in 2006 and a specialist who studies crystallization in confined spaces, the function of the collagen was meticulously investigated. Collagen appeared to have narrow cavities in the length direction, which were infiltrated by the calcium phosphate that then started to crystallize. "This wasn't happening under the guidance of bioactive molecules in the collagen, but was a blind process that was dominated by the laws of physical chemistry", says Sommerdijk. "Calcium phosphate that crystallized in the length direction had a selection advantage, precisely because of these narrow collagen cavities. As a result, it was only very thin needles that were initially created in such a cavity. This needle then grew out into a small plate of calcium phosphate that pushed the surrounding collagen away."

The basis of bone formation

It is not possible to closely monitor the process of bone growth in humans. Sommerdijk, who works closely with cell biologist Anat Akiva at Radboud university medical center, has been using a model in which bone growth can be simulated outside an organism. This allows him to see the process as it gradually takes place. And the final result is similar to what he has found in the human bone material that frequently becomes available after surgery. "This has consequently allowed us to gradually provide a new explanation for the way in which our bones are formed, and how they grow and harden."

This fundamental research also has consequences from the practical viewpoint, claims Akiva. "If the growth of crystals need not take place in a matrix with extremely complex and precious biomolecules, but can also simply be done in channels made of inexpensive polymer, this will drastically simplify the process of making biomaterials and bone substitutes. And if we can do this without having to sacrifice the properties of these materials it means that our results will also be valuable to the patient."

ERC Advanced Grant

The article in Nature Communications should overturn the perception of bone formation once and for all. But the process of bone formation is more complex than this, says Sommerdijk: "Bone is not only formed within, but also outside the collagen. Various proteins seem to play a role in this process." Sommerdijk will be using an ERC Advanced Grant of 3.5 million euros that he previously received in order to gain more insight into this.

Credit: 
Radboud University Medical Center

Siberian scientists identify the most promising Russian forest products

A team of scientists from Siberian Federal University evaluated the competitiveness of Russian forest industry products by analyzing international trade data from different regions of the country and comparing it to the data from other markets. The study was published in the Forest Policy and Economics journal and supported by the Russian Science Foundation (project no. 19-18-00145).

The level of forest industry development in Russia varies greatly from region to region. It depends on the existence of large forest resources and production facilities, climate conditions, and proximity to European or Asian markets. In their work, the researchers analyzed the impact of these factors on the foreign trade of forest products in Russian regions.

The research design was based on David Ricardo's theory of comparative advantage. According to it, to succeed in international trade a country needs a relative advantage in the production of certain goods and should specialize in their export. The team calculated comparative advantage indices for Russia and its regions. Specifically, the researchers used the Balassa index, which shows the ratio between a product's share in national and international exports in a given year, as well as the Vollrath index, which covers not only export but also import flows.
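As a toy illustration, the Balassa index can be computed from trade data in a few lines. The export figures below are invented for the example, not taken from the study.

```python
# Balassa (revealed comparative advantage) index:
# RCA = (product's share in the country's exports) /
#       (product's share in world exports).
# Values above 1 indicate a revealed comparative advantage.

def balassa_index(country_exports, world_exports, product):
    country_share = country_exports[product] / sum(country_exports.values())
    world_share = world_exports[product] / sum(world_exports.values())
    return country_share / world_share

# Hypothetical export values (millions of USD), for illustration only.
russia = {"sawn_timber": 40, "paper": 10, "machinery": 50}
world = {"sawn_timber": 200, "paper": 300, "machinery": 1500}

rca = balassa_index(russia, world, "sawn_timber")
# Sawn timber is 40% of the toy country's exports but only 10% of world
# exports, so RCA = 0.4 / 0.1 = 4, a strong comparative advantage.
print(round(rca, 2))
```

The Vollrath index extends this idea by computing an analogous ratio for imports and subtracting it, so that a country that both exports and imports a product heavily gets a smaller advantage score.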

"In our work, we used extensive data about international transactions that we took from open databases: UN Comtrade and the Federal Customs Service of Russia. This allowed us to study competitiveness on the level of individual product types, not just aggregated product groups," said Roman Gordeev, a senior lecturer at the Department for Social and Economic Planning and a junior researcher at the Laboratory for Environmental and Resource Economics, SFU.

According to the research, the regions in the north-west of Russia are the most successful in forest product trade. The biggest paper and cellulose production facilities are located in this area, including JSC 'Mondi Syktyvkar' (the Komi Republic), 'International Paper' (Leningrad Region), and JSC 'Arkhangelsk Pulp and Paper Mill' (Arkhangelsk Region). Siberia and the Far East, with massive forest resources but undeveloped wood processing infrastructure, also play a special role in Russian forestry.

"Russia has certain comparative advantages in trading raw and semi-finished wood products as well as semi-finished paper products. The most competitive Russian products in international markets are wood fuel, raw timber, sawn timber, plywood, and various types of boards," concluded Roman Gordeev.

The results of the study can be used to develop a new forest policy in Russia with respect to the revealed trade patterns and comparative advantages of each region.

Credit: 
Siberian Federal University

New class of highly effective inhibitors protects against neurodegeneration

image: Interaction of interface inhibitor compound 8 within the binding pocket of the protein contact surface.

Image: 
Hilmar Bading

Neurobiologists at Heidelberg University have discovered how a special receptor at neuronal junctions that normally activates a protective genetic programme can lead to nerve cell death when located outside synapses. Their fundamental findings on neurodegenerative processes simultaneously led the researchers at the Interdisciplinary Center for Neurosciences (IZN) to a completely new principle for therapeutic agents. In their experiments on mouse models, they discovered a new class of highly effective inhibitors for protecting nerve cells. As Prof. Dr Hilmar Bading points out, this novel class of drugs opens up - for the first time - perspectives to combat currently untreatable diseases of the nervous system. The results of this research were published in Science.

The research by Prof. Bading and his team is focused on the so-called NMDA receptor. This receptor is an ion channel protein that is activated by a biochemical messenger: the neurotransmitter glutamate. It allows calcium to flow into the cell. The calcium signal sets in motion plasticity processes in the synapse but also propagates into the cell nucleus, where it activates a protective genetic programme. Glutamate-activated NMDA receptors located in the junctions of the nerve cells have a key function in the brain, contributing to learning and memory processes as well as neuroprotection. But the same receptors are also found outside of synapses. These extra-synaptic NMDA receptors pose a threat because their activation can lead to cell death. Normally, however, efficient cellular uptake systems for glutamate make sure that these receptors are not activated and nerve cells remain undamaged.

This situation can change dramatically in the presence of disease. If, for example, parts of the brain are not supplied with sufficient oxygen after a stroke, disruptions in circulation negate the glutamate uptake systems. The glutamate level outside synapses increases, thereby activating the extra-synaptic NMDA receptors. The result is nerve cell damage and death accompanied by restrictions in brain function. Increased glutamate levels outside the synapses occur not only during circulatory disturbances of the brain. "The evidence suggests that the toxic properties of extra-synaptic NMDA receptors play a central role in a number of neurodegenerative diseases," explains Prof. Bading. According to the scientist, this applies, in particular, to Alzheimer's disease and Amyotrophic Lateral Sclerosis with its resulting muscle weakness and muscle wasting as well as retinal degeneration, and possibly even brain damage after infections with viruses or parasites.

While glutamate-activated NMDA receptors inside neuronal junctions help build up a protective shield, outside synapses they change from Dr Jekyll into Mr Hyde. "Understanding why extra-synaptic NMDA receptors lead to nerve cell death is the key to developing neuroprotective therapies," continues Prof. Bading. That is precisely where the Heidelberg researchers are focusing their efforts. In their experiments on mouse models, they were able to demonstrate that the NMDA receptors found outside synapses form a type of "death complex" with another ion channel protein. This protein, called TRPM4, has a variety of functions in the body, with roles in the cardiovascular system and immune responses. According to the latest findings by Hilmar Bading and his team of researchers, TRPM4 confers toxic properties on extra-synaptic NMDA receptors.

Using molecular and protein biochemical methods, the scientists identified the contact surfaces of the two interacting proteins. With this knowledge, they used a structure-based search to identify substances that might disrupt this very bond, thereby dismantling and inactivating the "death complex". This new class of inhibitors - which the Heidelberg researchers call "interface inhibitors" because they disrupt the bond formed at the contact surfaces between the extra-synaptic NMDA receptors and TRPM4 - proved to be extremely effective protectors of nerve cells. "We're working with a completely new principle for therapeutic agents here. The interface inhibitors give us a tool that can selectively remove the toxic properties of extra-synaptic NMDA receptors," explains Prof. Bading.

Prof. Bading and his team were already able to demonstrate the efficacy of the new inhibitors in mouse models of stroke or retinal degeneration. According to the Heidelberg researcher, there is good reason to hope that such interface inhibitors - administered orally as broad-spectrum neuroprotectants - offer treatment options for currently untreatable neurodegenerative diseases. "However, their possible approval as pharmaceutical drugs for human use will take several more years because the new substances must first successfully pass through a number of preclinical and clinical testing phases."

Credit: 
Heidelberg University

Dietary migration of Impala rivals the geographical migration of Serengeti wildebeest

image: The seasonal dietary migration of Impala in the Kruger National Park in South Africa rivals the geographical migration of wildebeest in the Serengeti National Park in Tanzania.

Image: 
Gareth Hempson/Wits University

African savannas are renowned for their huge diversity of wildlife, yet some animal species are much more abundant than others. What causes these differences?

For herbivore species - plant-eating animals like antelope, zebra and elephants - the challenge lies in both obtaining enough food to eat throughout the year, while also avoiding predation by carnivores.

One way to obtain enough food - and which works extremely well in places like the Serengeti - is to migrate over long distances and track the areas where the best food is available through the seasonal cycle. This works best for grass eating or 'grazer' species such as wildebeest and zebra.

On the other hand, being extremely large like an elephant greatly reduces predation risk, and a big body also means being able to eat almost anything, because there is plenty of time to digest food as it moves through a long gut system.

So why then are impala by far the most abundant herbivore in a place like the Kruger Park? These are animals that do not migrate, and they are also only medium-sized at best.

Recent research published in Science Advances by Professor Carla Staver from Yale University and Dr Gareth Hempson from University of the Witwatersrand has shed new light on this question.

"The key insight emerging from our research is that species like impala do actually migrate, although not in the sense you would expect," says Dr Hempson. "For impala, the migration they undertake is a 'dietary migration', where they switch from eating mostly grass in the wet season, to eating more tree leaves or 'browse' during the dry season. This 'mixed feeding' strategy makes a huge amount of sense, because grasses tend to be higher quality and more abundant food in the wet season, but trees tend to stay greener much longer into the dry season and become the better food source then."

Theoretical models show that for this 'mixed feeding' strategy to work, the costs of switching between food sources must not be too high. For example, the mouth shape of a hippo is excellent for consuming lots of grass quickly, but terrible for picking green leaves out of a thorn tree. Similarly, giraffe are better off sticking to browsing from the treetops. The models also suggest that for the mixed feeding strategy to be advantageous, the 'best' food type must change from season to season. This is typically the case for grass versus browse in savannas.
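The logic of these models can be sketched with a toy calculation: a mixed feeder outperforms specialist grazers and browsers when the best food type flips between seasons and the cost of switching is small. All payoff values and the switching cost below are invented for illustration, not taken from the study.

```python
# Toy seasonal-payoff model of the mixed-feeding strategy.
# Each entry is an illustrative food-intake payoff for one season.
wet_season = {"grass": 1.0, "browse": 0.4}   # grass is best when wet
dry_season = {"grass": 0.3, "browse": 0.8}   # browse stays greener when dry

switch_cost = 0.1  # paid by the mixed feeder each time it changes diet

# Specialists eat the same food year-round.
grazer = wet_season["grass"] + dry_season["grass"]
browser = wet_season["browse"] + dry_season["browse"]

# The mixed feeder takes the best food each season, minus two switches
# per annual cycle (grass -> browse -> grass).
mixed = wet_season["grass"] + dry_season["browse"] - 2 * switch_cost

print(mixed > grazer and mixed > browser)
```

Raising `switch_cost`, or making one food type best in both seasons, tips the comparison back toward the specialists, which is the condition the models identify.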

Animal census data from 18 protected areas in East and southern Africa show that being a mixed feeder does indeed have a strong positive effect on species abundances in savannas, regardless of body size. In fact, the abundance of mixed feeders is rivalled only by migratory grazers, suggesting that switching your diet from season to season - either in terms of what you eat, or where you eat - is fundamental to achieving high population sizes.

But what of the benefits of being large-bodied like an elephant? This research suggests that part of the reason why elephants can become abundant is in fact their broad, mixed diets, but the anti-predation benefits of being large almost certainly still stand. However, outsized human poaching pressure on very large species such as rhino and elephant means that we have little idea of just how abundant they really can be.

The findings of this research have important implications for the management and conservation of increasingly threatened herbivore populations, and for our understanding of the ecology of savannas more broadly. For example, land use change and the shrinking space available for animal movement means that mixed feeders - who can switch their diet in situ - are likely to fare better in future than species that historically would have moved long distances to obtain different food sources. Mixed feeders may also come to play an increasingly important role in mitigating woody encroachment - the rapid increase in tree density in many savannas linked to rising carbon dioxide levels - thus maintaining more open, grassy spaces to the benefit of grazing species.

"So, next time you're in the Kruger Park and becoming bored of seeing only impala, take a moment to reflect that you are witnessing the outcome of a migration that rivals that of the Serengeti wildebeest - an extraordinary seasonal dietary migration from grass to browse," says Dr Hempson.

Credit: 
University of the Witwatersrand

Mechanical forces of biofilms could play role in infections

The vast majority of bacteria in the world live on surfaces by forming structures called "biofilms". These communities host thousands to millions of bacteria of different types, and are so biologically complex and active that scientists describe them as "cities".

Biofilms are in fact the preferred lifestyle of bacteria. They form them by attaching to each other on surfaces as diverse as the ocean floor, internal organs and teeth: dental plaque is a common example of a biofilm. But biofilms also cause chronic infections, e.g. the opportunistic pathogen Pseudomonas aeruginosa that forms biofilms in the lungs of cystic fibrosis patients.

Generally speaking, the interaction between biofilm and host is thought to be biochemical. But there is some evidence to suggest that the physical, mechanical interplay between them might be just as important - and overlooked as an influence on the host's physiology. For example, how do biofilms form on soft, tissue-like materials?

This is the question that a team of scientists led by Alex Persat at EPFL have ventured to answer. Publishing in the journal eLife, they show that biofilms of two major pathogenic bacteria, Vibrio cholerae and Pseudomonas aeruginosa, can cause large structural deformations on soft materials like hydrogels.

When bacteria form biofilms, they attach onto a surface and begin to divide. At the same time, they bury themselves inside a mix of polysaccharides, proteins, nucleic acids, and debris from dead cells. This mix forms a sticky substance that is called the "EPS" matrix (EPS stands for "extracellular polymeric substances").

As individual bacteria grow inside the EPS, they stretch or compress it, exerting mechanical stress. Together, the growth of the biofilm and the elastic properties of the EPS matrix generate internal mechanical stress.

The scientists grew biofilms on soft hydrogel surfaces and measured how they exerted forces upon variations of EPS components. This revealed that biofilms induce deformations by "buckling" like a carpet or a ruler. How big the deformations are depends on how stiff the "host" material is and on the composition of the EPS.

The researchers also found that V. cholerae biofilms can generate enough mechanical stress to deform and damage soft epithelial cell monolayers, like those that line the surface of our lungs and intestines. What this means is that the forces generated by growing biofilms might mechanically compromise the physiology of their host. In short, biofilms could promote a "mechanical" mode of infection, which might warrant a whole new approach to treatments.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Experimental glioblastoma therapy shows curative powers in mice models

Houston - (Oct. 8, 2020) - Houston Methodist researchers found that mice harboring human glioblastoma tumors in their brains had greatly enhanced survival and weight gain when given a newly developed prodrug. This mitochondrial-targeted prodrug - an inactive compound that cancer cells selectively metabolize to produce an active toxic drug - also greatly improves outcomes when coupled with standard therapies of radiation and/or chemotherapy. The drug selectively targets and destroys the DNA inside the glioblastoma cell mitochondria (the energy factory of the cancer cell) leaving normal cells intact.

In an Oct. 8 study published online in Molecular Cancer Therapeutics, a journal of the American Association for Cancer Research, investigators used a second-generation prodrug called MP-Pt(IV) to target the deadly cells of glioblastoma tumors, a brain cancer that is almost always fatal and has no cure. Life expectancy in humans with glioblastoma ranges from a few months to two years.

Human glioma cells were removed from patients during surgical excision and isolated within 10 minutes of removal. The glioblastoma cells were injected into the brains of 48 female mice for a 300-day study. The prodrug was well tolerated and, when given on its own, extended survival more than threefold. When combined with standard chemotherapy and radiotherapy, however, the drug was effectively curative, allowing 90% of the mice to survive, thrive and gain weight during the 10 months of observation.

"This study tells us that adding MP-Pt(IV) to a chemoradiotherapy protocol could address a critical need in glioblastoma treatment," said David S. Baskin, M.D., FACS, FAANS, corresponding author and director of the Kenneth R. Peak Center for Brain and Pituitary Tumor Treatment in the Department of Neurosurgery at Houston Methodist. "We now know that MP-Pt(IV) is an excellent candidate for preclinical development."

Credit: 
Houston Methodist

Deep-seabed mining lastingly disrupts the seafloor food web

image: Plough tracks are still clearly visible on the seafloor of the DISCOL area 26 years after the disturbance.

Image: 
ROV-Team/GEOMAR

The deep sea is far away and hard to envision. If imagined at all, it seems like a cold and hostile place. However, this remote habitat is directly connected to our lives, as it forms an important part of the global carbon cycle. Also, the deep seafloor is, in many places, covered with polymetallic nodules and crusts that arouse economic interest. There is a lack of clear standards to regulate their mining and set binding thresholds for the impact on the organisms living in affected areas.

Mining can reduce microbial carbon cycling, while animals are less affected

An international team of scientists around Tanja Stratmann from the Max Planck Institute for Marine Microbiology in Bremen, Germany, and Utrecht University, the Netherlands, and Daniëlle de Jonge from Heriot-Watt University in Edinburgh, Scotland, has investigated the food web of the deep seafloor to see how it is affected by disturbances such as those caused by mining activities.

For this, the scientists travelled to the so-called DISCOL area in the tropical East Pacific, about 3000 kilometres off the coast of Peru. Back in 1989, German researchers had simulated mining-related disturbances in this manganese nodule field, 4000 metres under the surface of the ocean, by ploughing a 3.5 km wide area of seabed with a plough-harrow. "Even 26 years after the disturbance, the plough tracks are still there", Stratmann described the site. Previous studies had shown that microbial abundance and density had undergone lasting changes in this area. "Now we wanted to find out what that meant for carbon cycling and the food web of this deep ocean habitat."

"We looked at all different ecosystem components and on all levels, trying to find out how they work together as a team", de Jonge explained who carried out the project as part of her Master's Thesis at the NIOZ Royal Netherlands Institute for Sea Research and the University of Groningen, The Netherlands. The scientists quantified carbon fluxes between living and non-living compartments of the ecosystem and summed them up as a measure of the "ecological size" of the system.

They found significant long-term effects of the 1989 mining simulation experiment. The total throughput of carbon in the ecosystem was significantly reduced. "Especially the microbial part of the food web was heavily affected, much more than we expected", said Stratmann. "Microbes are known for their fast growth rates, so you'd expect them to recover quickly. However, we found that carbon cycling in the so-called microbial loop was reduced by more than one third."

The impact of the simulated mining activity on higher organisms was more variable. "Some animals seemed to do fine, others were still recovering from the disturbance. The diversity of the system was thus reduced", said de Jonge. "Overall, carbon flow in this part of the food web was similar to or even higher than in unaffected areas."

A mined seafloor might be more vulnerable to climate change

The simulated mining resulted in a shift in carbon sources for animals. Usually, small fauna feed on detritus and bacteria in the seafloor. However, in the disturbed areas, where bacterial densities were reduced, the fauna ate more detritus. The possible consequences of this will be part of de Jonge's PhD thesis, which she has just started. "Future climate scenarios predict a decrease in the amount and quality of detritus reaching the seafloor. This shift in diet will thus be especially interesting to investigate in view of climate change", she said, looking forward to the upcoming work.

"You also have to consider that the disturbance caused by real deep-seabed mining will be much heavier than the one we're looking at here", she added. "Depending on the technology, it will probably remove the uppermost 15 centimeters of the sediment over a much larger area, thus multiplying the effect and substantially increasing recovery times."

More info:

Polymetallic nodules and crusts cover many thousands of square kilometres of the world's deep-sea floor. They contain mainly manganese and iron, but also the valuable metals nickel, cobalt and copper as well as some of the high-tech metals of the rare earths. Since these resources could become scarce on land in the future - for example, due to future needs for batteries, electromobility and digital technologies - marine deposits are economically very interesting. To date, there is no market-ready technology for deep-sea mining. However, it is already clear that interventions in the seabed have a massive and lasting impact on the affected areas. Studies have shown that many sessile inhabitants of the surface of the seafloor depend on the nodules as a substrate, and are still absent decades after a disturbance in the ecosystem. Also, effects on animals living in the seabed have been proven.

Credit: 
Max Planck Institute for Marine Microbiology

How an egg cell's "operating manual" sets the stage for fertility

image: An illustration of gene expression underlying wave 1 and wave 2 follicle production. Each dot on the diagram mathematically summarizes the gene expression of individual ovarian helper cells that surround developing egg cells in two-dimensional gene space. Developing cells fall into clusters indicated by a common color, and clusters are present in the ovary only at single developmental times (i.e. E14.5, E16.5, etc.) indicated for cells within the dashed zones. It can be seen that each time zone houses precisely two types of follicle cells, which were found to come from future wave 1 follicles (4, odd numbers >4) or wave 2 follicles (even numbers >4).

Image: 
Figure is courtesy of Allan Spradling and Wanbao Niu. Underlying image purchased from Shutterstock. Composite created by Navid Marvi.

Baltimore, MD-- Recently published work from Carnegie's Allan Spradling and Wanbao Niu revealed in unprecedented detail the genetic instructions that immature egg cells follow, step by step, as they mature into functionality. Their findings improve our understanding of how ovaries maintain a female's fertility.

The general outline of how immature egg cells are assisted by specific ovarian helper cells starting even before a female is born is well understood. But Spradling and Niu mapped the gene activity of thousands of immature egg cells and helper cells to learn how the stage is set for fertility later in life.

Even before birth, "germ" cells assemble a finite number of cell clusters called follicles in a female's ovaries. Follicles consist of an immature egg cell and some "helper" cells, which guide the egg through its maturation process. It is from a follicle that a mature egg cell bursts during ovulation.

"Follicles are slowly used up during a female's reproductive lifespan and menopause ensues when they run out. Understanding what it takes for follicles to form and develop successfully, helps us learn how damaged genes or adverse environmental factors, including a poor diet, can interfere with fertility," explained Spradling. "By documenting the follicle's genetic operating manual, problems in egg development that might lead to birth defects --as a result of mutations or due to bad nutrition-- can be better understood and reduced."

Spradling and Niu sequenced 52,500 mouse ovarian cells at seven stages of follicle development to determine the relative expression of thousands of genes and to characterize their roles.

The study also illuminated how mammalian ovaries produce two distinct types of follicles and Spradling and Niu were able to identify many differences in gene activity between them.

The first, called wave 1 follicles, are present in the ovary even before puberty. In mice, they generate the first fertile eggs; their function in humans is poorly understood, but they may produce useful hormones. The second type, called wave 2 follicles, are stored in a resting state but small groups are activated to mature during a female's hormonal cycle, ending in ovulation. The findings help clarify each type's roles.

Spradling and Niu's work and all its underlying data were published in Proceedings of the National Academy of Sciences.

"We hope our work will serve as a genetic resource for all researchers who study reproduction and fertility," concluded Spradling.

Credit: 
Carnegie Institution for Science

Vaporized metal in the air of an exoplanet

image: The top of the planet's atmosphere is heated to a blazing 2,500 degrees Celsius, hot enough to boil some metals.

Image: 
NASA, ESA, and G. Bacon (STScI)

WASP-121b is an exoplanet located 850 light years from Earth, orbiting its star in less than two days - a process that takes Earth a year to complete. WASP-121b is very close to its star - about 40 times closer than Earth to the Sun. This close proximity is also the main reason for its immensely high temperature of around 2,500 to 3,000 degrees Celsius. This makes it an ideal object of study to learn more about ultra-hot worlds.
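The quoted orbital distance and period are mutually consistent, as a rough check with Kepler's third law shows. The stellar mass used below (about 1.35 solar masses for WASP-121) is an assumed value, not given in the article.

```python
# Kepler's third law: P = 2*pi*sqrt(a^3 / (G*M)).
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

a = AU / 40            # ~40 times closer to its star than Earth to the Sun
M_star = 1.35 * M_SUN  # assumed mass of WASP-121, not from the article

period_s = 2 * math.pi * math.sqrt(a**3 / (G * M_star))
print(round(period_s / 86400, 2))  # orbital period in days: under two days
```

The same relation explains the extreme temperature only indirectly; the heating comes from the stellar flux at that distance, which scales as 1/a².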

Researchers led by Jens Hoeijmakers, first author of the study and postdoctoral research fellow at the National Centre of Competence in Research PlanetS at the Universities of Bern and Geneva, examined data that had been collected by the high-resolution HARPS spectrograph. They were able to show that a total of at least seven gaseous metals occur in the atmosphere of WASP-121b. The results were recently published in the journal Astronomy & Astrophysics.

Unexpectedly much going on in the atmosphere of exoplanet WASP-121b

WASP-121b has been extensively studied since its discovery. "The earlier studies showed that there is a lot going on in its atmosphere," explains Jens Hoeijmakers. And this despite the fact that astronomers had assumed that ultra-hot planets have rather simple atmospheres because not many complex chemical compounds can form in such blistering heat. So how did WASP-121b come to have this unexpected complexity?

"Previous studies tried to explain these complex observations with theories that did not seem plausible to me," says Hoeijmakers. The studies had suspected that molecules containing the relatively rare metal vanadium were the main cause of the complex atmosphere in WASP-121b. According to Hoeijmakers, however, this would only make sense if a more common metal, titanium, were missing in the atmosphere. So Hoeijmakers and his colleagues set out to find another explanation. "But it turned out that they were right," admits Hoeijmakers unequivocally. "To my surprise, we actually found strong signatures of vanadium in the observations." At the same time, however, titanium was missing. This in turn confirmed Hoeijmakers' assumption.

Vaporised metals

But the team made other, unexpected discoveries. In addition to vanadium, they newly discovered six other metals in the atmosphere of WASP-121b: iron, chromium, calcium, sodium, magnesium and nickel. "All metals evaporated as a result of the high temperatures prevailing on WASP-121b," explains Hoeijmakers, "thus ensuring that the air on the exoplanet consists of evaporated metals, among other things".

A new era in exoplanet research

Such detailed results allow researchers to draw conclusions about the chemical processes that take place on such planets, for example. This is a crucial skill for the not too distant future, when larger, more sensitive telescopes and spectrographs will be developed. These will allow astronomers to study the properties of smaller, cooler rocky planets similar to Earth. "With the same techniques we use today, instead of just detecting signatures of gaseous iron or vanadium, we will be able to focus on biosignatures, signs of life such as the signatures of water, oxygen and methane," says Hoeijmakers.

The extensive knowledge about the atmosphere of WASP-121b not only confirms the ultra-hot character of the exoplanet, but also underlines the fact that this field of research is entering a new era, as Hoeijmakers puts it: "After years of cataloguing what is out there, we are now no longer just taking measurements," explains the researcher, "but we are really beginning to understand what the data from the instruments show us. How planets resemble and differ from each other. In the same way, perhaps, that Charles Darwin began to develop the theory of evolution after characterizing countless species of animals, we are beginning to understand more about how these exoplanets were formed and how they work".

Credit: 
University of Bern