Tech

Supernovae twins open up new possibilities for precision cosmology

image: The upper left figure shows the spectra -- brightness versus wavelength -- for two supernovae. One is nearby and one is very distant. To measure dark energy, scientists need to measure the distance between them very accurately, but how do they know whether they are the same? The lower right figure compares the spectra -- showing that they are indeed "twins." This means their relative distances can be measured to an accuracy of 3 percent. The bright spot in the upper-middle is a Hubble Space Telescope image of supernova 1994D (SN1994D) in galaxy NGC 4526.

Image: 
Graphic: Zosia Rostomian/Berkeley Lab; photo: NASA/ESA

Cosmologists have found a way to double the accuracy of measuring distances to supernova explosions - one of their tried-and-true tools for studying the mysterious dark energy that is making the universe expand faster and faster. The results from the Nearby Supernova Factory (SNfactory) collaboration, led by Greg Aldering of the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), will enable scientists to study dark energy with greatly improved precision and accuracy, and provide a powerful crosscheck of the technique across vast distances and time. The findings will also be central to major upcoming cosmology experiments that will use new ground and space telescopes to test alternative explanations of dark energy.

Two papers published in The Astrophysical Journal report these findings, with Kyle Boone as lead author. Currently a postdoctoral fellow at the University of Washington, Boone is a former graduate student of Nobel Laureate Saul Perlmutter, the Berkeley Lab senior scientist and UC Berkeley professor who led one of the teams that originally discovered dark energy. Perlmutter was also a co-author on both studies.

Supernovae were used in 1998 to make the startling discovery that the expansion of the universe is speeding up, rather than slowing down as had been expected. This acceleration - attributed to the dark energy that makes up two-thirds of all the energy in the universe - has since been confirmed by a variety of independent techniques as well as with more detailed studies of supernovae.

The discovery of dark energy relied on using a particular class of supernovae, Type Ia. These supernovae always explode with nearly the same intrinsic maximum brightness. Because the observed maximum brightness of a supernova is used to infer its distance, the small remaining variations in intrinsic maximum brightness limited the precision with which dark energy could be tested. Despite 20 years of improvements by many groups, supernova studies of dark energy have until now remained limited by these variations.
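For context, the relation linking a supernova's observed (apparent) and intrinsic (absolute) maximum brightness to its distance is the standard distance modulus of textbook astronomy - not a formula specific to the SNfactory papers - and it shows why any residual scatter in intrinsic brightness feeds directly into the inferred distance:

```latex
% Distance modulus: m is the apparent magnitude, M the absolute (intrinsic) magnitude,
% and d_L the luminosity distance in parsecs.
m - M = 5 \log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right)
```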

Quadrupling the number of supernovae

The new results announced by the SNfactory come from a multi-year study devoted entirely to increasing the precision of cosmological measurements made with supernovae. Measurement of dark energy requires comparisons of the maximum brightnesses of distant supernovae billions of light-years away with those of nearby supernovae "only" 300 million light-years away. The team studied hundreds of such nearby supernovae in exquisite detail. Each supernova was measured a number of times, at intervals of a few days. Each measurement examined the spectrum of the supernova, recording its intensity across the wavelength range of visible light. An instrument custom-made for this investigation, the SuperNova Integral Field Spectrometer, installed at the University of Hawaii 2.2-meter telescope at Maunakea, was used to measure the spectra.

"We've long had this idea that if the physics of the explosion of two supernovae were the same, their maximum brightnesses would be the same. Using the Nearby Supernova Factory spectra as a kind of CAT scan through the supernova explosion, we could test this idea," said Perlmutter.

Indeed, several years ago, physicist Hannah Fakhouri, then a graduate student working with Perlmutter, made a discovery key to today's results. Looking at a multitude of spectra taken by the SNfactory, she found that in quite a number of instances, the spectra from two different supernovae looked very nearly identical. Among the 50 or so supernovae, some were virtually identical twins. When the wiggly spectra of a pair of twins were superimposed, to the eye there was just a single track. The current analysis builds on this observation to model the behavior of supernovae in the period near the time of their maximum brightness.

The new work nearly quadruples the number of supernovae used in the analysis. This made the sample large enough to apply machine-learning techniques to identify these twins, leading to the discovery that Type Ia supernova spectra vary in only three ways. The intrinsic brightnesses of the supernovae also depend primarily on these three observed differences, making it possible to measure supernova distances to the remarkable accuracy of about 3%.
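The published analysis is considerably more sophisticated than anything shown here; purely as an illustration of the general idea, the sketch below applies an off-the-shelf dimensionality reduction (PCA from scikit-learn, on synthetic data) to compress many-wavelength spectra into three numbers per supernova, echoing the finding that Type Ia spectra vary in only three ways. The array shapes and normalization are assumptions for the sketch, not the SNfactory pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: one maximum-light spectrum per row,
# flux sampled on a common wavelength grid (values here are synthetic noise).
rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 500))  # 200 supernovae x 500 wavelength bins

# Normalize each spectrum so overall brightness does not dominate the decomposition.
spectra = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

# Project onto three components, mirroring the reported result that only
# three modes of spectral variation matter.
pca = PCA(n_components=3)
coords = pca.fit_transform(spectra)

print(coords.shape)                   # (200, 3): three numbers per supernova
print(pca.explained_variance_ratio_)  # share of variance captured by each component
```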

Just as important, this new method does not suffer from the biases that have beset previous methods, seen when comparing supernovae found in different types of galaxies. Since nearby galaxies are somewhat different than distant ones, there was a serious concern that such dependence would produce false readings in the dark energy measurement. Now this concern can be greatly reduced by measuring distant supernovae with this new technique.

In describing this work, Boone noted, "Conventional measurement of supernova distances uses light curves - images taken in several colors as a supernova brightens and fades. Instead, we used a spectrum of each supernova. These are so much more detailed, and with machine-learning techniques it then became possible to discern the complex behavior that was key to measuring more accurate distances."

The results from Boone's papers will benefit two upcoming major experiments. The first experiment will be at the 8.4-meter Rubin Observatory, under construction in Chile, with its Legacy Survey of Space and Time, a joint project of the Department of Energy and the National Science Foundation. The second is NASA's forthcoming Nancy Grace Roman Space Telescope. These telescopes will measure thousands of supernovae to further improve the measurement of dark energy. They will be able to compare their results with measurements made using complementary techniques.

Aldering, also a co-author on the papers, observed that "not only is this distance measurement technique more accurate, it only requires a single spectrum, taken when a supernova is brightest and thus easiest to observe - a game changer!" Having a variety of techniques is particularly valuable in this field where preconceptions have turned out to be wrong and the need for independent verification is high.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Researchers develop artificial intelligence that can detect sarcasm in social media

image: Dr. Garibay is investigating ways to make artificial intelligence smarter when it comes to detecting and appropriately responding to human emotions.

Image: 
University of Central Florida

Computer science researchers at the University of Central Florida have developed a sarcasm detector.

Social media has become a dominant form of communication for individuals, and for companies looking to market and sell their products and services. Properly understanding and responding to customer feedback on Twitter, Facebook and other social media platforms is critical for success, but it is incredibly labor intensive.

That's where sentiment analysis comes in. The term refers to the automated process of identifying the emotion -- either positive, negative or neutral -- associated with text. Whereas much of artificial intelligence focuses on logical data analysis and response, sentiment analysis is about correctly identifying emotional communication. A UCF team developed a technique that accurately detects sarcasm in social media text.

The team's findings were recently published in the journal Entropy.

In effect, the team taught the computer model to find patterns that often indicate sarcasm, and combined that with teaching the program to correctly pick out cue words in sequences that were more likely to signal sarcasm. They trained the model by feeding it large data sets and then checked its accuracy.

"The presence of sarcasm in text is the main hindrance in the performance of sentiment analysis," says Assistant Professor of engineering Ivan Garibay '00MS '04PhD. "Sarcasm isn't always easy to identify in conversation, so you can imagine it's pretty challenging for a computer program to do it and do it well. We developed an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text."

The team, which includes computer science doctoral student Ramya Akula, began working on this problem under a DARPA grant that supports the organization's Computational Simulation of Online Social Behavior program.

"Sarcasm has been a major hurdle to increasing the accuracy of sentiment analysis, especially on social media, since sarcasm relies heavily on vocal tones, facial expressions and gestures that cannot be represented in text," says Brian Kettler, a program manager in DARPA's Information Innovation Office (I2O). "Recognizing sarcasm in textual online communication is no easy task as none of these cues are readily available."

This is one of the challenges Garibay's Complex Adaptive Systems Lab (CASL) is studying. CASL is an interdisciplinary research group dedicated to the study of complex phenomena such as the global economy, the global information environment, innovation ecosystems, sustainability, and social and cultural dynamics and evolution. CASL scientists study these problems using data science, network science, complexity science, cognitive science, machine learning, deep learning, social sciences and team cognition, among other approaches.

"In face-to-face conversation, sarcasm can be identified effortlessly using facial expressions, gestures, and tone of the speaker," Akula says. "Detecting sarcasm in textual communication is not a trivial task as none of these cues are readily available. Specially with the explosion of internet usage, sarcasm detection in online communications from social networking platforms is much more challenging."

Credit: 
University of Central Florida

UNH research: More than one way for animals to survive climate change

DURHAM, N.H.-- As climate change continues to drive rising temperatures, drier conditions and shifting precipitation patterns, adapting to new conditions will be critical for the long-term survival of most species. Researchers at the University of New Hampshire found that there is more than one genetic mechanism allowing animals to adapt to hotter, more desert-like surroundings and to survive without water. This is important not only for their survival but may also provide important biomedical groundwork for developing gene therapies to treat human dehydration-related illnesses, like kidney disease.

"To reference a familiar phrase, it tells us that there is more than one way to bake a cake," said Jocelyn Colella, a postdoctoral researcher in evolutionary biology. "In other words, there are several ways for animals to adapt to desert conditions and discovering this genetic flexibility offers a silver lining to all species that will increasingly be forced to acclimate to hotter, drier settings."

In their study, recently published in the Journal of Heredity, researchers compared the genetic mechanisms of three species of mice found in warm, dry areas: the cactus and canyon mice, both found predominantly in desert habitats, and the North American deer mouse, which can also be found in colder, wetter climates in the northern United States. The researchers hypothesized that similar genes in each species would be critical to surviving in desert environments. What they found instead was that each species used a different mechanism - different genes and functions producing the same adaptation. One species adapted through mutational genetic changes over time, and another used changes in gene expression, which can occur more quickly and may be the more efficient evolutionary route.

"We were excited by the findings because if our research had only found one gene that was critical to adapting to warmer, drier conditions it would suggest that it would be challenging for other animals to respond to climate change, but our work says there are multiple evolutionary options that enable desert survival," said Colella.

The findings could also provide foundational information for biomedical research in developing gene therapies for human kidney disease.

"Because mice are physiologically similar to humans, this type of evolutionary work offers important first steps toward identifying and understanding genes that control complex traits like dehydration, which can compromise human kidneys causing lifelong, irreparable damage," said Matt MacManes, associate professor of genome enabled biology.

Each year, millions of people around the world die of dehydration-related illness. Experts say even minor dehydration can compromise the kidneys, causing lifelong issues.

Credit: 
University of New Hampshire

Aluminum may affect climate change by increasing ocean's carbon sink capacity

image: Diagram of how aluminum may facilitate the uptake of iron and the utilization of dissolved organic phosphorus by marine phytoplankton

Image: 
ZHOU Linbin

Reducing net greenhouse gas emissions to zero as soon as possible and achieving "carbon neutrality" is the key to addressing global warming and climate change. The ocean is the largest active carbon pool on the planet, with huge potential to help achieve negative emissions by serving as a carbon sink.

Recently, researchers found that adding a small amount of aluminum to seawater, to reach concentrations in the range of tens of nanomolar (nM), can increase the net fixation of CO2 by marine diatoms and decrease the decomposition of the resulting organic carbon, thus improving the ocean's ability to absorb CO2 and sequester carbon in the deep ocean.

The study, published in Limnology and Oceanography on May 3, was conducted by a joint team led by Prof. TAN Yehui from the South China Sea Institute of Oceanology (SCSIO) of the Chinese Academy of Sciences and Prof. Peter G.C. Campbell from the Eau Terre Environnement Research Centre of the National Institute of Scientific Research, Canada.

According to the earlier "iron hypothesis", adding a small amount of iron to the iron-limited but nutrient-rich oceans could significantly promote the growth of marine phytoplankton (microalgae) and their absorption of CO2, and the consequent burial of organic matter in the ocean. However, the results of artificial iron fertilization experiments did not fully support the "iron hypothesis" and later studies suggested that ignoring the effects of aluminum and other elements may be the reason.

"In fact, natural iron fertilization, as caused by dust deposition, upwelling and hydrothermal venting, provides the ocean not only iron, but also aluminum and other elements. Aluminum concentrations in the upper ocean are usually one order of magnitude higher than those of iron," said Prof. TAN.

Prof. TAN's team and their collaborators found that aluminum may not only improve the utilization efficiency of iron and dissolved organic phosphorus by marine phytoplankton, thus enhancing carbon fixation in the upper ocean, but may also reduce the decomposition rate of biogenic organic carbon and enhance the export and sequestration of carbon in the deep ocean.

They also found a significant negative correlation between aluminum input to the Southern Ocean and atmospheric CO2 concentration over the past 160,000 years.

Based on their findings about aluminum, they improved the original "iron hypothesis" by proposing the "iron-aluminum hypothesis" to better explain the roles of the two elements in climate change.

In this study, the researchers used radiocarbon (14C) as a tracer to show that adding aluminum to seawater at trace concentrations (e.g., 40 nM) increased the net carbon fixation of marine diatoms by 10% to 30%.

More importantly, this study proved that environmentally relevant low concentrations of aluminum can reduce the daily decomposition rate of marine diatom-produced particulate organic carbon by 50% or more.

Calculations based on the new data suggest that adding aluminum at a concentration of 40 nM or lower to the ocean may increase the amount of particulate organic carbon exported to depths of 1,000 m and deeper by 1-3 orders of magnitude. This will significantly increase the ocean's carbon sink capacity and sequester carbon in the ocean for a long time, thus ameliorating climate change.
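The study's own flux calculations are not reproduced in this release. Purely as a back-of-the-envelope illustration, the snippet below shows why even a halved daily decomposition rate - the magnitude of reduction reported above - compounds into a very large change in how much sinking particulate organic carbon survives to 1,000 m. The sinking speed and decay rates are assumed round numbers, not values from the paper.

```python
import math

def fraction_surviving(depth_m, sink_speed_m_per_day, decay_per_day):
    """First-order decay of sinking particulate organic carbon (POC) during transit."""
    transit_days = depth_m / sink_speed_m_per_day
    return math.exp(-decay_per_day * transit_days)

depth = 1000.0     # metres
sink = 50.0        # assumed sinking speed, m/day (illustrative)
k_baseline = 0.30  # assumed baseline decomposition rate, per day (illustrative)
k_with_al = 0.15   # 50% lower, as reported for aluminum-treated diatom POC

f0 = fraction_surviving(depth, sink, k_baseline)
f1 = fraction_surviving(depth, sink, k_with_al)
print(f"baseline survival to 1000 m: {f0:.2e}")
print(f"with aluminum:               {f1:.2e}")
print(f"enhancement factor:          {f1 / f0:.0f}x")
```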

Credit: 
Chinese Academy of Sciences Headquarters

Microalgae biofuels: Changing carbohydrates into lipids

image: Electron microscope image of lipid production in the microalgae Chlamydomonas sp.

Image: 
Kato et al. (2021)

A cross-institutional collaboration has developed a technique to repartition carbon resources from carbohydrates to lipids in microalgae. It is hoped that this method can be applied to biofuel production. This discovery was the result of a collaboration between a research group at Kobe University's Engineering Biology Research Center consisting of Project Assistant Professor KATO Yuichi and Professor HASUNUMA Tomohisa et al., and Senior Researcher SATOH Katsuya et al. at the Takasaki Advanced Radiation Research Institute of the Quantum Beam Science Research Directorate (National Institutes for Quantum and Radiological Science and Technology).

These research results were published on April 9, 2021 in the international academic journal Communications Biology.

Main Points

Microalgae are highly capable of producing lipids by fixing atmospheric CO2 via photosynthesis, making them promising candidates for biofuel production.

In light/dark conditions (i.e. day and night), the majority of microalgae's carbon resources obtained from CO2 are accumulated as carbohydrates (starch). This makes it difficult to get microalgae to produce lipids.

The researchers used ion beam mutagenesis to develop a strain of microalgae that can produce large amounts of lipids even under light/dark conditions.

In this microalgae mutant, the starch debranching enzyme gene was disrupted, causing it to produce phytoglycogen, which is easily broken down. The carbon resources were then repartitioned from carbohydrate production to lipid production.

Research Background

Biofuels are renewable resources that have received much attention in the move towards creating more sustainable societies. Microalgae are photosynthetic organisms that are highly capable of producing lipids from carbon dioxide in the atmosphere, making them promising candidates for biofuel production. However, a Kobe University research group consisting of Project Assistant Professor Kato Yuichi and Professor Hasunuma Tomohisa et al. discovered that the majority of carbon resources were diverted to starch production instead of lipid production under light/dark conditions (i.e. day and night). This is a problem when cultivating microalgae species outside.

Research Methodology

For this research study, Project Assistant Professor Kato and Professor Hasunuma's Kobe University research group collaborated with Senior Researcher Satoh et al. at the National Institutes for Quantum and Radiological Science and Technology (QST). The researchers used the ion beam at QST's Takasaki Advanced Radiation Research Institute to induce mutation in the microalgae. This enabled them to cultivate a new mutant strain called Chlamydomonas sp. KOR1 (*1), which can produce large quantities of lipids even in light/dark conditions.

The researchers discovered that this KOR1 strain has disruptions in the starch debranching enzyme (*2) gene ISA1, causing it to produce a different carbohydrate: phytoglycogen (*3) instead of starch (Figure 1).

Normally, microalgae synthesize and accumulate carbohydrates (starch) during light periods and break them down in the dark. However, much of the accumulated starch cannot be completely broken down. In contrast, the carbohydrate synthesized by KOR1 (phytoglycogen) was completely broken down during the dark period. The results of the KOR1 metabolome analysis (*4) revealed an overall increase in intermediate metabolites in both the starch and lipid synthesis pathways (including fructose-6-phosphate, glucose-6-phosphate, acetyl-CoA and glycerol 3-phosphate). From this analysis, the researchers illuminated the metabolic mechanism underlying the increased lipid production that resulted from ISA1 gene disruption. In the KOR1 strain, the carbohydrate (phytoglycogen) was quickly broken down, and the intermediate metabolites subsequently induced the carbon resources to be repartitioned to lipid production (Figure 2).

Further Developments

In order to produce biofuels using microalgae, it is necessary to cultivate these organisms outside in the sunlight. However, there is an unavoidable decrease in lipid production under these light/dark conditions. The technique of 'repartitioning carbon resources by disrupting the starch debranching enzyme gene' developed through this research is one answer to this problem. It is hoped that this new method can contribute towards the large-scale implementation of biofuel production using microalgae.

Credit: 
Kobe University

Fast changing smells can teach mice about space

Researchers at the Francis Crick Institute and UCL (University College London) have found that mice can sense extremely fast and subtle changes in the structure of odours and use this to guide their behaviour. The findings, published in Nature today (Wednesday), alter the current view on how odours are detected and processed in the mammalian brain.

Odour plumes, like the steam off a hot cup of coffee, are complex and often turbulent structures, and can convey meaningful information about an animal's surroundings, like the movements of a predator or the location of food sources. But it has previously been assumed that mammalian brains can't fully process these temporal changes in smell because they happen so rapidly, much faster than an animal can sniff.

Using behavioural experiments where mice were exposed to incredibly short bursts of odour, neural imaging, electrophysiology and computer models, the scientists found that mice can, in fact, detect very rapid fluctuations within odour plumes, at rates previously not thought possible. They also showed that mice can use this information to distinguish whether odours are coming from the same or different sources, even if they are very close to each other.

This suggests that the mammalian olfactory system, responsible for the sense of smell, is also key in processing the awareness of physical space and surroundings, guiding decisions important to survival.

Andreas Schaefer, senior author and group leader of the Sensory Circuits and Neurotechnology Laboratory at the Crick and Professor of Neuroscience at UCL says: "From an evolutionary point of view our findings make sense as they help to explain why there is a lot of computational power within the olfactory bulb, the part of the brain where the nose sends signals to. It isn't just processing chemicals from odours but can also calculate information about physical distance and source. It would have been odd for evolution to create such processing power in this part of the brain if it were not being used to help the species survive."

In one key experiment, the scientists trained mice to detect whether two odours were coming from the same source or separate sources. The mice were able to correctly distinguish this difference even when the odours were released in short blips, lasting only a 40th of a second each (40 Hz).

Tobias Ackels, postdoc in the Sensory Circuits and Neurotechnology Laboratory at the Crick says: "Previous research into the sense of smell was done on the assumption that mice couldn't distinguish the fine, fluctuating information in odour plumes.

"We've shown that mice can access and process this information - this opens up a new dimension for studying the brain; we can run experiments that more effectively trigger neurons in a natural way and challenge the olfactory bulb. This will allow us to find out more about how this part of the brain works and how information about the world is extracted by neural circuits."

As part of the study, the scientists designed new technologies including a high-speed odour delivery device and equipment that can measure several odours simultaneously with extremely high precision.

These innovations will enable more sophisticated work on the olfactory bulb, increasing our knowledge of how this brain region processes information about the environment and influences behaviour. Ultimately, the team aim to build understanding of how sensory circuits link the external world with internal thought and action.

Credit: 
The Francis Crick Institute

New Monarch butterfly breeding pattern inspires hope

image: A monarch caterpillar found in San Francisco in February.

Image: 
WSU

PULLMAN, Wash. -- A count of the Western Monarch butterfly population last winter saw a staggering drop in numbers, but there are hopeful signs the beautiful pollinators are adapting to a changing climate and ecology.

The population, counted by citizen scientists at Monarch overwintering locations in southern California, dropped from around 300,000 three years ago to just 1,914 in 2020, leading to an increasing fear of extinction. However, last winter large populations of monarchs were found breeding in the San Francisco and Los Angeles areas. Prior to last winter, it was unusual to find winter breeding by monarchs in those locations.

"There's more to it than just counting overwintering butterflies," said David James, an associate professor in Washington State University's Department of Entomology. "It seems that Monarchs are evolving or adapting, likely to the changing climate, by changing their breeding patterns."

The larger numbers of reported sightings of winter breeding monarchs around the San Francisco Bay area prompted James to write a new commentary article in the journal Animal Migration.

The only way to count breeding populations last winter was to look at online citizen scientist observations, supported by limited field work, James said.

"There has been a huge increase in caterpillars in the Bay area, indicating that those populations are breeding," he said. "The data are limited and preliminary, but we think the population is at least double what has been reported. However, it's hard to tell since they're dispersed over much of California."

Past becomes present

James said this pattern of Monarch butterflies adapting looks familiar to him because he saw something similar while working on his Ph.D. dissertation over 40 years ago in Sydney, Australia.

In the late 1970s, the Monarch population in Australia saw huge declines. Scientists thought it was due to habitat loss, a common guess at one factor causing population declines in the western U.S. now.

"In Australia, Monarchs haven't gone extinct," James said. "They've just adapted and moved along with a smaller population. And there's no effort to preserve them there because they aren't a native species. They're just very resilient."

Sunnier outlook

Though the declining population is a concern, James believes Monarchs in the western U.S. will experience a similar plateau and not go extinct.

"San Francisco is very similar, climate-wise, to the area around Sydney," James said. "And seeing this winter breeding, which is something new we saw in Australia in the late 1970s, leads me to think that Monarchs will adapt well to the changing climate in the western US."

He is now working with citizen scientists to collect more data on winter breeding in California that can show this evolution and adaptability.

"The Monarch is like the cockroach of butterflies," James said. "It's very persistent and adaptable all around the world. The population decline is very worrying, but I remain optimistic that it will persist in the western US, although maybe at lower levels than before."

Why Monarchs matter

Monarchs are iconic and very popular, basically the poster insect when anyone thinks about butterflies. Their large orange wings with black accents are immediately recognizable. They're also important pollinators all along their migration routes, which in the western U.S. is basically from the Pacific Northwest to southern California. The loss of habitat for their favored milkweed is one reason for their dramatic population decline.

"Beyond their beauty is their role in ecology," James said. "They pollinate and they are also an important part of the food chain. There's a whole range of reasons why people care about them and don't want them to go extinct."

James is continuing his long-running Monarch tagging program, in which the butterflies are raised and tagged by people in the Northwest, including inmates at a prison, then released so they can migrate south for winter. He thinks he'll find more tagged monarchs breeding around the Bay Area, rather than in non-breeding overwintering colonies, as happened last winter.

He plans to work with citizen scientists to collect and crunch the data to come to solid scientific conclusions. Until then, he maintains his optimism about how well these butterflies adapt.

"We don't know if this adaptation will continue and how successful it will be," James said. "The western Monarch population is quite precarious right now. It's at a tipping point, and something is happening. We need to do more work to find out exactly what is happening."

Credit: 
Washington State University

Water flora in the lakes of the ancient Tethys Ocean islands

image: Scanning electronic microscope images of gyrogonites of the new species Mesochara dobrogeica (above) and the utricles of the new Clavator ampullaceus var. latibracteatus variety (below) found in the region of Dobrogea (Romania).

Image: 
Cretaceous Research

A study published in Cretaceous Research expands the paleontological richness of continental fossils of the Lower Cretaceous with the discovery of a new water plant (a charophyte), the species Mesochara dobrogeica. The study also identifies a new variety of charophyte from the Clavator genus (in particular, Clavator ampullaceus var. latibracteatus) and reveals a set of paleobiogeographical data from the Cretaceous much richer than other continental records such as dinosaurs'.

Among the authors of the study are Josep Sanjuan, Alba Vicente, Jordi Pérez-Cano and Carles Martín-Closas, members of the Faculty of Earth Sciences and the Biodiversity Research Institute (IRBio) of the University of Barcelona, in collaboration with the expert Marius Stoica, from the University of Bucharest (Romania).

Charophytes: a tool for biostratigraphy

Charophytes are pluricellular algae considered to be the ancestors of vascular plants. From the Silurian period to the present, they have occupied a range of lacustrine water environments (oligotrophic, alkaline and brackish waters). Nowadays, human activity (exploitation of natural habitats, drainage of wetlands, pollution, etc.) threatens the conservation of charophyte meadows.

For the scientific community, the fossil remains of charophytes - especially their calcified fructifications - are abundant microfossils with high biostratigraphic value for dating strata at local and regional scales. With a wide distribution and high rates of evolution and extinction, some species became excellent fossil indicators of the relative age of continental units.

Water flora in the lakes of the ancient Tethys Ocean islands

The study published in the journal Cretaceous Research focused on the analysis of charophyte-rich water paleoenvironments in two continental formations - Zăvoaia and Gherghina - dominated by clay, silt and loam of lacustrine origin.

In the Lower Cretaceous, the charophyte flora of the paleo-islands of the large archipelago that occupied the area of present-day Europe and the Maghreb showed a distinct biogeographical identity. On the islands of this archipelago in the ancient Tethys Ocean, the Clavatoraceae family stood out for its abundance and biodiversity.

The conclusions show that 75% of the charophyte taxa are shared between the Iberia and Hateg (Romania) paleo-islands, an overlap "that would differentiate these insular floras from the neighbouring continental floras, both the Asiatic ones in the east and the North American ones in the west," notes Josep Sanjuan, lecturer at the Department of Earth and Ocean Dynamics and first author of the article.

Despite the high affinity that existed between the charophyte floras of the islands that formed the archipelago of the ancient Tethys Ocean, "there could also be island endemisms," says researcher Alba Vicente, who works at the National Autonomous University of Mexico. The dominant taxa on these islands belong to the Clavatoraceae, an extinct family of charophytes. "Specifically, two subfamilies are represented (Clavatoroidae and Atopocharoidae), and one of the most prominent and common genera on all these islands was Globator, whose evolution is a very useful tool for dating the continental successions of the Lower Cretaceous," note the experts Jordi Pérez-Cano and Carles Martín-Closas.

New findings in the charophyte paleontological records

The new species Mesochara dobrogeica is known from gyrogonites - the fossilized oospores of charophytes - with an ovoid morphology and a pointed apex and base. This small fossil fructification - about 385 microns high and 310 microns wide - presents pore-shaped apical ornamentation. "This new species from the Mesochara genus would be the oldest ornamented piece of the current charophyte Charoidae family," notes Josep Sanjuan, who also collaborates with the American University of Beirut (Lebanon).

The Clavator ampullaceus var. latibracteatus is a new charophyte fossil variety whose fructification, the utricle, is large (about 769 microns high and 802 microns wide) and has bilateral symmetry. It consists of a phylloid (a leaf-like structure) close to the main axis, two internal lateral bract cells and a bract cell in an abaxial position. The two internal bract cells appear near the apical pore and have a complex structure that characterizes this new variety.

Credit: 
University of Barcelona

Robotic flexing: biologically inspired artificial muscles made from motor proteins

image: This microgripper (2 mm in length) was closed by a printable actuator that formed only in the illuminated area.

Image: 
Yuichi Hiratsuka from JAIST.

Ishikawa, Japan - Inside our cells, and those of the best-known lifeforms, exists a variety of complex compounds known as "molecular motors." These biological machines are essential for various types of movement in living systems, from the microscopic rearrangement or transport of proteins within a single cell to the macroscopic contraction of muscle tissues. At the crossroads between robotics and nanotechnology, a highly sought-after goal is to find ways to leverage the action of these tiny molecular motors to perform more sizeable tasks in a controllable manner. However, achieving this goal will certainly be challenging. "So far, even though researchers have found ways to scale up the collective action of molecular motor networks to show macroscopic contraction, it is still difficult to integrate such networks efficiently into actual machines and generate forces large enough to actuate macroscale components," explains Associate Professor Yuichi Hiratsuka from the Japan Advanced Institute of Science and Technology, Japan.

Fortunately, Dr. Hiratsuka, in collaboration with Associate Professor Takahiro Nitta from Gifu University and Professor Keisuke Morishima from Osaka University, both in Japan, have recently made remarkable progress in the quest to bridge the micro with the macro. In their latest study published in Nature Materials, this research team reported the design of a novel type of actuator driven by two genetically modified biomolecular motors. One of the most attractive aspects of their biologically inspired approach is that the actuator self-assembles from the basic proteins by simple light irradiation. In a matter of seconds after light hits a given area, the surrounding motor proteins fuse with rail-like proteins called microtubules and arrange themselves into a hierarchical macroscopic structure that resembles muscle fibers.

Upon formation around the target (illuminated) zone, this "artificial muscle" immediately contracts, and the collective force of the individual motor proteins is amplified from a molecular scale to a millimeter one. As the scientists showed experimentally, their approach could be ideal for small-scale robotics applications, such as actuating microscopic grippers to handle biological samples (Figure 1). Other millimeter-scale applications also demonstrated include joining separate components together, such as miniature cogwheels, and powering minimalistic robotic arms to make an insect-like crawling microrobot.

What's also very remarkable about this technique is that it is compatible with existing 3D printing techniques that use light, such as stereolithography. In other words, microrobots with built-in artificial muscles may be 3D printable, enabling their mass production and hence increasing their applicability to solve various problems! "In the future, our printable actuator could become the much-needed 'actuator ink' for the seamless 3D printing of entire robots. We believe that such a biomolecule-based ink can push forward the frontier of robotics by enabling the printing of complex bone and muscle components required for robots to further resemble living creatures," highlights Dr. Hiratsuka.

One potential improvement to the present technique would be finding ways to efficiently decontract the artificial muscles (reversibility). Alternatively, the present strategy could also be changed so as to produce spontaneous oscillatory behavior instead of contraction, as is observed in the mobile cilia of microbes or in insect flight muscles.

In any case, this study effectively shows how mimicking the strategies that nature has come up with is often a recipe for success, as many scientists in the field of robotics have already figured out!

Credit: 
Japan Advanced Institute of Science and Technology

SMART evaluates impact of competition between autonomous vehicles and public transit

image: Spatial distribution changes in PT supply during the competition: (left) Routes with supply decrease; (right) Routes with supply increase

Image: 
Zhejing Cao and Baichuan Mo

Singapore, 5 May 2021 - The rapid advancement of Autonomous Vehicle (AV) technology in recent years has changed transport systems and consumer habits globally. As countries worldwide see a surge in AV usage, the rise of shared Autonomous Mobility on Demand (AMoD) services is likely to be next on the cards. Public Transit (PT), a critical component of urban transportation, will inevitably be affected by the upcoming influx of AMoD, and the question of whether AMoD will co-exist with or threaten the PT system remains unanswered.

Researchers at the Future Urban Mobility (FM) Interdisciplinary Research Group (IRG) at Singapore-MIT Alliance for Research and Technology (SMART), MIT's research enterprise in Singapore, and Massachusetts Institute of Technology (MIT), conducted a case study in the first-mile mobility market from origins to subway stations in Tampines, Singapore, to find out.

In a paper titled "Competition between Shared Autonomous Vehicles and Public Transit: A Case Study in Singapore" recently published in the prestigious journal Transportation Research Part C: Emerging Technologies, the first-of-its-kind study used Game Theory to analyse the competition between AMoD and PT.

The study was simulated and evaluated from a competitive perspective, where both AMoD and PT operators are profit-oriented with dynamically adjustable supply strategies. Using an agent-based simulation, the competition process and system performance were evaluated from the standpoints of four stakeholders--the AMoD operator, the PT operator, passengers, and the transport authority.
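The release does not spell out the game formulation. Purely as a conceptual sketch of the competitive setup described above, the loop below shows the flavor of a best-response iteration between two profit-oriented operators who repeatedly adjust their supply to the rival's last move. The demand-splitting rule, fares and costs are invented placeholders, not the calibrated Singapore simulation.

```python
import numpy as np

MARKET = 10000                    # assumed daily first-mile trips (placeholder)
FARE_AMOD, FARE_PT = 3.0, 1.0     # assumed fares per trip (placeholders)
COST_AMOD, COST_PT = 2.0, 0.5     # assumed cost per unit of supply (placeholders)

def demand_share(own_supply, rival_supply):
    """Toy demand split: more supply attracts a larger share of passengers."""
    return own_supply / (own_supply + rival_supply + 1e-9)

def profit(own_supply, rival_supply, fare, cost):
    return fare * MARKET * demand_share(own_supply, rival_supply) - cost * own_supply

def best_response(rival_supply, fare, cost, grid=np.arange(100, 10001, 100)):
    """Pick the supply level that maximizes profit against the rival's current supply."""
    profits = [profit(s, rival_supply, fare, cost) for s in grid]
    return grid[int(np.argmax(profits))]

amod, pt = 5000, 5000  # initial supplies
for _ in range(50):    # iterate until the strategies settle (approximate equilibrium)
    amod = best_response(pt, FARE_AMOD, COST_AMOD)
    pt = best_response(amod, FARE_PT, COST_PT)

print("approximate equilibrium supply:", {"AMoD": int(amod), "PT": int(pt)})
```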

"The objective of our study is to envision cities of the future and to understand how competition between AMoD and PT will impact the evolution of transportation systems," says the corresponding author of the paper, SMART FM Lead Principal Investigator and Associate Professor at MIT Department of Urban Studies and Planning, Jinhua Zhao. "Our study found that competition between AMoD and PT can be favourable, leading to increased profits and system efficiency for both operators when compared to the status quo, while also benefiting the public and the transport authorities. However, the impact of the competition on passengers is uneven and authorities may be required to provide support for people who suffer from higher travel costs or longer travel times in terms of discounts or other feeder modes."

The research found that the competition between AMoD and PT would compel bus operators to reduce the frequency of inefficient routes and allow AMoDs to fill in the gaps in the service coverage. "Although the overall bus supply was reduced, the change was not uniform", says the first author of the paper, a PhD candidate at MIT, Baichuan Mo. "We found that PT services will be spatially concentrated to shorter routes that feed directly to the subway station, and temporally concentrated to peak hours. On average, this reduces travel time of passengers but increases travel costs. However, the generalised travel cost is reduced when incorporating the value of time." The study also found that providing subsidies to PT services would result in a relatively higher supply, profit, and market share for PT as compared to AMoD, and increased passenger generalised travel cost and total system passenger car equivalent (PCE), which is measured by the average vehicle load and the total vehicle kilometer traveled.

The findings suggest that PT should be allowed to optimise its supply strategies under specific operation goals and constraints to improve efficiency. On the other hand, AMoD operations should be regulated to reduce detrimental system impacts, including limiting the number of licenses, operation time, and service areas, resulting in AMoD operating in a manner more complementary to the PT system.

"Our research shows that under the right conditions, an AMoD-PT integrated transport system can effectively co-exist and complement each other, benefiting all four stakeholders involved," says SMART FM alumni, Hongmou Zhang, a PhD graduate from MIT's Department of Urban Studies and Planning, and now Assistant Professor at Peking University School of Government. "Our findings will help the industry, policy makers and government bodies create future policies and plans to maximise the efficiency and sustainability of transportation systems, as well as protect the social welfare of residents as passengers."

The findings of this study are important for future mobility industries and relevant government bodies, as they provide insight into possible evolutions of, and threats to, urban transportation systems with the rise of AV and AMoD, and offer a predictive guide for future policy and regulation design for an AMoD-PT integrated transport system. Policymakers should consider the uneven social costs, such as increased travel costs or travel time, especially for vulnerable groups, and support those groups with discounts or other feeder modes.

The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) programme.

Credit: 
Singapore-MIT Alliance for Research and Technology (SMART)

Elegant constrictions in a cellular kill switch

image: The team revealed the protein's 3D structure using KAUST's state-of-the-art Titan Krios cryo-transmission electron microscope.

Image: 
© 2021 KAUST; Anastasia Serin

The inner workings of a "self-destruct switch" present on human cells that can be activated during an immune response have been revealed. In unprecedented detail, KAUST scientists with collaborators in China report the 3D atomic structure of the human PANX1 protein, which may help underpin new therapies that target the immune system.

When cells become infected with a pathogen, the body's immune system works to destroy the infected cells before they become a threat to surrounding tissues. This form of cell death, during which a cell releases potent danger signals to recruit immune cells, is known as pyroptosis.

The protein PANX1, a channel pore that dots a cell's outer membrane, has been implicated in pyroptosis because it allows the passage of ions and molecules out of the cell, which helps mark it for destruction. But how it carries out this function, or "flicks the switch" on cell death, has been unclear.

"We wanted to know the gating mechanisms of PANX1 by resolving the previously unrevealed protein ends -- the C- and N-termini -- to understand their importance in pyroptosis," says study co-first author Baolei Yuan, a Ph.D. student in Mo Li's lab.

Li's collaborators, led by Maojun Yang at Tsinghua University, first isolated the protein and revealed the 3D structure using data collected on KAUST's state-of-the-art Titan Krios cryo-transmission electron microscope and a similar instrument at Tsinghua University.

Through this, the researchers visualized a number of amino acids within the protein that "pinch" the pore to control the passage of molecules across the cell membrane. Using cultured cells, Li's team confirmed the indispensable role these amino acids and PANX1 play in pyroptosis.

But the molecular details of how ions and molecules cross the PANX1 pore only became clear when the researchers teamed up with Xin Gao, whose group was able to simulate the molecular dynamics.

"I was surprised by the intricate and beautifully arranged constrictions in the permeation path of the PANX1 channel," says Yuan.

Together, the cryo-EM and molecular dynamics data revealed that the N- and C-termini stretch deeply into the pore to form barriers under normal conditions to keep ions and small components inside the cells. But once stimulated, the two termini are either modified or cleaved to make the channel more permeable, releasing molecules that help destroy the cell.

"These findings give us a much better understanding of the mechanism that controls pyroptosis," says Li. "PANX1 has been associated with diverse and numerous pathophysiological conditions related to the immune system. Our study provides a high-quality reference for potential drug targets."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Scroll'n'roll -- nanomaterials towards effective photocatalytic pollution treatment

image: Thanks to the technology developed by the team of Prof. Juan Carlos Colmenares, it is easy to create materials that, under sunlight, can effectively capture toxic compounds from the environment and neutralize them.

Image: 
Source: IPC PAS, Grzegorz Krzyzewski

We live in times when air and water are among the most limited and precious resources on Earth. No matter the geographical location, pollution spreads quickly, negatively affecting even the purest regions, like Mount Everest. Anthropogenic activity thus degrades the quality of the environment, making it harmful for flora and fauna. Current waste treatment methods are not sufficient, so novel and effective methods for maximizing pollutant removal are badly needed. One robust and promising solution for degrading various highly toxic chemicals in air and water is based on nanotechnology. Nanomaterials offer unique physicochemical properties that make them capable of catalytically detoxifying harmful substances faster and more efficiently than classical filtration-based approaches. Facing the global pollution challenge, an international team led by Prof. Juan Carlos Colmenares from the Institute of Physical Chemistry, Polish Academy of Sciences (IPC PAS) is opening new horizons in the treatment of harmful chemicals. The team has synthesized a novel nanomaterial that can be used to degrade multiple toxic compounds.

Periodic table in the air and water

Nowadays, air and water contamination is higher than ever before, forcing the whole world to seek better treatment methods. Industrial wastes are full of organic molecules that are harmful to all types of living organisms. Quite often, they persist and accumulate in the environment for a long time, and once they enter the body, they may cause severe problems. Depending on their type, some can have a lethal effect even after short exposure to low concentrations. Even toxic compounds that are not breathed in do not simply disappear: the air is full of humidity that sooner or later carries them into water and soil. Air pollution becomes a water and soil problem, making it more challenging to treat effectively.

Small size - high hopes

Many techniques are used for air and water purification, but the scale of environmental contamination requires novel solutions, including applications based on nanotechnology. Nanomaterials offer a tremendously high surface-area-to-volume ratio and high surface activity, making them highly reactive towards many chemicals. While only some offer low cytotoxicity, their diversity is constantly growing, establishing them as ideal candidates for application in this particular field - environmental remediation.

Recently, researchers from IPC PAS led by Prof. Juan Carlos Colmenares proposed using a commercial, chemically stable and low-toxicity compound - titanium dioxide (TiO2 P-25) - and combining it with a carbon-based material, specifically reduced graphite oxide (rGO), for effective detoxification of various compounds in air and water. TiO2 works as a photocatalyst that can degrade a wide range of chemical pollutants, including organic compounds and even microbes, under UV or even solar light. Moreover, its synthesis is cost-effective, and the material does not decompose on exposure to air. The researchers proposed modifying the classical TiO2 nanoparticles by using ultrasound to scroll them, like a croissant, into nanometric rolls called nanotubes. The scrolling procedure leads to unique properties and improves the nanocomposite's photocatalytic performance, with the nanorolls acting as a trap for some harmful compounds. That feature makes the material an efficient adsorbent, in which the treated molecules can get stuck between particular layers of the nanotubes. Moreover, the material was exfoliated and then chemically converted into a novel titanate form that covers the whole of the nanotubes uniformly.

"The developments in nanotechnology pronounced our capability to imagine and hence design novel nanomaterials. Our vision that became reality was to synthesize a nanocomposite that combines the unique properties of the thinner two-dimensional nanomaterial, graphene, with the high photoreactivity of titanium oxide nanotubes. The incorporation of reduced graphite oxide had a positive impact on the desired physicochemical properties as well as on both photocatalytic and adsorptive efficiency comparing to solely titanate nanotubes and the benchmark titanium oxides. This composite presented an elevated detoxification efficiency against the assumed as the "King" of Chemical Warfare agents, mustard gas. Going a step further, experiments revealed additionally that this composite can have alternative environmental remediation applications against a plethora of organic pollutants as well as to be utilized as catalyst for the upgrade of biomass derived platform compounds towards important green-oriented chemicals" - remarks the first author, dr. Dimitrios A. Giannakoudakis

The as-synthesized nanotube-shaped scrolled titanate nanosheets were immobilized onto the reduced graphite oxide (rGO) flakes, forming a more efficient catalyst than TiO2 or carbon-rich rGO alone. Moreover, the coupling of titanates with rGO sheets improves the photoactivity of the composite. TiO2 works as a photocatalyst under ultraviolet light, while titanate nanotubes coupled with rGO can also absorb light in the visible range. The coupling of these two components makes the composite universal for purifying air and water of various harmful chemicals under solar irradiation.

In their work, the authors presented a modern synthetic approach to achieve the synthesis of the nanocomposite. They have also shown that the toxic vapors are detoxified onto the composite surface, forming less- or non-toxic molecules that can remain strongly adsorbed on the material's surface. That work was published in Chemical Engineering Journal (Elsevier), presenting spectacular efficiency in detoxification, making the composite a promising material for highly effective air and water pollution treatment.

Prof. Juan Carlos Colmenares claims, "We consider the ultrasonication pre-treatment before the hydrothermal treatment crucial to the formation of our targeted homogeneous nanocomposite, consisting of the nanotube-shaped scrolled trititanate nanosheets with well-dispersed and exfoliated rGO, as magnetic stirring (silent pre-treatment) gives us an almost inactive photocatalyst. The superiority of our nanocomposite over the benchmark photocatalyst TiO2 P25 arises from its nanostructured nature and is associated with the high amount of surface functional groups that act as catalytic centers and with its developed porosity, which together with its high level of reusability make it a perfect material for environmental remediation under solar irradiation. Our research efforts over the last years (in fruitful collaboration with Prof. Teresa J. Bandosz from the City College of New York, USA) revealed that the utilization of ultrasound irradiation during materials synthesis can lead to novel nanomaterials with unique physicochemical properties."

The development and application of nanomaterials like the presented nanocomposite for air and water remediation are gaining importance worldwide. Thanks to nanomaterials' unique properties and their coupling with conventional methods, we step closer towards efficient decontamination of the environment.

Credit: 
Institute of Physical Chemistry of the Polish Academy of Sciences

Superconductivity, high critical temperature found in 2D semimetal W2N3

image: The model predicts a remarkably high superconducting critical temperature of 21 K in the easily exfoliable, topologically nontrivial 2D semimetal W2N3.

Image: 
Davide Campi @EPFL

Superconductivity in two-dimensional (2D) systems has attracted much attention in recent years, both because of its relevance to our understanding of fundamental physics and because of potential technological applications in nanoscale devices such as quantum interferometers, superconducting transistors and superconducting qubits.

The critical temperature (Tc), or the temperature below which a material acts as a superconductor, is an essential concern. For most superconducting materials, it lies between absolute zero and 10 kelvin, that is, between -273 and -263 degrees Celsius - too cold to be of any practical use. The focus has therefore been on finding materials with a higher Tc.

While researchers have discovered materials that act as conventional superconductors at temperatures as high as 250 K under extreme pressure, the reported record until now among 2D materials stands at between 7 and 12 K in MoS2 according to experimental evidence, and up to 20 K in some doped 2D materials and in intrinsic 2D metals according to theoretical modelling. Theoretical predictions have put a superconducting transition at a temperature above that of liquid hydrogen for some recently realized 2D boron allotropes, but these materials cannot be obtained by exfoliation from van der Waals-bonded 3D parents and must be grown directly on a metal substrate. This results in relatively strong interactions that are predicted to suppress the superconducting critical temperature down to just 2 K in a supported sample.

In parallel to this search for higher Tc, researchers have been looking for materials that combine nontrivial topological properties with superconductivity. This search is driven both by a quest for exotic states of matter as well as for deeper understanding of the interactions between topological edge states and the superconducting phase.

In the paper "Prediction of phonon-mediated superconductivity with high critical temperature in the two-dimensional topological semimetal W2N3" authors Nicola Marzari, head of the Laboratory of Theory and Simulation of Materials at EPFL, scientist Davide Campi and PhD student Simran Kumari use first-principles calculations to identify intrinsic superconductivity in monolayer W2N3, a material that has recently been identified as being easily exfoliable from a layered hexagonal-W2N3 bulk by calculations, a theory also supported by experimental evidence. They find a critical temperature of 21 K, that is, just above liquid hydrogen and a record-high transition temperature for a conventional phonon-mediated 2D superconductor.

They also examine the effects of biaxial strain on the electron-phonon couplings and predict a strong dependence of the electron-phonon coupling constant on strain, making 2D W2N3 a very promising platform for studying different interaction regimes and testing the limits of current theories of superconductivity. Finally, they argue that the material could be doped so that currently unoccupied helical edge states 0.5 eV above the Fermi level become filled while superconductivity persists (albeit with a much lower transition temperature), making W2N3 a viable candidate for studying and exploiting the possible coexistence and interaction of the superconducting state with topologically protected edge states.

Credit: 
National Centre of Competence in Research (NCCR) MARVEL

Release of drugs from a supramolecular cage

image: Researchers succeed in constructing a supramolecular cage and loading it with pharmaceutically active cargo. In aqueous solution, ultrasound waves open the cage and release the drug.

Image: 
HHU / Robin Küng

How can a highly effective drug be transported to the precise location in the body where it is needed? In the journal Angewandte Chemie, chemists at Heinrich Heine University Düsseldorf (HHU) together with colleagues in Aachen present a solution using a molecular cage that opens upon ultrasonication.

Supramolecular chemistry involves the organization of molecules into larger, higher-order structures. When suitable building blocks are chosen, these systems 'self-assemble' from their individual components.

Certain supramolecular compounds are well suited for 'host-guest chemistry'. In such cases, a host structure encloses a guest molecule and can shield, protect and transport it away from its environment. This is a specialist field of Dr. Bernd M. Schmidt and his research group at the Institute of Organic and Macromolecular Chemistry at HHU.

The chemists in Düsseldorf collaborated with colleagues from the DWI Leibniz Institute for Interactive Materials to find a system that may one day even be able to transport cargo molecules through the human body and release the drug at the desired location.

The solution may be to use discrete 'Pd6(TPT)4 cages'. These are octahedral, cage-like assemblies bearing polymer chains on each vertex. They are composed of four triangular panels held together by six palladium atoms and connecting units.

When the individual components are added to an aqueous solution in the correct ratio, the cages self-assemble. If smaller, hydrophobic molecules are added to the cages, they enter the cavities. The researchers demonstrated this effect using pharmaceutically active molecules, like ibuprofen and progesterone.
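Schematically, and assuming the metal-to-panel stoichiometry implied by the Pd6(TPT)4 formula (the precise capping ligands and overall charges used in this study are not given in this release), the self-assembly and guest uptake can be written as:

\[
6\,\mathrm{Pd^{2+}} + 4\,\mathrm{TPT} \;\longrightarrow\; [\mathrm{Pd_6(TPT)_4}]^{12+}, \qquad
[\mathrm{Pd_6(TPT)_4}]^{12+} + \mathrm{guest} \;\longrightarrow\; \mathrm{guest} \subset [\mathrm{Pd_6(TPT)_4}]^{12+},
\]

where TPT denotes the triangular tripyridyl-triazine panel and the guest is a hydrophobic molecule such as ibuprofen or progesterone.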

"The special trick with our system involves the pre-determined rupture points", explains Dr. Schmidt, last author of the study. "The palladium atoms hold all compounds with a comparatively weak bond. Once you succeed in breaking the atoms out of the compound, the entire octohedral structure breaks apart."

To break these bonds, the researchers in Aachen use powerful ultrasonication, similar to that used medically to break down kidney stones, for example. In water, the ultrasound creates cavitation bubbles that collapse and exert enormous mechanical shear forces on the long polymer chains. The forces are so strong that the palladium atoms are torn from the vertices, rupturing the octahedral cage. The small drug molecules are agitated in the process but are not damaged.

Dr. Robert Göstl (DWI) says: "Localised ultrasound irradiation of the tissue to be treated could mean that the drug transported in the cage is later released at the exact location where the therapy is needed." The drug molecules used in the study serve merely as examples. In principle, a large number of different hydrophobic molecules can be packed into the cage. Unlike other host-guest systems described to date, it is not necessary to alter the drug molecules chemically in order to get them into the cage. "To treat tumours, it would be feasible to use cytostatic drugs as the cargo, for example. By releasing them directly at the site of a solid tumour, it may be possible to carry out chemotherapy that uses much less of the drug and thus causes fewer side effects", explains Schmidt.

This is helped by the fact that the cage has a defined cargo volume, which makes it possible to determine precisely how much of the drug is released at the target site. "The dose administered could even be calculated precisely."
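As a purely illustrative back-of-the-envelope sketch (the numbers below are hypothetical and not taken from the study): if each cage carries a known average number of guest molecules, the released dose follows directly from the amount of cage administered,

\[
m_{\mathrm{drug}} = n_{\mathrm{cage}} \times \bar{N}_{\mathrm{guest}} \times \frac{M_{\mathrm{drug}}}{N_A},
\]

where \(n_{\mathrm{cage}}\) is the number of cages, \(\bar{N}_{\mathrm{guest}}\) the average number of guests per cage, \(M_{\mathrm{drug}}\) the molar mass of the drug and \(N_A\) Avogadro's number. For example, \(10^{16}\) cages each loaded with one ibuprofen molecule (about 206 g/mol) would release roughly 3.4 micrograms.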

The study is a proof of concept that demonstrates the feasibility of the approach. It also convinced the reviewers and editors of the journal Angewandte Chemie, who rated the publication as very important: the work has been classified as a "Hot Paper" and will also be featured on the cover of the upcoming issue.

"The next steps involve determining how real cells respond to our cages. Before any medical use, we need to ensure that they are not toxic."

Credit: 
Heinrich-Heine University Duesseldorf

Examination of an Estonian patient helped discover a new form of muscular dystrophy

image: Professor of Clinical Genetics of the University of Tartu Katrin Õunap

Image: 
University of Tartu

In about a quarter of patients with hereditary diseases, the cause of the disease remains unclear even after extensive genetic testing. One reason is that we still do not know enough about the function of many genes. Of the 30,000 known genes, just a little more than 4,000 have been found to be associated with hereditary diseases.

At the Department of Clinical Genetics of the University of Tartu Institute of Clinical Medicine, under the leadership of Professor Katrin Õunap, patients with hereditary diseases of unclear cause have been studied in various research projects since 2016. In collaboration with the Broad Institute of MIT and Harvard, these patients have undergone extensive genome-wide sequencing analyses at the level of the exome (the sequence of all genes), genome (whole DNA sequence), and transcriptome (RNA transcribed from the genome).

Professor Katrin Õunap explained that in a girl with progressive muscle weakness they found two changes in the JAG2 gene that had not previously been associated with any hereditary disease. "In cooperation with an international team of researchers, we found 22 other patients from all over the world with similar problems and changes in the JAG2 gene," said Õunap.

The study showed that malfunction of the JAG2 gene interferes with the development of muscle cells and their ability to recover, thereby causing progressive muscle damage.

Estonian researchers conducted a transcriptome (RNA) analysis of the patient's muscle tissue, which provided important information on pathological changes in gene expression in muscle cells. "Also, for the first time in Estonia, our patient underwent a special muscular magnetic resonance imaging scan, which revealed a pattern of muscle involvement characteristic of pathogenic variants in JAG2 in lower limb muscles," explained Õunap.

Credit: 
Estonian Research Council