
Aged bone marrow niche impedes function of rejuvenated hematopoietic stem cells

image: An old, rejuvenated HSC in a young niche. Tubulin (green) remains organized (blue = DAPI, nucleus; green = the cytoskeletal protein tubulin). Scale bar = 10 μm.

Image: 
AlphaMed Press

Durham, NC - When leukemia strikes an older person, it is in part due to the aging of his or her hematopoietic stem cells (HSCs). These immature cells can develop into all types of blood cells, including white blood cells, red blood cells and platelets. As such, researchers have focused on rejuvenating HSCs as a way to treat leukemia.

A new study released today in STEM CELLS adds to that body of knowledge by showing that the youthful function of rejuvenated HSCs upon transplantation depends in part on a young bone marrow "niche," the microenvironment surrounding stem cells that interacts with them to regulate their fate.

"The information revealed by our study tells us that the influence of this niche needs to be considered in approaches to rejuvenate old HSCs for treating aging-associated leukemia or immune remodeling," said Hartmut Geiger, Ph.D. He was senior author of the study, conducted by researchers at Ulm University (Germany) and Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio.

Old HSCs exhibit a reduced reconstitution potential and other negative aspects such as altered gene expression profiles and an increase in apolar distribution of proteins. (Polarity is believed to be particularly important to fate decisions on stem cell division and for maintaining an HSC's interaction with its niche. Consequently, a failure to establish or regulate stem cell polarity might result in disease or tissue deterioration.) Aging of HSCs might even affect lifespan.

Researchers already knew that increased activity of a protein called Cdc42, which controls cell division, leads to HSC aging, and that when old HSCs are treated ex vivo with CASIN, an inhibitor of Cdc42 activity, they stay rejuvenated upon transplantation into young recipients. The aim of this latest study was to learn what happens to these rejuvenated HSCs when they are transplanted into an aged niche.

To find out, Dr. Geiger and his team transplanted rejuvenated aged HSCs into three groups of mice: young (8 to 10 weeks old), old (19 to 24 months old) and young cytokine osteopontin (OPN) knockout mice (8 to 12 weeks old). The team had recently demonstrated that a decrease in the level of secreted OPN in the aged bone marrow niche confers hallmarks of aging on young HSCs, and also that secreted OPN regulates HSC polarity. Old HSCs and rejuvenated old HSCs were therefore transplanted into the OPN knockout recipients to test whether a lack of this protein in the niche affects the function of old rejuvenated HSCs.

Outcomes were then analyzed for up to 23 weeks after transplantation.

"They showed us that an aged niche restrains the function of ex vivo rejuvenated HSCs, which is at least in part linked to a low level of OPN found in aged niches," Dr. Geiger reported. "This tells us that in order to sustain the function of rejuvenated aged HSCs, we will likely need to address the influence of an aged niche on rejuvenated HSCs."

Dr. Jan Nolta, Editor-in-Chief of STEM CELLS, said, "This study is an elegant demonstration of the molecular mechanisms by which the marrow microenvironment controls cellular aging of hematopoietic stem cells. This new information will be extremely useful in the fields of transplantation and treatment of leukemia and other disorders of the blood-forming system."

Credit: 
AlphaMed Press

Basketball Mathematics scores big at inspiring kids to learn

image: Children running a relay race as part of Basketball Mathematics.

Image: 
Photographer: Allan Jørgensen, University of Copenhagen

A new study of 756 first- through fifth-graders demonstrates that a six-week mashup of hoops and math has a positive effect on children's desire to learn, gives them an experience of increased self-determination and grows their confidence in mathematics. The Basketball Mathematics study was conducted at five Danish primary and elementary schools by researchers from the University of Copenhagen's Department of Nutrition, Exercise and Sports.

Over the past decades, considerable attention has been paid to exploring different approaches for stimulating children's learning. In particular, the focus has been on how physical activity, separated from the learning activities, can improve children's cognitive performance and learning. Far less attention has been paid to the potential of integrating physical activity into the learning activities themselves. The main purpose of this study was therefore to develop a learning activity that integrates basketball and mathematics, and to examine how it affects children's motivation in mathematics.

Increased motivation, self-determination and mastery

756 children from 40 different classes at Copenhagen area schools participated in the project, where about half of them - once a week for six weeks - had Basketball Mathematics during gym class, while the other half played basketball without mathematics.

"During classes with Basketball Mathematics, the children had to collect numbers and perform calculations associated with various basketball exercises. An example could be counting how many times they could sink a basket from three meters away vs. at a one-meter distance, and subsequently adding up the numbers. Both the math and basketball elements could be adjusted to suit the children's levels, as well as adjusting for whether it was addition, multiplication or some other function that needed to be practiced," explains Linn Damsgaard, who is writing her PhD thesis on the connection between learning and physical activity at the University of Copenhagen's Department of Nutrition, Exercise and Sports.

The results demonstrate that children's motivation for math integrated with basketball is 16% higher compared to classroom math learning. Children also experienced a 14% increase in self-determination compared with classroom teaching, while Basketball Mathematics increased mastery by 6% compared with classroom-based mathematics instruction. Furthermore, the study shows that Basketball Mathematics can maintain children's motivation for mathematics over a six-week period, while the motivation of the control group decreases significantly.

"It is widely acknowledged that youth motivation for schoolwork decreases as the school year progresses. Therefore, it is quite interesting that we don't see any decrease in motivation when kids take part in Basketball Mathematics. While we can't explain our results with certainty, it could be that Basketball Mathematics endows children with a sense of ownership of their calculations and helps them clarify and concretize abstract concepts, which in turn increases their motivation to learn mathematics," says PhD student Linn Damsgaard.

Active math on the school schedule

Associate Professor Jacob Wienecke of UCPH's Department of Nutrition, Exercise and Sports, who supervised the study, says that other studies have demonstrated the benefits of movement and physical activity for children's academic learning. He expects the results of Basketball Mathematics on children's learning and academic performance to be published soon:

"We are currently investigating whether the Basketball Mathematics model can strengthen youth performance in mathematics. Once we have the final results, we hope that they will inspire school teachers and principals to prioritize more physical activity and movement in these subjects," says Jacob Wienecke, who concludes:

"Eventually, we hope to succeed in having these tools built into the school system and teacher education. The aim is that schools in the future will include 'Active English' and 'Active Mathematics' in the weekly schedule as subjects where physical education and subject-learning instructors collaborate to integrate this type of instruction with the normally more sedentary classwork."

Credit: 
University of Copenhagen - Faculty of Science

Social comparisons drive income's effect on happiness in states with higher inequality

image: Sociology professor and head Tim Liao of the University of Illinois Urbana-Champaign found that Americans were happier in states with higher wealth inequality when they had people within their gender/ethno-racial group - some more affluent and others poorer than themselves - to use as benchmarks for social comparisons.

Image: 
Photo courtesy Tim Liao

CHAMPAIGN, Ill. -- In a state with greater income inequality, the happiest place to occupy is not at the pinnacle of the income distribution, as one might think, but somewhere in the middle that provides clear vantage points of people like ourselves, a new study suggests.

According to sociologist Tim Liao of the University of Illinois Urbana-Champaign, it's the ability to compare ourselves with people of similar backgrounds, both people who earn more and others who earn less, that determines how our income affects our happiness - not the absolute amount we earn.

"Contrary to popular belief, more income does not necessarily make people happier. The actual amount a person earns doesn't matter much in terms of happiness," Liao said. "People who can make both upward and downward comparisons - especially with others in the same gender and ethno-racial group - are in the best position as far as their subjective well-being."

In the study, published in the journal Socius, Liao found that in states where incomes were relatively equal, individuals' happiness was affected less by their incomes because their economic positions were less clearly defined, making social comparisons less meaningful.

While there has been significant research on happiness and income inequality, much of that work was based on aggregate-level income inequality and global measures of happiness that did not capture the relationship at the individual level, he said.

Recent research suggests that social comparison theory, the premise that people's self-evaluations are based on their comparisons with others whom they perceive to be better or worse off, might play a key role.

Liao wanted to explore whether people's placement in the income distribution mattered - that is, if those who could conduct these upward and downward social comparisons were happier than outliers who were much more affluent or poorer than their peers.

Since individuals select the people they use as benchmarks for social comparisons, Liao also wanted to investigate which demographic group - gender, ethnicity/race or both of these - was most relevant.

Because no single survey was available that provided data on happiness along with income and demographic characteristics, Liao linked the data from two national surveys, both conducted in 2013, that involved many of the same respondents. Liao's sample included more than 1,900 people.

The 2013 American Time Use Survey was the most recent survey with well-being questions and provided Liao with a measure of each person's happiness. For that study, participants kept a time diary for a single day, rating on a seven-point scale how happy they felt while performing three randomly chosen routine activities. The ratings were added together to achieve a composite score representing each person's happiness level.

"Assessing a person's happiness as they go about their daily activities - a concept social scientists call 'experienced happiness' - may more accurately reflect their overall contentment with life than their responding to survey questions that ask them to rate how happy they are in general subjective terms," Liao said.

Using participants' annual income and demographic data from the Current Population Survey, Liao modeled income inequality at the state and individual levels.

He developed a measure at the individual level by comparing individuals' annual incomes with those of peers within the same gender, ethno-racial and gender/ethno-racial groups in their state.
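
As an illustration of the idea, the sketch below (Python) computes a crude peer-comparison score: for each person it counts how many same-group peers earn more and how many earn less, and flags those whose comparisons can only run one way. The grouping keys and the scoring rule are hypothetical stand-ins for illustration, not Liao's actual measure.

    from collections import defaultdict

    def comparison_scores(people):
        """people: list of dicts with 'income', 'gender' and 'race' keys."""
        groups = defaultdict(list)
        for p in people:
            groups[(p["gender"], p["race"])].append(p["income"])
        scores = []
        for p in people:
            peers = groups[(p["gender"], p["race"])]
            richer = sum(x > p["income"] for x in peers)
            poorer = sum(x < p["income"] for x in peers)
            total = richer + poorer
            # 0 = balanced two-way comparisons; 1 = purely one-directional
            scores.append(abs(richer - poorer) / total if total else 1.0)
        return scores

    demo = [{"income": 30_000, "gender": "F", "race": "A"},
            {"income": 55_000, "gender": "F", "race": "A"},
            {"income": 90_000, "gender": "F", "race": "A"}]
    print(comparison_scores(demo))  # [1.0, 0.0, 1.0]: only the middle earner compares both ways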

Liao found that the gender/ethno-racial grouping was the most salient for social comparisons, because individuals' inequality scores were more analogous under this grouping than when people were grouped by gender or ethnicity/race alone.

In examining links between individuals' inequality scores and happiness within each group, Liao found that individuals with higher inequality scores than their peers also had lower happiness scores.

That is, people whose incomes were significantly higher or lower than their peers - meaning they could only make social comparisons upward or downward rather than in both directions - were less happy overall.

Likewise, Liao found that as income inequality within a state increased, the negative association between one-directional social comparisons and happiness also increased.

Liao said the findings confirm the importance of social comparison theory in research on happiness and income inequality.

And the same analytic method could be applicable in investigations of other social concerns at the individual level, he said, such as the connections between inequality and adverse mental and physical health outcomes.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Megafauna extinction mystery - size isn't everything

image: Palorchestes azael, sometimes referred to as the 'marsupial tapir'. It was a cow-sized beast, which probably weighed about 500 kg. It was one of the many species of marsupial megafauna that went extinct sometime in the Pleistocene, probably during the last glacial cycle.

Image: 
Illustration by Gabriel Ugueto

Ancient clues, in the shape of fossils and archaeological evidence of varying quality scattered across Australia, have formed the basis of several hypotheses about the fate of megafauna that vanished about 42,000 years ago from the ancient continent of Sahul, comprising mainland Australia, Tasmania, New Guinea and neighbouring islands.

There is a growing consensus that multiple factors were at play, including climate change, the impact of people on the environment, and access to freshwater sources.

Now, research led by Professor Corey Bradshaw of Flinders University and the Australian Research Council Centre of Excellence for Australian Biodiversity and Heritage (CABAH) has used sophisticated mathematical modelling to assess how susceptible different species were to extinction - and what that means for the survival of creatures today.

Using various characteristics such as body size, weight, lifespan, survival rate, and fertility, they created population simulation models to predict the likelihood of these species surviving under different types of environmental disturbance.

Simulations included everything from increasing droughts to increasing hunting pressure, to see which of 13 extinct megafauna species, as well as 8 comparative species still alive today, had the highest chances of surviving.
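
To make the approach concrete, the sketch below implements a toy stochastic population model in the same spirit (all rates, numbers and the carrying-capacity rule are invented for illustration; the study's models were far richer). It estimates extinction probability under added hunting pressure for a slow-breeding versus a fast-breeding life history.

    import numpy as np

    rng = np.random.default_rng(0)

    def extinction_prob(n0=500, survival=0.8, fertility=0.6, hunt=0.0,
                        years=200, trials=1000):
        """Fraction of simulated populations that die out within `years`."""
        n = np.full(trials, n0)
        for _ in range(years):
            births = rng.binomial(n, fertility)             # offspring this year
            n = rng.binomial(n + births, max(survival - hunt, 0.0))
            n = np.minimum(n, 2 * n0)                       # crude carrying capacity
        return float((n == 0).mean())

    # Slow-breeding "Diprotodon-like" vs fast-breeding "thylacine-like" profiles
    print(extinction_prob(fertility=0.2, survival=0.90, hunt=0.15))  # high risk
    print(extinction_prob(fertility=0.9, survival=0.70, hunt=0.15))  # lower risk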

In the study, published in the journal eLife, Bradshaw and his team compared the results to what we know about the timing of extinction for different megafauna species, derived from dated fossil records. They expected to confirm that the most extinction-prone species were the first to go extinct - but that wasn't necessarily the case.

While they did find that slower-growing species with lower fertility, like the rhino-sized wombat relative Diprotodon, were generally more susceptible to extinction than more-fecund species like the marsupial 'tiger' thylacine, the relative susceptibility rank across species did not match the timing of their extinctions recorded in the fossil record.

"We found no clear relationship between a species' inherent vulnerability to extinction -- such as being slower and heavier and/or slower to reproduce -- and the timing of its extinction in the fossil record", explained Professor Bradshaw.

"In fact, we found that most of the living species used for comparison -- such as short-beaked echidnas, emus, brush turkeys, and common wombats -- were more susceptible on average than their now-extinct counterparts."

The researchers concluded that the true extinction cascade was likely the result of complex, localised scenarios, including impacts of regional climate variation, and different pressures from people across regions.

Associate Professor Vera Weisbecker of Flinders University and co-author of the study said: "The relative speed of different species to escape hunters, as well as whether or not a species dug protective burrows, also likely contributed to the mismatch between extinction susceptibility and timing.

"For example, fast-hopping red kangaroos still alive today might have had an escape advantage over some of the slower-striding short-faced kangaroos that went extinct. Small wombats that dug burrows might also have been more difficult for people to hunt than the bigger, non-burrowing megafauna."

Co-author Dr Frédérik Saltré of Flinders University added: "We determined that the kangaroo species were the least-susceptible to extinction based on their biology, followed by the monotremes (echidnas), and the giant 'wombat' species. Interestingly, the large, flightless birds, like emu and the giant mihirung 'thunderbird' Genyornis, had the highest susceptibilities.

"Our results support the notion that extinction risk can be high across all body sizes depending on a species' particular ecology, meaning that predicting future extinctions from climate change and human impacts isn't always straightforward based on the first principles of biology", concluded Professor Bradshaw.

Credit: 
Flinders University

A new fluorescent probe that can distinguish B cells from T cells

image: A) Splenocytes were isolated from mouse spleens, and T and B cells were separated using magnetic-activated cell sorting (MACS). The cells were subsequently plated in 384-well plates and over 10,000 different fluorescent molecules were screened using DOFLA. B) Fluorescence microscope image shows that CDgB stains B cells but not T cells. C) Flow cytometry graph of CDgB fluorescence vs SSC between T cell and B cell populations.

Image: 
Institute for Basic Science

Human blood contains several different components, including plasma, red blood cells (RBCs), white blood cells (WBCs), and platelets. Among these, WBCs are divided into numerous subcategories each with unique functions and characteristics, such as lymphocytes, monocytes, neutrophils, and others. Lymphocytes are further subdivided into T lymphocytes, B lymphocytes, and NK cells. Distinguishing and separating different types of these cells is highly important in carrying out studies in the field of immunology.

Discriminating different immune cell types is typically done by flow cytometry and fluorescence-activated cell sorting (FACS), which can identify distinct populations of cells according to their size, granularity, and fluorescence. Size and granularity alone cannot distinguish cells with similar physical parameters, but different types of immune cells display distinct combinations of immune receptors on their cell surfaces. For example, T lymphocytes and B lymphocytes express CD3 and CD19, respectively. Therefore, fluorescently identifying immune cells has relied on staining the cells with multiple antibodies against different receptors. It has long been thought impossible to distinguish different immune cell types without using these antibodies.

However, breakthrough research performed by scientists at the Center for Self-assembly and Complexity within the Institute for Basic Science, South Korea, may have just changed this. The researchers employed a diversity-oriented fluorescence library approach (DOFLA) to screen over 10,000 different fluorescent molecules against B and T lymphocytes separated from mouse spleens. From this screen, they discovered a new fluorescent probe that can discriminate B lymphocytes from T lymphocytes without cell-receptor-targeting antibodies.

The researchers called the new probe CDgB, which stands for Compound of Designation green for B lymphocytes. CDgB is a lipophilic molecule that contains a fluorescent component attached to a hydrocarbon chain. Because CDgB contains both a polar fluorescent group and a hydrocarbon tail, free unbound dye molecules form micelle-like aggregates in solution and exhibit a low level of background fluorescence. When they attach to cell surfaces, however, the aggregates dissociate and cause a spike in fluorescence signal. In addition, the lipophilic nature of the dye means that it does not bind to a protein target, instead localizing directly to the lipid membrane. According to the researchers, this was "the first example to report such type of cell distinction mechanism."

CDgB selectively targets the cell membranes of B lymphocytes over those of T lymphocytes or NK cells. The researchers sought to optimize its selectivity by testing derivatives of the molecule with hydrocarbon chain lengths from 4 to 20 carbons. Derivatives with 14 to 18 carbons showed the highest selectivity towards B lymphocytes, with 18 carbons giving the best results, while it became more difficult to distinguish the cells by fluorescence when the chain length was increased beyond 20. The fact that chain length matters for selectivity hinted that the mechanism depends on differences in membrane structure between B and T lymphocytes.

The researchers further elucidated this mechanism by performing lipidomic analysis of B and T cell membranes. Phosphatidylcholine (PC) comprises the majority (> 60%) of the membrane phospholipids of both B and T lymphocytes. It was found that B lymphocytes generally had shorter PCs than T lymphocytes. In addition, the membrane cholesterol content of T lymphocytes was about twice that of B lymphocytes. These factors give B lymphocytes a more 'flexible' cell membrane, which was thought to be a crucial factor explaining why CDgB molecules attach more readily to the membranes of B lymphocytes than to those of T lymphocytes. Even among B lymphocytes, the strength of the fluorescence differed with cell maturity: B cell progenitors and immature B cells gave off much brighter fluorescence signals than mature B cells, most likely due to the higher membrane flexibility of immature cells.

The researchers concluded that this new lipid-oriented live-cell distinction (LOLD) mechanism can supplement existing antibody-based methods to improve our ability to distinguish specific cell types within complicated mixtures of different cells. This research was published in the Journal of the American Chemical Society.

Credit: 
Institute for Basic Science

Snow chaos in Europe caused by melting sea-ice in the Arctic

image: The Finnish Meteorological Institute's observation station used in the study, Pallas National Park, Arctic Finland.

Image: 
Jeff Welker

They are diligently stoking thousands of bonfires on the ground close to their crops, but the French winemakers are fighting a losing battle. An above-average warm spell at the end of March has been followed by days of extreme frost, destroying the vines, with losses of up to 90 percent. The image of the struggle may well be the most depressingly beautiful illustration of the complexities and unpredictability of global climate warming. It is also an agricultural disaster from Bordeaux to Champagne.

It is the loss of Arctic sea-ice due to climate warming that has, somewhat paradoxically, been implicated in severe cold and snowy mid-latitude winters.

"Climate change doesn't always manifest in the most obvious ways. It's easy to extrapolate models to show that winters are getting warmer and to forecast a virtually snow-free future in Europe, but our most recent study shows that is too simplistic. We should beware of making broad sweeping statements about the impacts of climate change," says Professor Alun Hubbard from CAGE Center for Arctic Gas Hydrate, Environment and Climate at UiT The Arctic University of Norway.

Melting Arctic sea ice supplied 88% of the fresh snow

Hubbard is the co-author of a study in Nature Geoscience examining this counter-intuitive climatic paradox: A 50% reduction in Arctic sea-ice cover has increased open-water and winter evaporation to fuel more extreme snowfall further south across Europe.

The study, led by Dr. Hanna Bailey at the University of Oulu, Finland, has more specifically found that the long-term decline of Arctic sea-ice since the late 1970s had a direct connection to one specific weather event: "Beast from the East" - the February snowfall that brought large parts of the European continent to a halt in 2018, causing £1bn a day in losses.

Researchers discovered that atmospheric vapour traveling south from the Arctic carried a unique geochemical fingerprint, revealing that its source was the warm, open-water surface of the Barents Sea, part of the Arctic Ocean between Norway, Russia, and Svalbard. They found that during the "Beast from the East", open-water conditions in the Barents Sea supplied up to 88% of the corresponding fresh snow that fell over Europe.

Climate warming is lifting the lid off the Arctic Ocean

"What we're finding is that sea-ice is effectively a lid on the ocean. And with its long-term reduction across the Arctic, we're seeing increasing amounts of moisture enter the atmosphere during winter, which directly impacts our weather further south, causing extreme heavy snowfalls. It might seem counter-intuitive, but nature is complex and what happens in the Arctic doesn't stay in the Arctic," says Bailey.

When analyzing the long-term trends from 1979 onwards, researchers found that for every square meter of winter sea-ice lost from the Barents Sea, there was a corresponding 70 kg increase in evaporated moisture, feeding the snow that falls over Europe.
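
That scaling invites a quick back-of-envelope calculation; the ice-loss figure below is hypothetical, chosen only to show the arithmetic.

    ice_loss_km2 = 10_000                 # hypothetical winter sea-ice loss, for illustration
    kg_per_m2 = 70                        # reported evaporation increase per square meter lost
    extra_kg = ice_loss_km2 * 1e6 * kg_per_m2
    print(f"{extra_kg:.1e} kg = {extra_kg / 1e12:.1f} Gt of extra moisture")
    # 7.0e+11 kg = 0.7 Gt of extra moisture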

Their findings indicate that within the next 60 years, a predicted ice-free Barents Sea will likely become a significant source of increased winter precipitation - be it rain or snow - for Europe.

"This study illustrates that the abrupt changes now being witnessed across the Arctic really are affecting the entire planet," says Professor Hubbard.

Credit: 
UiT The Arctic University of Norway

Smell you later: Exposure to smells in early infancy can modulate adult behavior

image: Olfactory or smell-based imprinting is known to affect adulthood odor perception and behavior, but how does this happen? Scientists from Japan have now uncovered the molecular mechanism underlying this phenomenon.

Image: 
Hirofumi Nishizumi from University of Fukui

Imprinting is a popularly known phenomenon, wherein certain animals and birds become fixated on sights and smells they encounter immediately after being born. In ducklings, the fixation target can be the first moving object they see, usually the mother duck. In migrating fish like salmon and trout, it is the smells they knew as neonates that guide them back to their home river as adults. How does this happen?

Exposure to environmental input during a critical period early in life is important for forming sensory maps and neural circuits in the brain. In mammals, early exposure to environmental inputs, as in the case of imprinting, is known to affect perception and social behavior later in life. Visual imprinting has been widely studied, but the neurological workings of smell-based or "olfactory" imprinting remain a mystery.

To find out more, scientists from Japan, including Drs. Nobuko Inoue, Hirofumi Nishizumi, and Hitoshi Sakano at the University of Fukui and Drs. Kazutaka Mogi and Takefumi Kikusui at Azabu University, worked on understanding the mechanism of olfactory imprinting during the critical period in mice. Their study, published in eLife, offers fascinating results. "We discovered three molecules involved in this process," reports Dr. Nishizumi: "Semaphorin 7A (Sema7A), a signaling molecule produced in olfactory sensory neurons; Plexin C1 (PlxnC1), a receptor for Sema7A expressed in the dendrites of mitral/tufted cells; and oxytocin, a brain peptide known as the love hormone."

During the critical period, when a newborn mouse pup is exposed to an odor, the signaling molecule Sema7A initiates the imprinting response to the odor by interacting with the receptor PlxnC1. As this receptor is localized to the dendrites only in the first week after birth, it sets the narrow time window of the critical period. The hormone oxytocin, released in nursed infants, imparts a positive quality to the odor memory.

It was previously known that male mice normally show strong curiosity toward unfamiliar mouse scents of both sexes. Blocking Sema7A signaling during the critical period results in mice that do not respond in their usual manner; instead, they display an avoidance response to the stranger mice.

An interesting result of this study is the conflicting response to aversive odors. Say a pup is exposed to an innately aversive odor during the critical period; this imprinted odor will later induce a positive response that overrides the innate response to the odor. Here, the hard-wired innate circuit and the imprinted memory circuit are in competition, and the imprinting circuit wins. To resolve this conflict, the brain must have a detailed mechanism for crosstalk between the positive and negative responses, which opens a variety of research questions in the human context.

So, what do these results say about the human brain?

First, the results of the study open many research questions about the functioning of the human brain and behavior. Can a critical period like that of the mouse olfactory system be found in humans, possibly in other sensory systems? And just as the mouse brain chooses imprinted memory over innate response, do humans follow similar decision-making processes?

Secondly, this study also suggests that improper sensory inputs may cause neurodevelopmental disorders, such as autism spectrum disorder (ASD) and attachment disorder (AD). Oxytocin is widely used for treating ASD symptoms in adults. However, Dr. Nishizumi says, "our study indicates that oxytocin treatment in early neonates is more effective than treatment after the critical period in improving the impairment of social behavior. Thus, oxytocin treatment of infants will be helpful in preventing ASD and AD, which may open a new therapeutic procedure for neurodevelopmental disorders."

This study adds valuable new insights to our understanding of decision making and the competition between innate and learned responses in humans, and reveals new research paths in the neuroscience of all types of imprinting.

Credit: 
University of Fukui

Reducing ocean acidification by removing CO2: Two targets for cutting-edge research

image: Changes in the mean surface pH field at mid-century (2046-2050) with respect to present-day conditions, for the RCP4.5 baseline scenario (no alkalinization) and the CTS009 scenario (with alkalinization).

Image: 
Butenschön et al., 2021

Is it possible to simultaneously address the increase of the concentration of carbon dioxide (CO2) in the atmosphere and the resulting acidification of the oceans? The research of the project DESARC-MARESANUS, a collaboration between the Politecnico di Milano and the CMCC Euro-Mediterranean Center on Climate Change Foundation, explores the feasibility of this process, its chemical and environmental balance, and the benefits for the marine sector, focusing on the Mediterranean basin.

It is now widely recognized that in order to reach the target of limiting global warming to well below 2°C above pre-industrial levels (the objective of the Paris Agreement), cutting carbon emissions, even at an unprecedented pace, will not be sufficient; active Carbon Dioxide Removal (CDR) strategies will also need to be developed and implemented. Among the CDR strategies that currently exist, relatively few studies have assessed the mitigation capacity of ocean-based Negative Emission Technologies (NET) and the feasibility of their implementation at larger scale in support of efficient CDR strategies.

The ocean plays a particular role in the climate system, acting as a significant sink of atmospheric heat and CO2. This has caused the additional hazard of ocean acidification: a reduction in the pH of ocean seawater since the pre-industrial period that is unprecedented in the last 65 million years. Acidification has significant implications for marine organisms, affecting their metabolic regulation and their capability to form calcium carbonate, destabilizing ecosystems and ultimately threatening vital ecosystem services.

Among the ocean-based NETs, artificial ocean alkalinization via the dissolution of Ca(OH)2, known in short as ocean liming, has attracted attention due to its capability to simultaneously address two issues: global warming, by increasing the ocean's uptake of CO2, and ocean acidification.

A new study recently published in Frontiers in Climate explores the case of ocean alkalinization in detail.
The research, carried out by the Euro-Mediterranean Center on Climate Change Foundation (CMCC) and the Politecnico di Milano within the Desarc-Maresanus project, with the financial support of Amundi and the collaboration of CO2APPS, presents an analysis of marine alkalinization applied to the Mediterranean Sea, taking into consideration the regional characteristics of the basin.

Researchers used a set of simulations of alkalinization based on current shipping routes to quantitatively assess the alkalinization efficiency via a coupled physical-biogeochemical high-resolution model (NEMO-BFM) for the Mediterranean Sea (1/16° horizontal resolution, i.e. ~6 km) under an RCP4.5 scenario over the next decades.
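
For orientation, the quoted grid spacing can be sanity-checked in a couple of lines (one degree of latitude spans roughly 111 km; zonal spacing shrinks with the cosine of latitude, evaluated here at a mid-Mediterranean 38°N):

    import math
    cell = 1 / 16                                         # grid spacing in degrees
    print(cell * 111.0)                                   # ~6.9 km meridional
    print(cell * 111.0 * math.cos(math.radians(38.0)))    # ~5.5 km zonal at 38 N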

The alkalinization strategies applied in this study to the Mediterranean Sea illustrate the potential of ocean alkalinization to mitigate climate change by increasing the air-sea flux of CO2 across the basin and counteracting acidification. In contrast to previous studies, the analyzed scenarios offer a clear pathway to practical implementation, being based on realistic levels of lime discharge using the current network of cargo and tanker shipping routes across the Mediterranean Sea.

The simulations used in the study suggest the potential of nearly doubling the carbon-dioxide uptake rate of the Mediterranean Sea after 30 years of alkalinization, and of neutralizing the mean surface acidification trend of the baseline scenario without alkalinization over the same time span.

A more recent paper from the project, just published, provides an estimate of the potential of maritime transport for ocean liming and atmospheric CO2 removal, highlighting the very high potential discharge of slaked lime into the sea using the existing global commercial fleet of bulk carriers and container ships. For some closed basins, such as the Mediterranean Sea, where traffic density is relatively high, the potential of ocean alkalinization, even at low discharge rates, is far higher than what is needed to counteract ocean acidification. The results of this study therefore highlight, on the one hand, the need for further research to more precisely assess the technical aspects of this approach and its potential criticalities, and, on the other, the potential of a regional implementation of ocean liming in the Mediterranean Sea based on the existing network of tankers and cargo ships.

"These two publications provide a key contribution to the international and national scientific and technical communities working to find solutions to these two issues - atmospheric CO2 removal and counteracting ocean acidification - which we will have to tackle in the future. Even if further investigations are needed, these results are encouraging," states Stefano Caserini, Professor of Mitigation of Climate Change at Politecnico di Milano and project leader of Desarc-Maresanus.

"In these works, the idea of ocean alkalinisation as a mitigation strategy for climate change is for the first time assessed on the basis of a technically feasible pathway of implementation, providing a first step towards a real-world application. In addition, even if the full ecological consequences of this strategy still require additional research, a solution is indicated that may stabilise the acidity of the seawater, counteracting acidification without risking dramatic alterations of the seawater chemistry in the opposite direction, which as of today would have largely unknown consequences," states the main author of the first article, Momme Butenschön, Lead Scientist of the Research Unit on Earth System Modelling at the CMCC Foundation Euro-Mediterranean Center on Climate Change (CMCC).

Read the full papers published in Frontiers in Climate:

Butenschön M., Lovato T., Masina S., Caserini S., Grosso M. (2021), Alkalinization Scenarios in the Mediterranean Sea for Efficient Removal of Atmospheric CO2 and the Mitigation of Ocean Acidification. Frontiers in Climate – Negative Emission Technologies, volume 3, 11 pages, DOI: 10.3389/fclim.2021.614537

Caserini S., Pagano D., Campo F., Abbá A., De Marco S., Righi D., Renforth P., Grosso M. (2021) Potential of maritime transport for ocean liming and atmospheric CO2 removal. Frontiers in Climate – Negative Emission Technologies. 3:575900. https://doi.org/10.3389/fclim.2021.575900

Website: www.desarc-maresanus.net

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

The role of hydrophobic molecules in catalytic reactions

Electrochemical processes could be used to convert CO2 into useful starting materials for industry. To optimise the processes, chemists are attempting to calculate in detail the energy costs caused by the various reaction partners and steps. Researchers from Ruhr-Universität Bochum (RUB) and Sorbonne Université in Paris have discovered how small hydrophobic molecules, such as CO2, contribute to the energy costs of such reactions by analysing how the molecules interact in water at the interface. The team describes the results in the journal Proceedings of the National Academy of Sciences, PNAS for short, published online on 13 April 2021.

To conduct the work, Dr. Alessandra Serva and Professor Mathieu Salanne from Laboratoire PHENIX at Sorbonne Université collaborated with Professor Martina Havenith and Dr. Simone Pezzotti from the Bochum Chair of Physical Chemistry II.

Crucial role for small hydrophobic molecules

In many electrochemical processes, small hydrophobic molecules react at catalyst surfaces that often consist of precious metals. Such reactions often take place in aqueous solution, where the water molecules form what are known as hydration shells, accumulating around the dissolved molecules. The water surrounding polar, i.e. hydrophilic, molecules behaves differently from the water surrounding non-polar molecules, which are also referred to as hydrophobic. The Franco-German research team was interested in this hydrophobic hydration.

Using molecular dynamics simulations, the researchers analysed the hydrophobic hydration of small molecules such as carbon dioxide (CO2) or nitrogen (N2) at the interface between gold and water. They showed that the interaction of water molecules in the vicinity of small hydrophobic molecules makes a crucial contribution to the energy costs of electrochemical reactions.

Model for calculating energy costs expanded

The researchers implemented these findings in the Lum-Chandler-Weeks theory, which allows the energy required to form water networks to be calculated. "In the previous model, the energy costs for hydrophobic hydration were calculated for the bulk. This model has now been expanded to hydrophobic molecules near interfaces, a case that was not covered before," explains Martina Havenith, Speaker of the Ruhr Explores Solvation Cluster of Excellence, RESOLV for short, at RUB. The adapted model now allows the energy costs of hydrophobic hydration at the gold-water interface to be calculated based on the size of the hydrophobic molecules. "Due to the water contribution, the size of the molecules plays an important role in the chemical reactions at these interfaces," says Dr. Simone Pezzotti from the Bochum Chair of Physical Chemistry II.
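
For orientation, the textbook form of the Lum-Chandler-Weeks picture can be sketched as follows (this is the standard bulk result, not the interfacial extension developed in the paper): the hydration free energy of a hydrophobic solute crosses over from volume scaling for small solutes to surface-area scaling for large ones,

    \Delta G_{\mathrm{hyd}} \sim
    \begin{cases}
      c\,V, & R \lesssim 1\,\mathrm{nm} \quad \text{(small solutes: the water network reorganizes around the solute)} \\
      \gamma\,A, & R \gtrsim 1\,\mathrm{nm} \quad \text{(large solutes: a soft interface must form; $\gamma$ is the surface tension)}
    \end{cases}

which is consistent with the size dependence the team reports near electrode surfaces.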

For instance, the model predicts that small hydrophobic molecules would tend to accumulate at the interface based on the interactions with the water, while larger molecules would remain further away in the solution.

Credit: 
Ruhr-University Bochum

New assay detects marker of metastatic cancers, infection, trauma and neurological disease

Scientists at the Walter Reed Army Institute of Research (WRAIR) demonstrated the potential of a novel blood test for cathepsin B, a well-studied protein important to brain development and function, as an indicator for a range of disease states.

Cathepsin B plays an important role in the body, regulating metabolism, immune responses, the degradation of improperly produced proteins and other functions. Under certain conditions, such as metastatic cancers, infections, trauma and neurological disease, cathepsin B production is upregulated. Recent research published by WRAIR researchers highlighted the potential of cathepsin B as an indicator, or biomarker, of the severity of traumatic brain injury (TBI).

In this study, published in ACS Omega, researchers demonstrated an ultrasensitive assay to measure cathepsin B in blood. While high levels of cathepsin B are readily detectable in aspirates, biopsies and cerebrospinal fluid, a blood test is particularly desirable due to its ease of use with little risk to the patient.

"Although cathepsin B can be abundant in some tissues, accurate measurement in blood has been a challenge, especially if changes are expected to be small or the sample is limited," said Dr. Bharani Thangavelu, lead author on the paper and researcher at WRAIR's Brain Trauma Neuroprotection (BTN) Branch. "Our strategy uses an ultrasensitive technique to improve cathepsin B detection from small volumes of blood with little to no noise or impact from interfering substances."

Biomarkers are a source of great interest to researchers due to their potential to dramatically improve both the diagnosis and categorization of disease. Furthermore, they have the potential to validate treatment strategies by indicating whether drugs have reached their proposed targets and achieved therapeutic benefits.

Researchers plan to continue developing and testing the assay, ultimately aiming to develop it into a small, portable diagnostic tool.

"Biomarker tests that accurately reflect the extent and severity of injury can dramatically improve the standard of care, minimizing the need for resource-intensive diagnostics," said Dr. Angela M. Boutté, author and section chief of molecular biology and proteomics within BTN. "This would allow for early, accurate detection as well as monitoring of injury or disease. This aspect is particularly important for assessment of TBI on the battlefield to help guide medical decisions."

Credit: 
Walter Reed Army Institute of Research

No batteries? No sweat! Wearable biofuel cells now produce electricity from lactate

video: Wearable biofuel cell array that generates electric power from the lactate in the wearer's sweat, opening doors to electronic health monitoring powered by nothing but bodily fluids.

Image: 
Tokyo University of Science

It cannot be denied that, over the past few decades, the miniaturization of electronic devices has taken huge strides. Today, after pocket-size smartphones that could put old desktop computers to shame and a plethora of options for wireless connectivity, there is a particular type of device whose development has been steadily advancing: wearable biosensors. These tiny devices are generally meant to be worn directly on the skin in order to measure specific biosignals and, by sending measurements wirelessly to smartphones or computers, keep track of the user's health.

Although materials scientists have developed many types of flexible circuits and electrodes for wearable devices, it has been challenging to find an appropriate power source for wearable biosensors. Traditional button batteries, like those used in wrist watches and pocket calculators, are too thick and bulky, whereas thinner batteries would pose capacity and even safety issues. But what if we were the power sources of wearable devices ourselves?

A team of scientists led by Associate Professor Isao Shitanda from Tokyo University of Science, Japan, is exploring efficient ways of using sweat as the sole source of power for wearable electronics. In their most recent study, published in the Journal of Power Sources, they present a novel design for a biofuel cell array that uses a chemical in sweat, lactate, to generate enough power to drive a biosensor and wireless communication devices for a short time. The study was carried out in collaboration with Dr. Seiya Tsujimura from the University of Tsukuba, Dr. Tsutomu Mikawa from RIKEN, and Dr. Hiroyuki Matsui from Yamagata University, all in Japan.

Their new biofuel cell array looks like a paper bandage that can be worn, for example, on the arm or forearm. It essentially consists of a water-repellent paper substrate onto which multiple biofuel cells are laid out in series and in parallel; the number of cells depends on the output voltage and power required. In each cell, electrochemical reactions between lactate and an enzyme present in the electrodes produce an electric current, which flows to a general current collector made from a conducting carbon paste.
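
The series/parallel arithmetic behind that layout is simple, as the sketch below shows; the per-cell voltage and current are hypothetical placeholders, since the release does not quote per-cell figures.

    # Hedged sketch of cell-array sizing (per-cell values are invented)
    per_cell_v = 0.3                      # hypothetical voltage of one lactate cell, in V
    per_cell_ma = 0.5                     # hypothetical current from one cell, in mA
    n_series, n_parallel = 12, 3
    array_v = n_series * per_cell_v       # cells in series add voltage
    array_ma = n_parallel * per_cell_ma   # parallel strings add current
    print(f"{array_v:.2f} V, {array_v * array_ma:.2f} mW")   # V x mA = mW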

This is not the first lactate-based biofuel cell, but some key differences make this novel design stand out. One is the fact that the entire device can be fabricated via screen printing, a technique well suited to cost-effective mass production. This was made possible by the careful selection of materials and an ingenious layout. For example, whereas similar previous cells used silver wires as conducting paths, the present biofuel cells employ porous carbon ink. Another advantage is the way in which lactate is delivered to the cells: paper layers collect sweat and transport it to all cells simultaneously through the capillary effect - the same effect by which water quickly travels through a napkin when it comes into contact with a water puddle.

These advantages make the biofuel cell arrays exhibit an unprecedented ability to deliver power to electronic circuits, as Dr. Shitanda remarks: "In our experiments, our paper-based biofuel cells could generate a voltage of 3.66 V and an output power of 4.3 mW. To the best of our knowledge, this power is significantly higher than that of previously reported lactate biofuel cells." To demonstrate their applicability for wearable biosensors and general electronic devices, the team fabricated a self-driven lactate biosensor that could not only power itself using lactate and measure the lactate concentration in sweat, but also communicate the measured values in real-time to a smartphone via a low-power Bluetooth device.

As explained in a previous study also led by Dr. Shitanda, lactate is an important biomarker that reflects the intensity of physical exercise in real-time, which is relevant in the training of athletes and rehabilitation patients. However, the proposed biofuel cell arrays can power not only wearable lactate biosensors, but also other types of wearable electronics. "We managed to drive a commercially available activity meter for 1.5 hours using one drop of artificial sweat and our biofuel cells," explains Dr. Shitanda, "and we expect they should be capable of powering all sorts of devices, such as smart watches and other commonplace portable gadgets."

Hopefully, with further developments in wearable biofuel cells, powering portable electronics and biosensors will be no sweat!

Credit: 
Tokyo University of Science

Researchers streamline molecular assembly line to design, test drug compounds

Researchers from North Carolina State University have found a way to fine-tune the molecular assembly line that creates antibiotics via engineered biosynthesis. The work could allow scientists to improve existing antibiotics as well as design new drug candidates quickly and efficiently.

Bacteria - such as E. coli - harness biosynthesis to create molecules that are difficult to make artificially.

"We already use bacteria to make a number of drugs for us," says Edward Kalkreuter, former graduate student at NC State and lead author of a paper describing the research. "But we also want to make alterations to these compounds; for example, there's a lot of drug resistance to erythromycin. Being able to make molecules with similar activity but improved efficacy against resistance is the general goal."

Picture an automobile assembly line: each stop along the line features a robot that chooses a particular piece of the car and adds it to the whole. Now substitute erythromycin for the car, and an acyltransferase (AT) - an enzyme - as the robot at the stations along the assembly line. Each AT "robot" will select a chemical block, or extender unit, to add to the molecule. At each station the AT robot has 430 amino acids, or residues, which help it select which extender unit to add.

"Different types of extender units impact the activity of the molecule," says Gavin Williams, professor of chemistry, LORD Corporation Distinguished Scholar at NC State and corresponding author of the research. "Identifying the residues that affect extender unit selection is one way to create molecules with the activity we want."

The team used molecular dynamics simulations to examine AT residues and identified 10 residues that significantly affect extender unit selection. They then performed mass spectrometry and in vitro testing on AT enzymes with these residues changed, in order to confirm that their activity had also changed. The results supported the simulations' predictions.
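
As a generic illustration of this kind of analysis (not the team's actual pipeline), the sketch below ranks residues from stand-in trajectory data by combining how often each residue contacts the extender unit with how much that contact distance fluctuates over time.

    import numpy as np

    rng = np.random.default_rng(1)
    n_frames, n_residues = 5000, 430                    # 430 residues, as in the release
    dist = rng.gamma(2.0, 3.0, (n_frames, n_residues))  # stand-in residue-to-ligand distances, in angstroms

    occupancy = (dist < 4.5).mean(axis=0)               # fraction of frames within a common contact cutoff
    flexibility = dist.std(axis=0)                      # how much each residue's distance fluctuates
    score = occupancy * flexibility                     # toy "selection-relevance" score
    print("candidate residues:", np.argsort(score)[::-1][:10])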

"These simulations predict what parts of the enzyme we can change by showing how the enzyme moves over time," says Kalkreuter. "Generally, people look at static, nonmoving structures of enzymes. That makes it hard to predict what they do, because enzymes aren't static in nature. Prior to this work, very few residues were thought or known to affect extender unit selection."

Williams adds that manipulating residues allows for much greater precision in reprogramming the biosynthetic assembly line.

"Previously, researchers who wanted to change an antibiotic's structure would simply swap out the entire AT enzyme," Williams says. "That's the equivalent of removing an entire robot from the assembly line. By focusing on the residues, we're merely replacing the fingers on that arm - like reprogramming a workstation rather than removing it. It allows for much greater precision.

"Using these computational simulations to figure out which residues to replace is another tool in the toolbox for researchers who use bacteria to biosynthesize drugs."

Credit: 
North Carolina State University

What's in it for us: added value-based approach towards telehealth

image: Prof Asta Pundziene, KTU School of Economics and Business

Image: 
KTU

After interviewing various stakeholders from public and private healthcare systems in Lithuania and the US, researchers Dr Agne Gadeikiene, Prof Asta Pundziene and Dr Aiste Dovaliene from Kaunas University of Technology (KTU), Lithuania, designed a detailed structure revealing the added value of remote healthcare services, i.e. telehealth. Adapting the concept of value co-creation, common in business research, to healthcare, the scientists claim that this is the first comprehensive analysis of its kind involving two different healthcare systems.

According to the researchers, although phone consultations with physicians have been available in the US for more than fifty years, the technological development of recent years has radically changed the concept of telehealth. Artificial intelligence for automated diagnostics, big-data analytics, virtual visits, real-time communication with data exchange, 5G, and blockchain can make a significant impact on the prediction and prevention of various health conditions, and on the personalisation of healthcare services.

"Despite the increased usage of telehealth during the Covid-19 pandemic, these services are still limited. Without any doubt, telehealth usage will increase in the future, as it is considered a very important healthcare service trend. However, partly due to the inertia of the healthcare system, which is slow to adopt technology innovations, telehealth development is insufficient," says Dr Agne Gadeikiene, a researcher at KTU School of Economics and Business.

According to her, acknowledging the added value of telehealth would allow more precise value propositions to be developed for its different stakeholders - patients, physicians, healthcare providers, and the government.

During their research, the scientists interviewed healthcare institution managers, heads of telehealth programmes and physicians from different clinical areas where telehealth is applied, e.g. radiology, endocrinology, paediatrics and family medicine. In total, 15 interviews were conducted in Lithuania and 10 in California, USA.

"There is clear evidence that telehealth enhances patient and physician experience. It saves time as there is no need to travel, you have a shorter waiting time to access the physician, timely help at a better price, and more. The issue is with measuring not only the clinical outcomes of telehealth but also patient and doctor experience, and one of the major issues is exactly what we explore - the added value created by telehealth," Prof Asta Pundziene says.

The research revealed that the added values most commonly expressed by healthcare system specialists in both the US and Lithuania were accessibility of care, convenience, timely diagnosis, cost-saving, warmer and closer relationships between patient and physician, and focus on the patient. Remote patient monitoring and monitoring after disease were indicated as added values of telehealth only among US health service system practitioners, while patient engagement in controlling their illness, better quality of service, convenience, learning and avoidance of mistakes, and a second opinion from colleagues were among the added-value dimensions indicated by healthcare system representatives in Lithuania.

"The USA can be characterised as an innovative culture, and Lithuania as a conservative one. In the United States, the healthcare system is more collaborative in nature, and private insurance companies and healthcare providers search for ways to improve, save resources and better satisfy their customers. However, research participants from both Lithuania and the USA acknowledge the added value of telehealth, and it is only a matter of time until it becomes common practice in both countries," says Dr Gadeikiene.

Among the inhibitors of telehealth's expansion, the researchers point to organisational-level obstacles such as an insufficient managerial shift towards remote patient care and a lack of digital training in medical schools; adopting digital healthcare technologies also incurs additional costs. There are obstacles at the individual level as well, including a lack of competence in using digital technologies on both the patient's and the physician's side.

Another cluster of obstacles concerns reimbursement, security and liability issues. It is very difficult to bill for telehealth services, especially when they are administered at the patient's home, which is not affiliated with the hospital. The security of connections between different patient databases, systems, platforms and information exchanges, and the trust issues between patient and physician, where the latter has to rely on information provided by the patient, further complicate the advance of telehealth.

"To fully realise the potential of telehealth, the deployment and transformation part is missing. This entails orchestrating the resources, establishing new processes, capturing the value of telehealth, and transforming institutions to fit the need for new strategies, resources, processes, and more. To do this, strong leadership is needed. The case of the US demonstrates that they are more advanced due to stronger leadership in deploying telehealth services," Prof Pundziene is convinced.

The scientists argue that their research is especially important for healthcare policymakers and healthcare providers' executives who seek to understand clearly which aspects should be emphasised to patients, physicians and health insurance companies when promoting telehealth as a new way of achieving value-based healthcare. The research can also be very useful for telehealth platform developers, as it provides a detailed structure of telehealth's added value that could be applied to telehealth platform architecture. Additionally, the results could be valuable for public policymakers as a source of information to justify the need for telehealth services and to promote their benefits. In this mission, the intermediary function could be passed on to the governmental organisations responsible for legal regulations.

The scientists emphasise that a strategic approach to telehealth helps ensure that investments in the development of these services pay off, as their outcomes are substantial. However, they acknowledge that this should not come at the expense of traditional service development, which is of no less importance.

Credit: 
Kaunas University of Technology

Shift in diet allowed gray wolves to survive ice-age extinction

image: Gray wolves take down a horse on the mammoth-steppe habitat of Beringia during the late Pleistocene (around 25,000 years ago).

Image: 
Julius Csotonyi

April 12, 2021 - Gray wolves are among the largest predators to have survived the extinction at the end of the last ice age around 11,700 years ago. Today, they can be found roaming Yukon's boreal forest and tundra, with caribou and moose as their main sources of food.

A new study led by the Canadian Museum of Nature shows that wolves may have survived by adapting their diet over thousands of years, from a primary reliance on horses during the Pleistocene to caribou and moose today. The results are published in the journal Palaeogeography, Palaeoclimatology, Palaeoecology.

The research team, led by museum palaeontologist Dr. Danielle Fraser and student Zoe Landry, analysed evidence preserved in teeth and bones from skulls of both ancient (50,000 to 26,000 years ago) and modern gray wolves. All the specimens were collected in Yukon, a region that once supported the Beringia mammoth-steppe ecosystem, and are curated in the museum's national collections as well as those of the Yukon government.

"We can study the change in diet by examining wear patterns on the teeth and chemical traces in the wolf bones," says Landry, the lead author who completed the work as a Carleton University student under Fraser's supervision. "These can tell us a lot about how the animal ate, and what the animal was eating throughout its life, up until about a few weeks before it died."

Landry and Fraser relied on established models that can determine an animal's eating behaviour by examining microscopic wear patterns on its teeth. Scratch marks indicate the wolf would have been consuming flesh, while the presence of pits would suggest chewing and gnawing on bones, likely as a scavenger.
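For intuition only, that microwear logic can be sketched as a toy rule in Python: count the scratches and pits on a tooth surface and label the dominant feeding behaviour. The counts and the 0.5 cutoff below are invented placeholders, not the study's actual model.

# Toy sketch of the microwear logic: mostly scratches suggests flesh
# consumption (hunting); mostly pits suggests bone gnawing (scavenging).
# Counts and cutoff are illustrative placeholders, not the study's model.
def classify_wear(n_scratches: int, n_pits: int) -> str:
    if n_scratches == n_pits == 0:
        return "no wear recorded"
    pit_fraction = n_pits / (n_scratches + n_pits)
    return ("flesh-dominated (predator)" if pit_fraction < 0.5
            else "bone-dominated (scavenger)")

print(classify_wear(n_scratches=42, n_pits=11))  # flesh-dominated (predator)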

Analysis showed that scratch marks prevailed in both the ancient and modern wolf teeth, meaning that the wolves continued to survive as primary predators, hunting their prey.

What then were the gray wolves eating? The modern diet - caribou and moose - is well established. The diet of the ancient wolves was assessed by looking at the ratios of carbon and nitrogen isotopes extracted from collagen in the bones. Relative levels of the isotopes can be compared with established indicators for specific species. "The axiom 'you are what you eat' comes into play here," says Landry.
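To see how such isotope ratios can be turned into diet proportions, here is a minimal sketch of a two-isotope, three-source mixing model in Python. Every delta value below is a hypothetical placeholder, not one of the study's measurements, and real analyses involve trophic corrections and uncertainty estimates.

import numpy as np

# Hypothetical collagen signatures (d13C, d15N, per mil) for prey endmembers.
prey = {
    "horse":   (-21.0, 4.0),
    "caribou": (-19.0, 2.0),
    "mammoth": (-22.0, 8.0),
}

# Hypothetical wolf collagen signature (assumed already trophic-corrected).
wolf = (-20.8, 4.3)

# Mass balance: sum_i p_i * source_i = wolf for both isotopes, and sum_i p_i = 1.
names = list(prey)
A = np.array([[prey[n][0] for n in names],
              [prey[n][1] for n in names],
              [1.0] * len(names)])
b = np.array([wolf[0], wolf[1], 1.0])
p = np.linalg.solve(A, b)  # exactly determined: 3 equations, 3 unknowns

for name, frac in zip(names, p):
    print(f"{name}: {frac:.0%} of diet")  # hypothetical proportions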

Results showed that horses, which went extinct during the Pleistocene, accounted for about half of the gray wolf diet. About 15% came from caribou and Dall's sheep, with some mammoth mixed in. All this occurred at a time when the ancient wolves co-existed with other large predators such as scimitar cats and short-faced bears. The eventual extinction of these predators could have created more opportunity for the wolves to transition to new prey species.

"This is really a story of ice age survival and adaptation, and the building up of a species towards the modern form in terms of ecological adaptation," notes Dr. Grant Zazula, study co-author, and Government of Yukon paleontologist who is an expert on the ice-age animals that populated Beringia.

The findings have implications for conservation today. "The gray wolves showed flexibility in adapting to a changing climate and a shift in habitat from a steppe ecosystem to boreal forest," explains Fraser. "And their survival is closely linked to the survival of prey species that they are able to eat."

Given the reliance of modern gray wolves on caribou, the study's authors suggest that the preservation of caribou populations will be an important factor in maintaining a healthy wolf population.

Credit: 
Canadian Museum of Nature

A novel, quick, and easy system for genetic analysis of SARS-CoV-2

image: A total of nine gene fragments covering the full-length SARS-CoV-2 genome and a linker fragment were amplified. Because all the fragments were designed to include ends that overlapped with the adjacent fragments, they could be assembled into a circular viral genome by an additional PCR step. Recombinant SARS-CoV-2 was rescued by transfecting the circular viral genome into susceptible cells. Cytopathic effects were observed only in cells transfected with CPER products.

Image: 
Osaka University

Osaka, Japan - SARS-CoV-2 is the virus responsible for the COVID-19 pandemic. We know that mutations in the genome of SARS-CoV-2 have occurred and spread, but what effect do those mutations have? Current methods for studying mutations in the SARS-CoV-2 genome are complicated and time-consuming because coronaviruses have large genomes, but now a team from Osaka University and Hokkaido University has developed a quick, PCR-based reverse genetics system for analyzing SARS-CoV-2 mutations.

This system uses the polymerase chain reaction (PCR) and a circular polymerase extension reaction (CPER) to reconstruct the full-length cDNA of the viral genome. The process does not involve the use of bacteria, which can introduce further unwanted mutations, and takes only two weeks of simple steps to generate infectious virus particles. Previous methods took a couple of months and involved very complicated procedures.

"This method allows us to quickly examine the biological features of mutations in the SARS-CoV-2," says lead author of the study Shiho Torii. "We can use the CPER technique to create recombinant viruses with each mutation and examine their biological features in comparison with the parental virus." The large circular genome of SARS-CoV-2 can be constructed from smaller DNA fragments that can then be made into a viable viral genome with CPER, and used to infect suitable host cells. A large amount of infectious virus particles can be recovered nine days later.

"We believe that our CPER method will contribute to the understanding of the mechanisms underlying propagation and pathogenesis of SARS-CoV-2, as well as help determine the biological significance of emerging mutations," explains corresponding author Yoshiharu Matsuura. "This could accelerate the development of novel therapeutics and preventative measures for COVID-19." The team also suggest that the use of the CPER method will allow researchers to insert "reporter genes" into the SARS-CoV-2 genome to "tag" genes or proteins of interest. This will enable a greater understanding of how SARS-CoV-2 infects cells and causes COVID-19, assisting with the development of therapies. The CPER method could even allow a recombinant virus that is unable to cause disease to be generated, which could be used as a safe and effective vaccine for SARS-CoV-2.

Mutations are constantly arising in the SARS-CoV-2 population, raising questions about what those mutations do and whether they could affect the efficacy of vaccines. "Our simple and rapid method allows scientists around the globe to characterize the mutants, which is a vital step forward in our fight against SARS-CoV-2," says Takasuke Fukuhara of the research group.

Credit: 
Osaka University