Culture

Roof-tiles in imperial China: Creating Ximing Temple's lotus-pattern tile ends

image: Basic information about tile ends and imbrices. The figure shows the structure of a tile end and how tile ends and imbrices are used.

Image: 
Kanazawa University

Kanazawa, Japan -- Any visitor to China will have noticed the spectacular roofs on buildings dating from imperial times. However, the question of how these roof tiles were produced has attracted relatively little attention from archaeologists. Now, a team of researchers has conducted a major study of tile ends unearthed at the Ximing Temple in Xi'an, yielding exciting insights into their production.

In a study published in Archaeological Research in Asia, researchers from Kanazawa University and the Chinese Academy of Social Sciences have revealed the significance of minute variations in the tile ends used in the roof of the famous Ximing Temple in Xi'an, built during the Tang dynasty (618-907 AD) when Xi'an (then known as Chang'an) was the imperial capital.

The researchers conducted an investigation of 449 tile ends with lotus patterns from various periods during the Tang dynasty that had been recovered from the Ximing Temple. "We were interested in the variations in the tile ends, both those within the conscious control of the artisans who made the tiles, such as whether to use simple or complex lotus patterns, and those outside their control, such as the marks left by the deterioration of the molds used to make the tiles," says lead author of the study Meng Lyu.

"We discovered that the degree of minor variation in the tile ends increases significantly in the later samples," adds author Guoqiang Gong. "This suggests to us that there was a shift away from the centralized manufacturing of imperial building materials during the Early Tang period toward one in which small private artisans played an important role in the Late Tang period."

Intriguingly, the study has revealed traces of the coming together of two distinct cultural traditions. "We found that there were, in fact, two separate production systems at work to make the tile ends," notes author Chunlin Li. "One produced tile ends with compound petal patterns and curved incisions, whereas the other made tile ends with simple petal patterns and scratched incisions." These two styles may ultimately have their origins in an earlier historical period, when the Northern Wei dynasty was divided into two regimes on either side of the Taihang mountain range.

This study demonstrates that studying the roof tiles of China's grand imperial buildings can reveal a great deal about the circumstances of their production and yield insights into larger historical questions.

Credit: 
Kanazawa University

The bald truth - altered cell divisions cause hair thinning

image: Young HFSCs undergo symmetric cell divisions (SCDs) and asymmetric cell divisions (ACDs) to generate new bulge cells for their self-renewal and expansion. Aged HFSCs, by contrast, develop hemidesmosomal instability (including destabilization of COL17A1) and undergo stress-response (SR) type ACDs that induce epidermal differentiation and trigger their delamination, thereby causing stepwise miniaturization of HFs and hair thinning and loss.

Image: 
Department of Stem Cell Biology, TMDU

Researchers from Tokyo Medical and Dental University (TMDU) identify a novel mechanism underlying hair thinning and loss during aging

Tokyo, Japan - Hair grows from stem cells residing in hair follicles. During aging, the capability of hair follicles to grow hair is successively lost, leading to hair thinning and ultimately hair loss. In a new study, researchers from Tokyo Medical and Dental University (TMDU) and the University of Tokyo identified a novel mechanism by which hair follicles lose their regenerative capabilities.

Hair follicles are mini-organs from which new hair constantly grows. The basis for new hair growth is the proper function of hair follicle stem cells (HFSCs). HFSCs undergo cyclic symmetric and asymmetric cell divisions (SCDs and ACDs). SCDs generate two identical cells that go on to have the same fate, while ACDs generate a differentiating cell and a self-renewing stem cell. The combination ensures that the stem cell population continues to exist, yet how these divisions contribute to the loss of HFSC function during aging is not yet completely understood.

"For proper tissue function, symmetric and asymmetric cell divisions have to be in balance," says corresponding author of the study Emi Nishimura. "Once stem cells preferentially undergo one type of division or, worse yet, deviate from the typical process of either type of cell division, the organ suffers. In this study, we wanted to understand how stem cell division plays into hair growth during aging."

To achieve their goal, the researchers investigated stem cell division in HFSCs in young and aged mice by employing two different types of assays: Cell fate tracing and cell division axis analyses. In the former, HFSCs were marked with a fluorescent protein so they could be followed over time, while in the latter the angle of HFSC division was measured. Strikingly, the researchers were able to show that while HFSCs in young mice underwent typical symmetric and asymmetric cell divisions to regenerate hair follicles, during aging they adopted an atypical senescent type of asymmetric cell division.

But why does the mode of cell division change so drastically during aging? To answer this question, the researchers focused on hemidesmosomes, a class of proteins that connect cells to the extracellular matrix (ECM; the proteins surrounding cells). Cell-ECM interactions have long been known to confer polarity to cells, i.e., cells can sense their localization within a given space through the action of specific proteins. The researchers found that during aging both hemidesmosomal and cell polarity proteins become destabilized, resulting in the generation of aberrantly differentiating cells during division of HFSCs. As a result, HFSCs become exhausted and lost over time, leading to hair thinning and hair loss.

"These are striking results that show how hair follicles lose their ability to regenerate hair over time," says first author of the study Hiroyuki Matsumura. "Our results may contribute to the development of new approaches to regulate organ aging and aging-associated diseases."

Credit: 
Tokyo Medical and Dental University

HKUST researchers unlock the micro-molecular physiochemical mechanism of dental plaque formation

image: (Top left panel) Effect of the biosynthetic gene cluster muf or mutanofactin-697 (5) on biofilm formation on the surface of artificial acrylic teeth. (Bottom left panel) Proposed biosynthetic pathway for mutanofactin-697 (5). (Right panel) Proposed mechanism for mutanofactin-697 (5)-promoted biofilm formation of streptococci. After being biosynthesized and secreted from a producing streptococcus, 5 binds to itself and neighboring streptococci, forming surface layers around bacterial cells. The hydrophobic layer increases bacterial cell surface hydrophobicity and promotes initial bacterial adhesion and subsequent biofilm formation and maturation. In addition, 5 also directly binds to eDNA and facilitates eDNA-mediated cell aggregation and biofilm formation.

Image: 
HKUST

An interdisciplinary team of researchers led by Prof. Qian Peiyuan, Chair Professor at the Hong Kong University of Science and Technology (HKUST)'s Department of Ocean Science and Division of Life Science, has used a synthetic biology approach to unravel how a novel microbial small molecule released by Streptococcus mutans (S. mutans), a bacterium commonly found in the human oral cavity, is connected to the development of dental caries. The work offers new insights into the health impact of the human oral microbiota and will facilitate future research on the prevention of tooth decay. The research findings were recently published in the prestigious scientific journal Nature Chemical Biology and featured by Nature as one of its research highlights.

Every wetted surface on our planet is covered by biofilms made of microbial cells meshed in an extracellular organic matrix. An early study by the National Institutes of Health (NIH) concluded that over 80% of human bacterial infections were caused by biofilms. Thus, S. mutans, a natural and primary inhabitant of the human oral cavity with a strong ability to form biofilms and produce organic acids, has long been acknowledged as the major etiological agent of dental caries.

The development of dental caries, or tooth decay, is a complex process that mainly depends on the presence of microbial biofilms on the tooth surfaces, known as dental plaque. Tooth decay has been recognized as one of the most common bacterial infections and costly chronic conditions afflicting humans. Annually, the global economic burden of treating tooth decay amounts to billions of dollars. Although the macromolecular agents of S. mutans for biofilm formation and development have been extensively investigated, the role of small-molecule secondary metabolites in biofilm formation of S. mutans remains largely unexplored.

Prof. Qian's research team has been studying the microbe-animal interactions mediated by signal molecules from biofilm, using integrated genomics, transcriptomics and chemical biology approaches. Recently, the research team has extended their work on biofilms related to public health.

In collaboration with Prof. ZHANG Wen-Jun and Prof. Roya MABOUDIAN at the University of California, Berkeley, and Prof. Robert BURNE at the College of Dentistry, University of Florida, the team discovered a polyketide/non-ribosomal peptide biosynthetic gene cluster, muf, which directly correlates with a strong biofilm-forming capability, in S. mutans strains clinically isolated from dental plaque. The team then identified the muf-associated bioactive product, mutanofactin-697, which contains a novel molecular scaffold. Further mode-of-action studies revealed that this unique microbial secondary metabolite promotes biofilm formation via an unprecedented physicochemical mechanism: the small molecule binds to S. mutans cells and extracellular DNA, increases bacterial hydrophobicity, and subsequently promotes bacterial adhesion and biofilm formation.

Prof. Qian, also David von Hansemann Professor of Science at HKUST, said, "Our findings provide the first example of a microbial secondary metabolite promoting biofilm formation via a physicochemical approach, highlighting the significance of secondary metabolism in mediating critical processes related to the development of dental caries."

LI Zhongrui, a researcher on the team, said this discovery will enable further mechanistic exploration of mutanofactin-related chemical regulatory processes in human oral ecology and in the incidence and prevention of streptococci-induced dental caries.

Credit: 
Hong Kong University of Science and Technology

Consumption of added sugar doubles fat production

Sugar is added to many common foodstuffs, and people in Switzerland consume more than 100 grams of it every day. The high calorie content of sugar causes excessive weight and obesity, and the associated diseases. But does too much sugar have any other harmful effects if consumed regularly? And if so, which sugars in particular?

Even moderate amounts of sugar increase fat synthesis

Researchers at the University of Zurich (UZH) and the University Hospital Zurich (USZ) have been investigating these questions. Compared to previous studies, which mainly examined the consumption of very high amounts of sugar, their results show that even moderate amounts lead to a change in the metabolism of test participants. "Eighty grams of sugar daily, which is equivalent to about 0.8 liters of a normal soft drink, boosts fat production in the liver. And the overactive fat production continues for a longer period of time, even if no more sugar is consumed," says study leader Philipp Gerber of the Department of Endocrinology, Diabetology and Clinical Nutrition.

Ninety-four healthy young men took part in the study. Every day for a period of seven weeks, they consumed a drink sweetened with different types of sugar, while the control group did not. The drinks contained either fructose, glucose or sucrose (table sugar which is a combination of fructose and glucose). The researchers then used tracers (labeled substances that can be traced as they move through the body) to analyze the effect of the sugary drinks on the lipid metabolism.

Fructose and sucrose double fat production beyond food intake

Overall, the participants did not consume more calories than before the study, as the sugary drink increased satiety and they therefore reduced their calorie intake from other sources. Nevertheless, the researchers observed that fructose has a negative effect: "The body's own fat production in the liver was twice as high in the fructose group as in the glucose group or the control group - and this was still the case more than twelve hours after the last meal or sugar consumption," says Gerber. Particularly surprising was that the sugar we most commonly consume, sucrose, boosted fat synthesis slightly more than the same amount of fructose. Until now, it was thought that fructose was most likely to cause such changes.

Development of fatty liver or diabetes more likely

Increased fat production in the liver is a significant first step in the development of common diseases such as fatty liver and type-2 diabetes. From a health perspective, the World Health Organization recommends limiting daily sugar consumption to around 50 grams or, even better, 25 grams. "But we are far off that mark in Switzerland," says Philipp Gerber. "Our results are a critical step in researching the harmful effects of added sugars and will be very significant for future dietary recommendations."

Credit: 
University of Zurich

A new way to measure human wellbeing towards sustainability

From science to implementation: How do we know if humankind is moving in the right direction towards global sustainability? The ambitious aim of the UN Sustainable Development Goals (SDGs) is a global call to action to end poverty, protect the planet, and ensure all people enjoy peace and prosperity by 2030. To monitor progress towards these goals, a set of over 220 indicators is used, but there is a danger that one can no longer see the forest for the trees. A single comprehensive indicator to assess overall progress is needed. In a new paper published in the Proceedings of the National Academy of Sciences (PNAS), IIASA researchers and colleagues from the University of Vienna, the Vienna Institute of Demography (Austrian Academy of Sciences), and Bocconi University present a bespoke indicator based on life expectancy and benchmarks of objective and subjective wellbeing: the Years of Good Life (YoGL) indicator.

"Many existing indicators of wellbeing do not consider the basic fact that being alive is a prerequisite for enjoying any quality of life. In addition, they often disregard the length of a life. Life expectancy has long been used as a very comprehensive indicator of human development, with avoiding premature death being a universally shared aspiration. However, mere survival is not enough to enjoy life and its qualities," explains lead author Wolfgang Lutz, Founding Director of the Wittgenstein Centre for Demography and Global Human Capital, a collaborative center of the Austrian Academy of Sciences (Vienna Institute of Demography), International Institute for Applied Systems Analysis, and University of Vienna. "The Years of Good Life indicator only counts a year as a good year if individuals are simultaneously not living in absolute poverty, free from cognitive and physical limitations, and report to be generally satisfied with their lives."
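The counting rule described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical survey data, not the paper's actual methodology: a life-year counts towards YoGL only if all three conditions hold at once.

```python
def years_of_good_life(person_years):
    """Count life-years in which all three YoGL criteria hold simultaneously.

    person_years: list of dicts with boolean indicators for each life-year.
    """
    return sum(
        1
        for y in person_years
        if y["out_of_poverty"] and y["no_limitations"] and y["life_satisfied"]
    )

# Hypothetical example: four surveyed life-years, two of which meet all criteria.
years = [
    {"out_of_poverty": True,  "no_limitations": True,  "life_satisfied": True},
    {"out_of_poverty": True,  "no_limitations": False, "life_satisfied": True},
    {"out_of_poverty": False, "no_limitations": True,  "life_satisfied": True},
    {"out_of_poverty": True,  "no_limitations": True,  "life_satisfied": True},
]
print(years_of_good_life(years))  # 2
```

The conjunction of the three indicators is what distinguishes YoGL from simple life expectancy: surviving a year that fails any one criterion adds nothing to the count.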

The results show that YoGL differs substantially between countries. While in most developed countries, 20-year-old women can expect to have more than 50 years of good life left (with a record of 58 years in Sweden), women in the least developed countries can expect less than 15 years (with a record low of 10 years for women in Yemen). While life expectancy is higher for women than for men in every country, female Years of Good Life are lower than those of males in most developing countries. This reveals a significant gender inequality in objective living conditions and subjective life satisfaction in most of these countries.

The paper - funded by an Advanced Grant to Lutz from the European Research Council - presents a first step in the great challenge of comprehensively assessing sustainable human wellbeing in a way that also considers feedbacks from environmental change. Unlike many other indicators, YoGL is not restricted to the national level but can be assessed for flexibly defined sub-populations and over long time horizons because it has substantive meaning in its absolute value. It also has the potential to become a broadly used "currency" for measuring the benefits of certain actions, complementing assessments based on purely monetary units. For example, the social costs of carbon could potentially be evaluated in terms of Years of Good Life lost among future generations, rather than only in dollar terms - making it a key indicator to measure sustainable progress in an integrated and tangible way. Applying the same logic to the recent COVID-19 pandemic, study coauthor Erich Striessnig adds that YoGL also represents a major improvement over conventional indicators in assessing the long-term success of intervention measures.

"If we used YoGL as a currency to measure the long-term impacts of the ongoing crisis rather than GDP per capita or life expectancy, we would not only account for the material losses and the lost life years, but also for the losses in physical and cognitive wellbeing, as well as for the losses incurred by the younger generations in terms of their human capital resulting from school closures. Lack of consistent data that is needed to calculate YoGL does of course remain an issue. Political decision makers should, however, aim for improved data availability to make better informed decisions based on indicators such as YoGL," Striessnig concludes.

Credit: 
International Institute for Applied Systems Analysis

Deforestation taking a heavy toll on international bird haven

image: Fork-tailed woodnymph

Image: 
Pablo Negret

An analysis has found deforestation is severely affecting forest bird species in Colombia, home to the greatest number of bird species in the world.

University of Queensland-led research, steered by Dr Pablo Negret, analysed the impact of deforestation on 550 bird species, including 69 only found in the South American nation.

"Our study has shown an astonishing reduction in bird species habitat," Dr Negret said.

"One third of the forest bird species in Colombia have lost at least a third of their historical habitat, and that's just using the most recent data we have available - from 2015.

"Moreover, 18 per cent or 99 species have lost more than half of their historical habitat to date.

"By 2040, we expect this will increase to 38 per cent or 209 species.

"Sadly, many of those species are endemic to the country and are not currently classified as threatened by the International Union for Conservation of Nature, suggesting that there are many unlisted species that face an imminent extinction threat from ongoing habitat loss."
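The proportions quoted above are consistent with the study's total of 550 forest bird species, as a quick arithmetic check shows (the species counts are those reported in the article):

```python
total_species = 550

# Species that have lost more than half of their historical habitat.
lost_half_2015 = 99    # reported for 2015 data
lost_half_2040 = 209   # projected for 2040

share_2015 = round(100 * lost_half_2015 / total_species)
share_2040 = round(100 * lost_half_2040 / total_species)

print(share_2015, share_2040)  # 18 38
```

Both figures match the percentages given in the quotes (18 per cent and 38 per cent).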

Dr Negret said the results were concerning but not surprising.

"Deforestation is one of the main drivers of habitat loss for many species in the tropics," he said.

"We know that deforestation affects thousands of species in these ecosystems, but our attention is usually focused on a tiny fraction - threatened and charismatic species.

"This study provides more data on species previously thought abundant that are actually dwindling - hopefully we can shine a light on them, so they can be recognised as under threat and don't fall through the cracks."

The researchers used historical and present satellite forest cover data, while collating spatial information on other variables associated with deforestation patterns.

UQ's Professor Martine Maron said the research would help predict future habitat loss for already-threatened species.

"This methodology, and the technologies behind it, allow us to identify places where future habitat loss is predicted.

"This means that we can reveal the locations where threatened species are most likely to lose precious habitat, and prioritise their protection.

"And, in a country with growing threats to rich bird diversity, it pays to be ahead of the game."

The authors hope the Colombian government and NGOs working in this space will use the research to guide conservation of Colombia's bird species.

Credit: 
University of Queensland

Flat brain organoids grown on 3D-printed scaffolds show intrinsic gyrification

The research, by an international team from the Autonomous University of Madrid and the Technical University of Denmark, used 3D printing to create scaffolds for engineered flat brain organoids. The scaffolds allowed the brain organoid size to be significantly increased and after 20 days, self-generated folding was observed. Their results are published in the IOP Publishing journal Biofabrication.

The work aims to address several of the shortcomings of existing brain organoids. One of the lead authors, Theresa Rothenbücher, said: "The lack of vascularization leads to diffusion limitations for nutrients and oxygen, resulting in a necrotic tissue core for organoids larger than approximately 500 μm. In an attempt to solve this problem, brain organoids have been vascularized. While including endothelial cells in the culture system increases the complexity of the model, the generated vessel structures show no functionality (blood flow) in vitro. We are able to circumvent this issue by applying bioengineering techniques."

Another lead author, Hakan Gürbüz, commented: "By culturing brain organoids with a polycaprolactone (PCL) scaffold, we were able to modify their shape into a flat morphology. Engineered Flat Brain Organoids (efBOs) possess advantageous diffusion conditions and thus their tissue is better supplied with oxygen and nutrients, preventing the formation of a necrotic tissue core. The shift from a spherical to a flat shape leads to a significant increase in size and surface-to-volume ratio of the brain organoids." efBOs also offer increased potential to create biologically relevant systems, due to the complexity of the models that they enable. Ensuring the long-term viability of these models is a major aim of this branch of research, which has been difficult until now; flat organoids address the problem of longevity by avoiding the formation of necrotic tissue.

The 3D printing of scaffolds was key to overcoming the shape limitations of the previous spherical models. Contributing author Jenny Emneus says 3D printing enables "reproducible fabrication of specific 3D scaffolds with high architectural complexity, precision and design versatility. By introducing a 3D-printed scaffold into the culture protocol, the size of the brain organoids and the tissue density and thickness can be tuned."

The resulting model showed consistent formation of neuroepithelial folding resembling gyrification. Contributing author Alberto Martinez-Serrano said: "...we were able to observe folding reminiscent of gyrification around day 20, which was self-generated by the tissue. To our knowledge, this is the first study that reports intrinsically caused gyrification of neuronal tissue in vitro." The appearance of gyrification reflects a further increase of the surface area and resembles the process of human brain development.

Although brain organoids do not reproduce the exact anatomy of a human brain, they are an important step towards recapitulating the human brain; tissue-equivalent models such as these organoids can replace the use of animal models in research into drug screening for toxicity and the understanding of disease progression. Alberto Martinez-Serrano continued: "The human brain is the most complex organ of the body and due to its inaccessibility, we still lack scientific knowledge about brain development and diseases. Studies with animals are ethically restricted and should be minimized. Especially for drug screening applications, a highly reproducible protocol with simple tissue culture steps and consistent output is required. We consider our efBO protocol as a next step towards the generation of a stable and reliable human brain model for drug screening applications and spatial patterning experiments."

Credit: 
IOP Publishing

Militarization negatively influences green growth

image: Land vehicles, aircraft, and sea-vessels consume a gargantuan amount of fossil fuels.

Image: 
UrFU / Ilya Safarov.

Military expenditures are highly counterproductive to green economic growth, as documented by a recent study conducted by a UrFU economist in collaboration with an international research team. Sustainable economic development, or green growth, requires cleaner energy and green technology that can mitigate the negative externalities (e.g., carbon emissions) of economic growth. The study utilized various macroeconomic indicators for 21 OECD countries over the years 1980-2016. This empirical study, focusing on the dynamic impact of innovation, militarization and renewable energy on the green economy, is published in the journal Environmental Science and Pollution Research.

On the one hand, the military industry's land vehicles, aircraft, and sea vessels consume a gargantuan amount of fossil fuels. Economists claim that about 75% of global non-renewable energy consumption (coal, gas, oil) is attributable to military activities. According to the BP report (without division by sectors), the five main consumers of oil, gas, and coal in 2019 were China (120.64 EJ), the United States (78.81 EJ), India (31.01 EJ), Russia (26.2 EJ), and Japan (16.33 EJ).

On the other hand, militarization is one of the main sources of air and environmental pollution.

"Although there is a discrepancy in environmental damage across nations, the wealthier countries invariably pose a greater challenge to the global ecosystem than their impoverished counterparts. The Pentagon, for example, is a glaring example of a paramount consumer of non-renewable resources: the US maintains hundreds of military bases in sixty countries. Moreover, modern armed forces' equipment has become increasingly capital-intensive, resource-intensive, and waste-generative, given its substantial dependency on fossil fuels. In the course of assessing, supporting, and maintaining an arsenal of weapons, a substantial amount of toxic substances is released, which is known to harm the land and water adjacent to military bases and the surrounding communities," says Sohag Kazi, co-author and senior researcher at the Department of Econometrics and Statistics, Ural Federal University.

Economists are not calling for abandoning militarization. Their suggestion is not to increase the annual funding for the military-industrial complex and to use renewable energy sources for military needs. The researchers argue that switching from non-renewable to renewable energy in the production process would not significantly affect output but would reduce carbon emissions.

"It is highly unlikely that governments in developed countries would reduce the budget allocated for defense purchases, for various reasons. However, we have a cautionary remark regarding the impact of military operation and maintenance expenditures on green growth. It is recommended that developed countries curtail their military expenditures and non-renewable energy usage, and instead conduct their military operations more cautiously, certainly by using renewable energy technology, which should help contribute to a better world," says Sohag Kazi.

The study was conducted with the participation of economists from Ural Federal University (Russia), the University of Western Australia (Australia), Drexel University (USA), the University of Economics (Vietnam), and Universiti Teknologi MARA (Malaysia).

Military expenditures comprise all current and capital expenditures on the armed forces, including peacekeeping forces and defense establishments. They also include government bureaus involved in defense projects, paramilitary forces when trained and equipped for armed forces operations, and military space activities.

Credit: 
Ural Federal University

Visa costs higher for people from poor countries

image: Map of the world showing average number of days that someone has to work to be able to afford a tourist visa

Image: 
Global Visa Cost Dataset

How much do people have to pay for a travel permit to another country? A research team from Göttingen, Paris, Pisa and Florence has investigated the costs around the world. What they found revealed a picture of great inequality. People from poorer countries often pay many times what Europeans would pay. The results have been published in the journal Political Geography.

Dr Emanuel Deutschmann from the Institute of Sociology at the University of Göttingen, together with Professor Ettore Recchi, Dr Lorenzo Gabrielli and Nodira Kholmatova (from Sciences Po Paris, CNR-ISTI Pisa and EUI Florence respectively), compiled a new dataset on visa costs for travel between countries worldwide. The analysis shows that on average people from North Africa and South Asia pay more than three times as much for tourist visas (just under 60 US dollars) as people from Western Europe (around 18 US dollars).

The inequality becomes even greater when the differences in wealth between countries are taken into account. While Europeans usually only have to work for a fraction of a day to be able to afford a travel permit, in some African and Asian countries the visa costs are equivalent to several weeks or even months of the average income.
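The "days of work to afford a visa" measure shown in the map above boils down to a simple ratio of visa fee to daily income. A minimal sketch, using illustrative figures rather than the dataset's actual values:

```python
def days_of_work_for_visa(visa_cost_usd, annual_income_usd, working_days=250):
    """Working days needed to earn the visa fee at the average income.

    working_days is an assumed number of working days per year.
    """
    daily_income = annual_income_usd / working_days
    return visa_cost_usd / daily_income

# Illustrative figures only: a $60 visa on a $1,500 annual income
# versus an $18 visa on a $45,000 annual income.
print(round(days_of_work_for_visa(60, 1_500), 1))   # 10.0 days
print(round(days_of_work_for_visa(18, 45_000), 2))  # 0.1 days
```

Even with these rough numbers, the gap spans two orders of magnitude, which is the kind of inequality the dataset documents.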

"Our dataset provides information about a dimension of global inequality that has, so far, received little attention," says Deutschmann. "While Article 13 of the Universal Declaration of Human Rights states that every person has the right to move freely and to leave any country, including their own, in reality there are barriers at many different levels which can obstruct global mobility, depending on where you come from. And our data clearly shows that these barriers include visa costs."

Credit: 
University of Göttingen

Keeping it cool: New approach to thermal protection in outdoor wearable electronics

image: Wearable devices and biosensors for outdoor use require innovative designs and novel materials capable of keeping down their temperature, even in sunlight

Image: 
Ketut Subiyanto at Pexels

Wearable electronic devices like fitness trackers and biosensors are very promising for healthcare applications and research. They can be used to measure relevant biosignals in real time and send gathered data wirelessly, opening up new ways to study how our bodies react to different types of activities and exercise. However, most body-worn devices face a common enemy: heat.

Heat can accumulate in wearable devices for several reasons. Operation in close contact with the user's skin is one of them; this heat is said to come from internal sources. Conversely, when a device is worn outdoors, sunlight acts as a massive external source of heat. Combined, these sources can easily raise the temperature of wearable devices to levels that are not only uncomfortable for the user but also cause erroneous readings and measurements. Unfortunately, researchers have been unable to completely address this issue: most available heat sinks and dissipators for wearable devices are based on thin metallic layers, which block electromagnetic signals and thus hinder wireless communications.

In a recent study published in Advanced Science, scientists from Korea and the US have developed an innovative solution to combat heat in wearable biosensors. Led by Professor Young Min Song from Gwangju Institute of Science and Technology (GIST), Korea, the team produced a nano-/micro-voids polymer (NMVP), a flexible and nonmetallic cooler made from two perforated polymers: poly(methyl methacrylate) and styrene-ethylene-butylene-styrene.

The resulting material has many attractive qualities. First, it has almost 100% reflectivity in the solar spectrum, meaning that it reflects nearly all sunlight. Second, it has high emissivity in the range of frequencies known as the atmospheric window. Thus, the material can easily radiate excess heat into the atmosphere, which helps cool it down. Finally, the good mechanical properties of the new polymer make it suitable for outdoor wearable devices.
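The cooling principle described above can be illustrated with a back-of-envelope radiative balance: a surface sheds heat by thermal emission while absorbing only the small unreflected fraction of sunlight. The reflectivity, emissivity, and irradiance values below are illustrative assumptions, not figures from the Advanced Science paper, and the model deliberately ignores convection and atmospheric downwelling radiation.

```python
# Back-of-envelope estimate of net radiative cooling power for a highly
# reflective, highly emissive surface. All parameter values are assumptions
# for illustration only.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(t_surface_k, solar_irradiance=1000.0,
                      reflectivity=0.97, emissivity=0.9):
    """Net power shed per square metre: thermal emission minus absorbed sunlight.

    Simplified model: ignores atmospheric downwelling radiation and convection.
    """
    emitted = emissivity * SIGMA * t_surface_k ** 4
    absorbed = (1.0 - reflectivity) * solar_irradiance
    return emitted - absorbed

# A surface near skin temperature (~307 K) in full sunlight:
print(round(net_cooling_power(307.0), 1))  # → 423.3 (net heat loss, W/m^2)
```

The sketch shows why both properties matter: high reflectivity keeps the absorbed-sunlight term small, while high emissivity in the atmospheric window keeps the emission term large, so the balance stays positive even under direct sun.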

To test the effectiveness of their innovation, the scientists built a patch-type tissue oximeter equipped with an NMVP-based cooler. Thanks to the superior performance of the cooler, their wearable biosensor could externally measure the concentration of oxygen in blood more accurately than conventional oximeters while also maintaining a much lower temperature. "Our approach is the first demonstration of successful thermal management in wearable devices considering both internal and external heat sources without blocking wireless communications," remarks Prof. Song.

The promising results of this study could pave the way for the widespread adoption of wearable devices and biosensors, which will become powerful tools in health monitoring and the training of athletes. With eyes set on the future, Prof. Song comments: "Our flexible strategy for radiative cooling will help bring about thermally protected skin-like electronics, which in turn will make human body monitoring unobtrusive and imperceptible."

The term "cool gadgets" is likely to get a whole new meaning in the future!

Credit: 
GIST (Gwangju Institute of Science and Technology)

Research shows how mutations in SARS-CoV-2 allow the virus to dodge immune defenses

The vast majority of people infected with SARS-CoV-2 clear the virus, but those with compromised immunity--such as individuals receiving immune-suppressive drugs for autoimmune diseases--can become chronically infected. As a result, their weakened immune defenses continue to attack the virus without being able to eradicate it fully.

This physiological tug-of-war between human host and pathogen offers a valuable opportunity to understand how SARS-CoV-2 can survive under immune pressure and adapt to it.

Now, a new study led by Harvard Medical School scientists offers a look into this interplay, shedding light on the ways in which compromised immunity may render SARS-CoV-2 fitter and capable of evading the immune system.

The research, published March 16 in Cell, shows that a mutated SARS-CoV-2 from a chronically infected immunocompromised patient is capable of evading both naturally occurring antibodies from COVID-19 survivors and lab-made antibodies now in clinical use for the treatment of COVID-19.

The patient's case was originally described in a Dec. 3, 2020, New England Journal of Medicine report by scientists at Brigham and Women's Hospital, a few weeks before the U.K. and South African variants were first reported to the World Health Organization. Interestingly, the patient-derived virus contained a cluster of changes on its spike protein--the current target for vaccines and antibody-based treatments--and some of these changes were later detected in viral samples in the U.K. and South Africa, where they appear to have arisen independently, the researchers said.

The newly published study, which builds on the initial case report, shows something more alarming still. Some of the changes found in the patient-derived virus have not yet been identified in the dominant viral variants circulating in the population at large. However, these changes have already been detected in databases of publicly available viral sequences. These mutations remain isolated for now, the authors of the report said, but they could be harbingers of viral mutants that may spread across the population.

The researchers emphasize that the variants initially detected in the U.K. and South Africa remain vulnerable to currently approved mRNA vaccines, which target the entire spike protein rather than just portions of it. Nonetheless, the study results offer a preview of a future in which current vaccines and treatments may gradually lose their effectiveness against next-wave mutations that render the virus impervious to immune pressure.

"Our experiments demonstrated that structural changes to the viral spike protein offer workarounds that allow the virus to escape antibody neutralization," said study senior author Jonathan Abraham, assistant professor of microbiology in the Blavatnik Institute at Harvard Medical School and an infectious disease specialist at Brigham and Women's Hospital. "The concern here is that an accumulation of changes to the spike protein over time could impact the long-term effectiveness of monoclonal antibody therapies and vaccines that target the spike protein."

Although the scenario remains hypothetical for now, Abraham said, it underscores the importance of two things: first, reducing the emergence and spread of mutations by curbing the virus's transmission, both through infection-prevention measures and through widespread vaccination; and second, designing next-generation vaccines and therapies that target less mutable parts of the virus.

"How the spike responded to persistent immune pressure in one person over a five-month period can teach us how the virus will mutate if it continues to spread across the globe," added Abraham, who co-leads the COVID-19 therapeutics working group of the Massachusetts Consortium on Pathogen Readiness (MassCPR). "To help stop the virus from circulating, it's critical to make sure that vaccines are rolled out in an equitable way so that everyone in every country has a chance to get immunized."

A game of survival

Mutations are a normal part of a virus's life cycle. They occur when a virus makes copies of itself. Many of these mutations are inconsequential, others are harmful to the virus itself, and still others are advantageous to the microbe, allowing it to propagate more easily from host to host--that is, making a variant more transmissible. If a change confers an evolutionary advantage, the variant carrying it can gradually outcompete others and become dominant.

In the early months of the pandemic, the assumption--and hope--was that SARS-CoV-2 would not change too fast because, unlike most RNA viruses, it has a "proofreading" protein whose job is to prevent too many changes to the viral genome. But last fall, Abraham and colleagues became intrigued by--and then alarmed about--a patient receiving immune-suppressive treatment for an autoimmune disorder who had been infected with SARS-CoV-2 and had developed a chronic infection. A genomic analysis of the patient's virus showed a cluster of eight mutations on the viral spike protein, which the virus uses to enter human cells and which is the target of current antibody treatments and vaccines. Specifically, the mutations clustered on a segment of the spike known as the receptor-binding domain (RBD), the part that antibodies latch onto to prevent SARS-CoV-2 from entering human cells.

Abraham and colleagues knew the changes were a sign that the virus had developed workarounds to the patient's immune defenses. But would these mutations allow the virus to dodge the immune assault of antibodies that were not the patient's own?

To answer the question, Abraham and colleagues created lab-made, noninfectious replicas of the patient virus that mimicked the various structural changes that had accumulated in the span of five months.

In a series of experiments, the researchers exposed the dummy virus to both antibody-rich plasma from COVID-19 survivors and to pharmaceutically made antibodies now in clinical use. The virus dodged both naturally occurring and pharmaceutical-grade antibodies.

Experiments with a monoclonal antibody drug that contains two antibodies showed the virus was entirely resistant to one of the antibodies in the cocktail and somewhat, although not fully, impervious to the other: the second antibody was four times less potent at neutralizing the mutated virus.

Not all eight mutations rendered the virus equally resistant to antibodies. Two particular mutations conferred the greatest resistance to both natural and lab-grown antibodies.

In a final experiment, the researchers created a super antibody by cobbling together proteins from naturally occurring antibodies that had evolved over time to recognize SARS-CoV-2 more precisely and latch onto it more tightly. This process, known as antibody affinity maturation, is the principle behind vaccine booster shots used to fortify existing antibodies. One variant, containing mutations that had occurred late in the course of the patient's infection, was capable of withstanding even this super-potent antibody, although the antibody did manage to neutralize viral mutants detected at a different time in the course of the infection.

"This observation underscores two points: That the virus is smart enough to eventually evolve around even our most potent antibody therapies, but that we can also get ahead by 'cooking' new potent antibodies now, before new variants emerge," Abraham said

Getting ahead of the virus

Taken together, the findings underscore the need to further understand human antibody responses to SARS-CoV-2 and to untangle the complex interplay between virus and human host, the researchers said. Doing so would allow scientists to anticipate changes in the virus and design countermeasures before those changes become widespread.

In the short term, this speaks to the greater need to design antibody-based therapies and vaccines that directly target more stable, less mutable parts of the spike protein beyond its mutation-prone RBD region.

In the long term, this means that scientists should pivot toward developing therapies that go beyond antibody immunity and also harness so-called cellular immunity, which is driven by T cells--a separate branch of the immune system that is independent of antibody-based immunity.

The most immediate implication, however, Abraham said, is to stay on top of emerging mutations through aggressive genomic surveillance. This means that instead of merely detecting whether SARS-CoV-2 is present in a patient sample, the tests should also analyze the viral genome and look for mutations. The technology to do so exists and is used in several countries as a way to monitor viral behavior and track changes to the virus across the population.

"In the United States, especially, the strategy has been to test and say whether a person is infected or not infected," Abraham said. "But there's a lot more information in that sample that can be obtained to help us track whether the virus is mutating. I am encouraged by the concerted efforts across the world to monitor sequences more aggressively--doing so is critical."

"It is important for us to stay ahead of this virus as it continues to evolve," said study first author Sarah Clark, member of the Abraham lab and a fourth-year student in the Ph.D. Program in Virology at Harvard University. "My hope is that our study provides insights that allow us to continue to do that."

Credit: 
Harvard Medical School

Certain mouthwashes might stop COVID-19 virus transmission

Researchers at Rutgers School of Dental Medicine have found evidence that two types of mouthwash disrupt the COVID-19 virus under laboratory conditions, preventing it from replicating in a human cell.

The study, published in the journal Pathogens, found that Listerine and the prescription mouthwash Chlorhexidine disrupted the virus within seconds after being diluted to concentrations that would mimic actual use. Further studies are needed to test real-life efficacy in humans.

The study was conducted in a lab using mouthwash concentrations and contact times chosen to replicate conditions found in the mouth, said Daniel H. Fine, the paper's senior author and chair of the school's Department of Oral Biology.

The study found that two other mouthwashes showed promise for reducing viral transmission: Betadine, which contains povidone iodine, and Peroxal, which contains hydrogen peroxide. However, only Listerine and Chlorhexidine disrupted the virus with little impact on the skin cells inside the mouth that provide a protective barrier against the virus.

"Both Povidone iodine and Peroxal caused significant skin cell death in our studies, while both Listerine and Chlorhexidine had minimal skin cell killing at concentrations that simulated what would be found in daily use," said Fine.

The team studied the potential of mouthwashes to prevent viral transmission in order to better understand how dental providers can be protected from aerosols exhaled by patients. "As dentists, we're right there in a patient's face. We wanted to know if there's something that might lower the viral load," said coauthor Eileen Hoskin, an assistant professor at Rutgers School of Dental Medicine.

Fine cautions the public against relying on mouthwash as a way to slow the spread until it is proven in clinical trials on humans.

"The ultimate goal would be to determine whether rinsing two or three times a day with an antiseptic agent with active anti-viral activity would have the potential to reduce the ability to transmit the disease. But this needs to be investigated in a real-world situation,'' he said.

Previous research has shown that various types of antiseptic mouthwashes can disrupt the novel coronavirus and temporarily prevent transmission, but this was one of the first studies to examine antiseptic rinse concentrations, contact times and skin-cell-killing properties under simulated oral conditions. The study was conducted by a team of dental school scientists and a virologist at the Public Health Research Institute.

"Since the SARS CoV-2 virus responsible for COVID-19 enters primarily through the oral and nasal cavity, oral biologists should be included in these studies because they have an in-depth understanding of oral infectious diseases," said Fine.

Credit: 
Rutgers University

NUS researchers harness AI to identify cancer cells by their acidity

image: The novel technique of using AI to quickly analyse cells for cancer diagnosis was developed by an NUS research team led by Professor Lim Chwee Teck (left). With him are two team members - Dr Jokhun Doorgesh Sharma (centre) and Dr Yuri Belotti (right).

Image: 
National University of Singapore

Singapore, 17 March 2021 - Healthy and cancer cells can look similar under a microscope. One way of differentiating them is by examining the level of acidity, or pH level, inside the cells.

Tapping this distinguishing characteristic, a research team from the National University of Singapore (NUS) has developed a technique that uses artificial intelligence (AI) to determine whether a single cell is healthy or cancerous by analysing its pH. Each cancer test can be completed in under 35 minutes, and single cells can be classified with an accuracy rate of more than 95 per cent.

The research, led by Professor Lim Chwee Teck, Director of the Institute for Health Innovation & Technology (iHealthtech) at NUS, was first published in the journal APL Bioengineering on 16 March 2021.

"The ability to analyse single cells is one of the holy grails of health innovation for precision medicine or personalised therapy. Our proof-of-concept study demonstrates the potential of our technique to be used as a fast, inexpensive and accurate tool for cancer diagnosis," said Prof Lim, who is also from the NUS Department of Biomedical Engineering.

Using AI for cancer detection

Current techniques for examining a single cell can induce toxic effects or even kill the cell. The approach developed by Prof Lim and his team, however, can distinguish between cells originating from normal and cancerous tissue, as well as among different types of cancer. Crucially, all of these can be achieved while keeping the cells alive.

The NUS team's method relies on applying bromothymol blue - a pH-sensitive dye that changes colour according to the level of acidity of a solution - onto living cells. Because the dye responds to each cell's intracellular activity, each type of cell displays its own 'fingerprint': a unique combination of red, green, and blue (RGB) components when illuminated. Cancer cells have an altered pH compared to healthy cells, so they react differently to the dye, and this changes their RGB fingerprint.

Using a standard microscope equipped with a digital colour camera, the researchers capture the RGB components emitted from the dye inside the cells. With an AI-based algorithm they developed, the NUS researchers were able to quantitatively map the unique acidic fingerprints so that the cell types examined could be easily and accurately identified. Thousands of cells originating from various cancerous tissues can be imaged simultaneously, and single-cell features can be extracted and analysed. Compared to current standard methods of cancer cell imaging, which require several hours, the process developed by the NUS team can be completed in less than 35 minutes.
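The press release does not specify the team's algorithm, but the idea of matching a cell's RGB reading against known fingerprints can be sketched with a simple nearest-centroid classifier standing in for the AI model. All RGB values below are invented for illustration, and the class labels are hypothetical.

```python
# Minimal sketch of classifying cells by their RGB 'fingerprint'.
# A nearest-centroid rule stands in for the NUS team's AI model;
# every RGB value here is an illustrative assumption.
import math

# Hypothetical mean RGB fingerprints measured from reference cells
REFERENCE = {
    "healthy":  (120, 180, 90),
    "cancer_A": (160, 140, 110),
    "cancer_B": (100, 150, 150),
}

def classify(rgb):
    """Assign a cell to the reference class with the nearest RGB centroid."""
    return min(REFERENCE,
               key=lambda label: math.dist(REFERENCE[label], rgb))

# A cell whose dye response sits close to the 'healthy' centroid:
print(classify((118, 175, 95)))  # → healthy
```

In practice the published approach would extract per-cell colour features from microscope images and train the classifier on labelled cell lines; the sketch only shows why a distinct RGB fingerprint is enough to separate the classes.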

"Unlike other cell analysis techniques, our approach uses simple, inexpensive equipment, and does not require lengthy preparation and sophisticated devices. Using AI, we are able to screen cells faster and accurately. Furthermore, we can monitor and analyse living cells without causing any toxicity to the cells or the need to kill them. This would allow for further downstream analysis that may require live cells," explained Prof Lim.

Opening the door for faster detection

As their technique is simple, low-cost, fast and high-throughput, the research team is planning to develop a real-time version in which cancer cells can be automatically recognised and promptly separated for further downstream molecular analysis, such as genetic sequencing, to identify any drug-treatable mutations.

"We are also exploring the possibility of performing the real-time analysis on circulating cancer cells suspended in blood," shared Prof Lim. "One potential application for this would be in liquid biopsy where tumour cells that escaped from a primary tumour can be isolated in a minimally-invasive fashion from bodily fluids such as blood."

In addition, the group is looking to advance the concept to detect different stages of malignancy in the cells tested.

Credit: 
National University of Singapore

Trouble for honeyeaters that sing the wrong song

image: Regent honeyeater

Image: 
David Stowe

The critically endangered regent honeyeater is losing its "song culture" due to the bird's rapidly declining population, according to new research from The Australian National University (ANU).

Just like humans learning to speak, many birds learn to sing by associating with older birds of the same species. They risk losing this skill if adults become too rare. And if they don't learn to sing a sexy enough song, their chances of mating are reduced.

"If endangered birds are unable to learn how to sing correctly, it seriously impacts their ability to communicate," lead author Dr Ross Crates said.

"It could also be exacerbating the honeyeater's population decline, because we know a sexy song increases the odds of reproduction in songbirds. Females will avoid males that sing unusual songs."

The study found that in places where there were still reasonable numbers of regent honeyeaters, males sang rich and complex songs. Where the birds were rare, males sang simplified or "totally incorrect" songs.

"For example, 18 male regent honeyeaters - or around 12 per cent of the total population - were only able to copy the songs of other bird species," study co-author Dr Dejan Stojanovic said.

"This lack of ability to communicate with their own species is unprecedented in a wild animal. We can assume that regent honeyeaters are now so rare that some young males never find an older male teacher."

The study also showed regent honeyeaters born in captivity have totally different songs to wild birds.

The research team believe this could prove crucial when it comes to conservation.

"The unusual songs of captive-bred birds could reduce their attractiveness to wild birds when they are eventually released," Dr Crates said.

"So we've devised a new strategy to teach young captive regent honeyeaters to sing the same song as the wild birds by playing them audio recordings.

"Loss of song culture is a major warning sign the regent honeyeater is on the brink of extinction and we still have a lot to learn about how to help them."

Credit: 
Australian National University

UK variant spread rapidly in care homes in England

The UK variant of SARS-CoV-2 spread rapidly in care homes in England in November and December last year, broadly reflecting its spread in the general population, according to a study by UCL researchers.

The study, published as a letter in the New England Journal of Medicine, looked at positive PCR tests of care home staff and residents between October and December. It found that, among the samples it had access to, the proportion of infections caused by the new variant rose from 12% in the week beginning 23 November to 60% of positive cases just two weeks later, in the week beginning 7 December.

In the south east of England, where the variant was most dominant, the proportion increased from 55% to 80% over the same period. In London, where the variant spread fastest, the proportion increased from 20% to 66%.

The researchers said the timing of infections suggested the new variant may have been passed from staff to residents, with positive cases among older people occurring later.

Senior author Dr Laura Shallcross (UCL Institute of Health Informatics) said: "Our findings suggest the UK variant spread just as quickly in care homes as it did in the general population. This shows the importance of public health measures to reduce transmission in the country as a whole."

Lead author Dr Maria Krutikov (UCL Institute of Health Informatics) said: "Our results are consistent with national trends, suggesting that the UK variant was present in care homes from early on, although our sample did not fully represent all care homes in England. As we carried out this work in December, we were able to inform public health decisions at the time.

"To see how viruses like Covid-19 are changing and to respond quickly and appropriately, it is really important we have an advanced surveillance system, with gene sequencing that can identify new variants as early as possible."

For the study, researchers analysed 4,442 positive PCR samples from care home staff and residents in England. These were all the positive tests of staff and residents processed from October to December at the Lighthouse laboratory in Milton Keynes, one of the UK's biggest coronavirus testing labs. Staff in care homes are tested every week, while residents are tested monthly.

PCR tests for SARS-CoV-2 are designed to detect three parts of the virus - the S gene, the N gene, and ORF1ab. The UK variant, known as B.1.1.7, has changes in its S gene, or spike gene, which mean the tests do not detect this particular target.

This means the researchers were able to identify the proportion of infections caused by the new variant by looking for samples in which the other two targets, the N gene and ORF1ab, were detected, but not the S gene.

They also compared Ct values, which indicate how much virus is present (lower values mean more viral material), to check that samples did not miss the S gene simply because they were "weaker" positives with less viral material.
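This "S-gene dropout" rule can be sketched in a few lines. The Ct cutoff of 30 and the field names below are illustrative assumptions, not values from the UCL study.

```python
# Sketch of S-gene target failure (SGTF) detection: flag a sample as a
# suspected B.1.1.7 infection when the N gene and ORF1ab are detected
# but the S gene is not, excluding weak positives via a Ct cutoff.
# The cutoff of 30 is an assumption for illustration.

CT_CUTOFF = 30  # higher Ct = less viral material; treat as too weak to call

def suspected_b117(ct_s, ct_n, ct_orf1ab):
    """Return True for the N+/ORF1ab+/S- dropout pattern.

    Ct values are cycle-threshold numbers; None means the target
    was not detected at all.
    """
    n_pos = ct_n is not None and ct_n <= CT_CUTOFF
    orf_pos = ct_orf1ab is not None and ct_orf1ab <= CT_CUTOFF
    s_dropout = ct_s is None
    return n_pos and orf_pos and s_dropout

samples = [
    {"ct_s": None, "ct_n": 22, "ct_orf1ab": 23},  # S dropout, strong positive
    {"ct_s": 24,   "ct_n": 23, "ct_orf1ab": 24},  # all three targets detected
    {"ct_s": None, "ct_n": 35, "ct_orf1ab": 36},  # too weak to call
]
flags = [suspected_b117(**s) for s in samples]
print(flags)  # → [True, False, False]

# Fraction of positives flagged as the suspected variant:
print(sum(flags) / len(flags))
```

Applied week by week to all positive samples, a rule like this yields the rising variant proportions the study reports.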

Their analysis showed that in late November, the proportion of infections associated with B.1.1.7 increased sharply in several regions of England. In London, this was from 20% (week beginning 23 November) to 66% (week beginning 7 December). In the east of England, it rose from 35% to 64% over the same period, while in the south east the increase was from 55% to 80%. The data was predominantly drawn from London, the south east and east of England and the Midlands, with fewer positive test samples from the north of England and the south west.

Most samples were from people aged under 65, as staff are tested much more frequently than residents. However, among samples from those aged over 65, the proportion of infections caused by the new variant rose from 14% in the week beginning 23 November to 76% in the week beginning 7 December. (The number of total positive samples was low - just 21 and 157 respectively.)

The research was conducted as part of the Vivaldi study looking at Covid-19 infections in care homes. It received support and funding from the Department of Health and Social Care.

Credit: 
University College London