Tech

Experts agree on new global definition of 'fermented foods'

image: Interdisciplinary scientists have created the first international consensus definition of fermented foods, published in Nature Reviews Gastroenterology & Hepatology.

Image: 
Dean Kaylan

Humans have consumed different types of fermented foods - from kimchi to yogurt - for thousands of years. Yet only recently, with the availability of new scientific techniques for analyzing their nutritional properties and microbiological composition, have scientists begun to understand exactly how the unique flavors and textures are created and how these foods benefit human health.

Now, 13 interdisciplinary scientists from the fields of microbiology, food science and technology, family medicine, ecology, immunology, and microbial genetics have come together to create the first international consensus definition of fermented foods. Their paper, published in Nature Reviews Gastroenterology & Hepatology, defines fermented foods as: "foods made through desired microbial growth and enzymatic conversions of food components".

The authors take care to note the difference between probiotics and the live microbes associated with fermented foods. The word 'probiotic', they say, only applies in special cases where the fermented food retains live microorganisms at the time of consumption, and only when the microorganisms are defined and shown to provide a health benefit, as demonstrated in a scientific study.

"Many people think fermented foods are good for health--and that may be true, but the scientific studies required to prove it are limited and have mainly focused on certain fermented food types," says first author Maria Marco, Professor in the Department of Food Science and Technology at the University of California, Davis.

Co-author Bob Hutkins, Professor in the Department of Food Science and Technology at University of Nebraska, Lincoln--who has authored a well-known academic textbook on fermented foods--says, "We created this definition to cover the thousands of different types of fermented foods from all over the world, as a starting point for further investigations into how these foods and their associated microbes affect human health."

The consensus panel discussion was organized in 2019 by the International Scientific Association for Probiotics and Prebiotics (ISAPP), a non-profit organization responsible for the published scientific consensus definitions of both probiotics (in 2014) and prebiotics (in 2017).

Mary Ellen Sanders, Executive Science Officer of ISAPP, says, "To date, different people have had different ideas of what constitutes a fermented food. The new definition provides a clear concept that can be understood by the general public, industry members and regulators."

Currently, evidence for the positive health effects of fermented foods has relied more on epidemiological and population-based studies and less on randomized controlled trials. The authors expect that, in the years ahead, scientists will undertake more hypothesis-driven research on how different fermented foods from around the globe--derived from dairy products, fruit, vegetables, grains, and even meats--affect human physiology and enhance human health.

Credit: 
International Scientific Association for Probiotics and Prebiotics

Mechanophores: Making polymer crystallization processes crystal clear

image: The fluorescence of radicals generated from tetraarylsuccinonitrile enables visualization of crystallization. Fluorescence microscopy observations of crystallization-induced mechanofluorescence allow precise identification of the stress location and in-depth clarification of the polymer crystallization process.

Image: 
Tokyo Tech

Manufacturers today produce highly specialized materials called polymers for a wide array of uses. Their versatile properties suit them to many purposes, from construction, where high tensile strength and durability matter, to plastic bags and packaging, which call for lightweight, flexible materials such as nylon or polyethylene.

These differences in properties stem from the polymers' internal structure. Polymers are made up of long chains of smaller sub-units, called "monomers." Crystallization occurs when crystalline polymers are melted and then cooled down slowly, which enables the chains to organize themselves into neatly arranged plates.

Depending on the degree and location of crystallization, this process gives valuable properties to the polymers, including flexibility, heat conductivity, and strength. However, if not properly controlled, crystallization can also weaken the material, putting undue stress on the polymer chain. This is especially problematic when polymers are subjected to extreme conditions, such as freezing temperatures or intense pressure.

To guarantee optimal performance, we need to predict how a given polymer will react to mechanical stress and to what degree crystallization contributes to this response. However, scientists know very little about the intricate forces at play during crystallization, having never been able to observe them directly or measure them accurately without destroying the material first.

Based on recent advancements in polymer science, a research group led by Professor Hideyuki Otsuka of Tokyo Tech has been working on a method to visualize polymer crystallization in real time. In a recent study published in Nature Communications, they used highly reactive molecules, called radical-type "mechanophores," embedded in the polymer structures. Radical-type mechanophores are sensitive to mechanical stress and readily break down into two equivalent radical species, which act as a probe revealing when and how stress is applied. In this case, to examine the mechanical forces at play during crystallization, they used a radical-type mechanophore called "TASN", which breaks down and emits fluorescence when subjected to mechanical stress.

The team had already used similar molecules previously, showing that they could be used to visualize and evaluate the degree of mechanical stress within a polymer material. In the current study, they used a similar method to observe the crystallization of a polymer. As the crystals formed, the mechanical forces caused the mechanophores in the polymer's structure to dissociate into smaller, pink-colored radicals with a characteristic yellow fluorescence, enabling the researchers to directly observe the process. Because the fluorescence is highly visible, the researchers were able to measure the emitted wavelengths to determine the exact rate of crystallization, as well as its extent and location, even in three dimensions within the polymer material.

Prof Otsuka explains the significance of this finding: "The direct visualization of polymer crystallization offers unprecedented insight into crystal growth processes." Indeed, this method enables manufacturers to test polymer materials for specific mechanical properties during crystallization. The researchers believe that their study will enable the industrial optimization of polymer materials by controlling the crystallization process to obtain desired properties. Ultimately, Prof Otsuka concludes, this could "lead to design guidelines for advanced polymer materials."

Credit: 
Tokyo Institute of Technology

Uncovering how grasslands changed our climate

Grasslands are managed worldwide to support livestock production, while remaining natural or semi-natural ones provide critical services that contribute to the wellbeing of both people and the planet. Human activities are however causing grasslands to become a source of greenhouse gas emissions rather than a carbon sink. A new study uncovered how grasslands used by humans have changed our climate over the last centuries.

Grasslands are the most extensive terrestrial biome on Earth and are critically important for animal forage, biodiversity, and ecosystem services. They absorb and release carbon dioxide (CO2), and emit methane (CH4) from grazing livestock and nitrous oxide (N2O) from soils, especially when manure or mineral fertilizers are introduced. Little is known, however, about how the fluxes of these three greenhouse gases from managed and natural grasslands worldwide have contributed to climate change in the past, and about the role of managed pastures versus natural or very sparsely grazed grasslands.

To address this knowledge gap, an international research team quantified the changes in carbon storage and greenhouse gas fluxes in natural and managed grasslands between 1750 and 2012 in their study published in Nature Communications. The study's comprehensive estimates of global grasslands' contribution to past climate change illustrate the important climate cooling service provided by sparsely grazed areas, as well as the growing contribution to warming from rapidly increasing livestock numbers and more intensive management, which bring higher CH4 and N2O emissions and increasingly determine the contemporary net climate effect of the grassland biome.

"We built and applied a new spatially explicit global grassland model that includes mechanisms of soil organic matter and plant productivity changes driven by historical shifts in livestock and the reduction of wild grazers in each region. This model is one of the first to simulate the regional details of land use change and degradation from livestock overload," explains Jinfeng Chang who led this study at IIASA and is now based at Zhejiang University in China. "We also looked at the effect of fires, and soil carbon losses by water erosion; CH4 emissions from animals; N2O emissions from animal excrement, manure, and mineral fertilizer applications; and atmospheric nitrogen deposition."

The study shows that emissions of CH4 and N2O from grasslands have increased by a factor of 2.5 since 1750 due to increased emissions from livestock, which have more than compensated for the reduced emissions from the shrinking number of wild grazers. The net carbon sink effect of grasslands worldwide - in other words, the ability of grasslands to absorb more carbon and pack it in the soil - was estimated to have intensified over the last century, but mainly over sparsely grazed and natural grasslands. Conversely, over the last decade, grasslands intensively managed by humans have become a net source of greenhouse gas emissions, with emission levels similar to those of global croplands, which represent a large source of greenhouse gases.

"Our results show that the different human activities that have affected grasslands have shifted the balance of greenhouse gas removals and emissions more towards warming in intensively exploited pastures, and more towards cooling in natural and semi-natural systems. Coincidently, until recently the two types of grasslands have almost been canceling each other out," notes coauthor Thomas Gasser from IIASA. "However, the recent trends we see towards the expansion of pasture land and higher livestock numbers lead us to expect that global grasslands will accelerate climate warming if better policies are not put in place to favor soil carbon increases, stop deforestation for ranching, and develop climate-smart livestock production systems."

According to the authors, the cooling services provided by sparsely grazed or wild grasslands make it clear that countries should assess not only the greenhouse gas budgets of their managed pastures (as specified in the current national greenhouse gas reporting rules of the UN's Framework Convention on Climate Change), but also the sinks and sources of sparsely grazed rangelands, steppes, tundra, and wild grasslands. Full greenhouse gas reporting for each country could facilitate the assessment of progress towards the goals of the Paris Agreement and better link national greenhouse gas budgets to the observed growth rates of emissions in the atmosphere.

"In the context of low-warming climate targets, the mitigating or amplifying role of grasslands will depend on a number of aspects. This includes future changes in grass-fed livestock numbers; the stability of accumulated soil carbon in grasslands; and whether carbon storage can be further increased over time or if it will saturate, as observed in long-term experiments," concludes Philippe Ciais, a study coauthor from the Laboratory for Sciences of Climate and Environment (LSCE).

Credit: 
International Institute for Applied Systems Analysis

Researchers uncover a potential treatment for an aggressive form of lung cancer

image: The image depicts a non-small cell lung cancer tumor surrounded by metabolites involved in the hexosamine biosynthesis pathway.

Image: 
Elizabeth Lieu

DALLAS - Jan. 5, 2021 - Researchers at the Children's Medical Center Research Institute at UT Southwestern (CRI) have discovered a new metabolic vulnerability in a highly aggressive form of non-small cell lung cancer (NSCLC). These findings could pave the way for new treatments for patients with mutations in two key genes - KRAS and LKB1. Patients whose tumors contain both of these mutations, known as KL tumors, have poor outcomes and usually do not respond to immunotherapy.

"We used to think that most tumors rely on the same handful of metabolic pathways to grow, but we've learned over the last decade that this is an oversimplification. Instead, different tumor subclasses have particular metabolic needs arising from mutations in key genes. Understanding how specific combinations of mutations promote tumor growth and metastasis may allow us to design tailored therapies for patients," says Ralph DeBerardinis, M.D., Ph.D., a professor at CRI and a Howard Hughes Medical Institute investigator.

While mutations in either KRAS or LKB1 can alter metabolism individually, less is known about the metabolic needs when both genes are mutated in the same tumor. To uncover new metabolic vulnerabilities, the scientists compared metabolic properties of KL tumors genetically engineered in mice to tumors containing different mutations and to the normal lung. In the study, published recently in Nature Metabolism, they discovered that the hexosamine biosynthesis pathway (HBP) is activated in KL tumors. These findings were consistent with previous research in the DeBerardinis lab that showed KL cells reprogram carbon and nitrogen metabolism in ways that promote their growth but increase their sensitivity to particular metabolic inhibitors.

The HBP allows cells to modify proteins through a process called glycosylation, which facilitates protein trafficking and secretion. The high rate of protein production that fuels KL tumor growth is thought to require activation of the HBP. In order to develop ways to inhibit the HBP, the researchers next identified the enzyme GFPT2 as a key liability in KL tumors. Genetically silencing or chemically inhibiting this enzyme suppressed KL tumor growth in mice, but had little effect on the growth of tumors containing only the KRAS mutation. Altogether, the findings indicate the selective importance of the HBP in KL tumors and suggest that GFPT2 could be a useful target for this aggressive subtype of NSCLC.

"Since no specific inhibitor against GFPT2 exists, our next step is to see if blocking certain steps in the glycosylation pathway could be therapeutically beneficial. Ultimately we are looking for options that can help stop the growth and spread of these aggressive tumors," says Jiyeon Kim, Ph.D., the postdoctoral fellow who led the study with DeBerardinis. Kim is now an assistant professor in the department of biochemistry and molecular genetics at the University of Illinois at Chicago.

Credit: 
UT Southwestern Medical Center

Catalyst transforms plastic waste to valuable ingredients at low temperature

image: Graphical abstract

Image: 
Tamura/Osaka City University

For the first time, researchers have used a novel catalyst process to recycle a type of plastic found in everything from grocery bags and food packaging to toys and electronics into liquid fuels and wax.

The team published their results on Dec. 10 in Applied Catalysis B: Environmental.

"Plastics are essential materials for our life because they bring safety and hygiene to our society," said paper co-authors Masazumi Tamura, associate professor in the Research Center for Artificial Photosynthesis in the Advanced Research Institute for Natural Science and Technology in Osaka City University, and Keiichi Tomishige, professor in the Graduate School of Engineering in Tohoku University. "However, the growth of the global plastic production and the rapid penetration of plastics into our society brought mismanagement of waste plastics, causing serious environmental and biological issues such as ocean pollution."

Polyolefinic plastics -- the most common class of plastics -- have physical properties that make it difficult for a catalyst, the substance responsible for inducing a chemical transformation, to interact directly with the polymer molecules and cause a change. Current recycling efforts require temperatures of at least 573 kelvin (about 300 degrees Celsius), and up to 1,173 kelvin (about 900 degrees Celsius). For comparison, water boils at 373.15 kelvin, and the surface of the Sun is about 5,778 kelvin.

The researchers looked to heterogeneous catalysts in an effort to find a reaction that might require a lower temperature to activate. They hypothesized that using a catalyst in a different state of matter from the plastics would drive the reaction at a lower temperature.

They combined ruthenium, a metal in the platinum family, with cerium dioxide, used to polish glass among other applications, to produce a catalyst that caused the plastics to react at 473 kelvin (about 200 degrees Celsius). While still hot by everyday standards, this requires significantly less energy input than other catalyst systems.

According to Tamura and Tomishige, ruthenium-based catalysts have never been reported in the scientific literature as a way to directly recycle polyolefinic plastics.

"Our approach acted as an effective and reusable heterogeneous catalyst, showing much higher activity than other metal-supported catalysts, working even under mild reaction conditions," Tamura and Tomishige said. "Furthermore, a plastic bag and waste plastics could be transformed to valuable chemicals in high yields."

The researchers processed a plastic bag and waste plastics with the catalyst, producing a 92% yield of useful materials, including a 77% yield of liquid fuel and a 15% yield of wax.

"This catalyst system is expected to contribute to not only suppression of plastic wastes but also to utilization of plastic wastes as raw materials for production of chemicals," Tamura and Tomishige said.

Credit: 
Osaka City University

DeepTFactor predicts transcription factors

image: The network architecture of DeepTFactor. An input protein sequence is processed using three parallel subnetworks.

Image: 
KAIST

A joint research team from KAIST and UCSD has developed a deep neural network named DeepTFactor that predicts transcription factors from protein sequences. DeepTFactor will serve as a useful tool for understanding the regulatory systems of organisms, accelerating the use of deep learning for solving biological problems.

A transcription factor is a protein that specifically binds to DNA sequences to control transcription initiation. Analyzing transcriptional regulation enables the understanding of how organisms control gene expression in response to genetic or environmental changes. In this regard, finding the transcription factors of an organism is the first step in the analysis of its transcriptional regulatory system.

Previously, transcription factors have been predicted by analyzing sequence homology with already characterized transcription factors or by data-driven approaches such as machine learning. Conventional machine learning models require a rigorous feature selection process that relies on domain expertise such as calculating the physicochemical properties of molecules or analyzing the homology of biological sequences. Meanwhile, deep learning can inherently learn latent features for the specific task.

A joint research team comprised of Ph.D. candidate Gi Bae Kim and Distinguished Professor Sang Yup Lee of the Department of Chemical and Biomolecular Engineering at KAIST, and Ye Gao and Professor Bernhard O. Palsson of the Department of Biochemical Engineering at UCSD reported a deep learning-based tool for the prediction of transcription factors. Their research paper "DeepTFactor: A deep learning-based tool for the prediction of transcription factors" was published online in PNAS.

Their article reports the development of DeepTFactor, a deep learning-based tool that predicts whether a given protein sequence is a transcription factor using three parallel convolutional neural networks. The joint research team predicted 332 transcription factors of Escherichia coli K-12 MG1655 using DeepTFactor and validated its performance by experimentally confirming the genome-wide binding sites of three predicted transcription factors (YqhC, YiaU, and YahB).
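For readers who want a concrete picture of what "three parallel convolutional neural networks" over a protein sequence can look like, here is a minimal PyTorch-style sketch. It is an illustration only: the one-hot encoding, filter widths and layer sizes are assumptions made for this example, not the published DeepTFactor architecture or its trained weights.

```python
# Sketch of a DeepTFactor-style binary classifier: three parallel 1-D
# convolutional branches scan a one-hot encoded protein sequence with
# different filter widths, and their pooled features feed a small
# fully connected head that outputs a "transcription factor" logit.
import torch
import torch.nn as nn

class ParallelCNNClassifier(nn.Module):
    def __init__(self, n_amino_acids=21, kernel_sizes=(4, 8, 16), n_filters=64):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(n_amino_acids, n_filters, kernel_size=k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),        # global max pool over sequence positions
            )
            for k in kernel_sizes
        ])
        self.head = nn.Sequential(
            nn.Linear(n_filters * len(kernel_sizes), 128),
            nn.ReLU(),
            nn.Linear(128, 1),                  # logit: is this protein a transcription factor?
        )

    def forward(self, x):                        # x: (batch, n_amino_acids, seq_len), one-hot
        features = [branch(x).squeeze(-1) for branch in self.branches]
        return self.head(torch.cat(features, dim=1))

model = ParallelCNNClassifier()
dummy = torch.zeros(2, 21, 1000)                 # two dummy one-hot sequences, length 1000
print(model(dummy).shape)                        # torch.Size([2, 1])
```

A sigmoid over the logit turns it into a probability, which is the kind of per-protein score such a classifier reports.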

The joint research team further used a saliency method to understand the reasoning process of DeepTFactor. The researchers confirmed that even though information on the DNA-binding domains of transcription factors was not explicitly given during the training process, DeepTFactor implicitly learned and used it for prediction. Unlike previous transcription factor prediction tools that were developed only for protein sequences of specific organisms, DeepTFactor is expected to be used in the analysis of the transcription systems of all organisms at a high level of performance.

Distinguished Professor Sang Yup Lee said, "DeepTFactor can be used to discover unknown transcription factors from numerous protein sequences that have not yet been characterized. It is expected that DeepTFactor will serve as an important tool for analyzing the regulatory systems of organisms of interest."

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Impurities boost performance of organic solar cells

image: Adding diquat, a known herbicide, as a dopant enhances the conversion efficiency of high-performance organic solar cells.

Image: 
© 2021 Yuanbao Lin

Sunlight offers a potential solution in the search for an energy source that does not harm the planet, but this depends on finding a way to efficiently turn electromagnetic energy into electricity. Researchers from KAUST have shown how a known herbicide can improve this conversion in organic devices.

While solar cells have traditionally been made from inorganic materials such as silicon, organic materials are starting to break through as an alternative because they are light, flexible and relatively inexpensive to make, even offering the possibility for printable manufacture.

For organic photovoltaics to become a realistic replacement for fossil fuels, they must improve the fraction of incident solar energy they convert into electrical energy. Key to achieving this is choosing the right combination of materials.

Ph.D. student Yuanbao Lin and Thomas Anthopoulos have now achieved this by developing "an efficient molecular dopant to improve the performance and stability of organic solar cells," according to Lin.

Most photovoltaic devices have two important elements: an n-type region and a p-type region, so called because their mobile charge carriers are negative electrons and positive holes, respectively. These carriers are introduced by adding impurities to the semiconductor. An impurity that creates an n-type material is known as a donor, while an acceptor impurity makes a p-type material.

Lin, Anthopoulos and their team used diquat (C12H12Br2N2) as a molecular donor dopant to enhance the conversion efficiency of high-performance organic solar cells.

The dopant was added to two organic material systems that have previously shown excellent photovoltaic performance. In one case, the power conversion efficiency was improved from 16.7 percent to 17.4 percent, while they were able to attain a maximum efficiency of 18.3 percent in the other. These improvements were possible because the molecular diquat dopant increased both the materials' optical absorption and the lifetime of the electrical charges when light was absorbed.

Like many organic n-type dopants, diquat is reactive in an ambient atmosphere; its lack of stability has prevented its use as a molecular dopant so far. However, the KAUST team were able to develop a process that stably created neutral diquat by electrochemically reducing charged diquat, which is stable in air.

This ability makes diquat a promising choice for the next generation of organic solar cells. "The predicted maximum efficiency of the organic solar cell is around 20 percent," explains Lin. "We will try our best to reach this."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Magnets dim natural glow of human cells, may shed light on how animals migrate

video: Researchers at the University of Tokyo irradiated cells with a short burst of blue light and then swept a magnetic field over the cells every four seconds while measuring changes in the intensity of the fluorescent light cells emitted. Statistical analysis of the light intensity in videos like this revealed that the cell's fluorescence dimmed by about 3.5% each time the magnetic field swept over the cells. These results are a crucial step in understanding how animals from birds to butterflies navigate using Earth's geomagnetic field.

Image: 
Video by Ikeya and Woodward, CC BY, originally published in PNAS DOI: 10.1073/pnas.2018043118

Researchers in Japan have made the first observations of biological magnetoreception - live, unaltered cells responding to a magnetic field in real time. This discovery is a crucial step in understanding how animals from birds to butterflies navigate using Earth's magnetic field and addressing the question of whether weak electromagnetic fields in our environment might affect human health.

"The joyous thing about this research is to see that the relationship between the spins of two individual electrons can have a major effect on biology," said Professor Jonathan Woodward from the University of Tokyo, who conducted the research with doctoral student Noboru Ikeya. The results were recently published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS).

Researchers have suspected since the 1970s that because magnets can attract and repel electrons, Earth's magnetic field, also called the geomagnetic field, could influence animal behavior by affecting chemical reactions. When some molecules are excited by light, an electron can jump from one molecule to another and create two molecules with single electrons, known as a radical pair. The single electrons can exist in one of two different spin states. If the two radicals have the same electron spin, their subsequent chemical reactions are slow, while radical pairs with opposite electron spins can react faster. Magnetic fields can influence electron spin states and thus directly influence chemical reactions involving radical pairs.

Over the past 50 years, chemists have identified multiple reactions and specific proteins called cryptochromes that are sensitive to magnetic fields in test tube environments. Biologists have even observed how genetically interfering with cryptochromes in fruit flies and cockroaches can eliminate the insects' ability to navigate according to geomagnetic cues. Other research has indicated that birds' and other animals' geomagnetic navigation is light sensitive. However, no one has previously measured chemical reactions inside a living cell changing directly because of a magnetic field.

Woodward and Ikeya worked with HeLa cells, human cervical cancer cells that are commonly used in research labs, and were specifically interested in their flavin molecules.

Flavins are a subunit of cryptochromes and are themselves a common, well-studied group of molecules that naturally glow, or fluoresce, when exposed to blue light. They are important light-sensing molecules in biology.

When flavins are excited by light, they can either fluoresce or produce radical pairs. This competition means that the amount of flavin fluorescence depends on how quickly the radical pairs react. The University of Tokyo team hoped to observe biological magnetoreception by monitoring cells' autofluorescence while adding an artificial magnetic field to their environment.

Autofluorescence is common in cells, so to isolate flavin autofluorescence, the researchers used lasers to shine light of a specific wavelength onto the cells and then measured the wavelengths of the light that the cells emitted back to ensure that it matched the characteristic values of flavin autofluorescence. Standard magnetic equipment can generate heat, so the researchers took extensive precautions and performed exhaustive control measurements to verify that the only change in the cells' environment was the presence or absence of the magnetic field.

"My goal even as a Ph.D. student has always been to directly see these radical pair effects in a real biological system. I think that is what we've just managed," said Woodward.

The cells were irradiated with blue light and fluoresced for about 40 seconds. Researchers swept a magnetic field over the cells every four seconds and measured changes in the intensity of the fluorescence. Statistical analysis of the visual data from the experiments revealed that the cell's fluorescence dimmed by about 3.5% each time the magnetic field swept over the cells.
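The dimming estimate described above boils down to comparing fluorescence intensity while the field sweeps over the cells with the intensity when the field is absent. The sketch below illustrates that comparison on synthetic data; the frame rate, the field timing within each four-second cycle and the noise level are assumptions, and this is not the authors' actual analysis pipeline.

```python
# Toy version of the dimming analysis: compare mean fluorescence while the
# magnetic field sweeps over the cells with the mean in the field-free part
# of each cycle, then average the fractional dip across sweeps.
import numpy as np

rng = np.random.default_rng(0)
fps = 10                                     # assumed camera frame rate (frames per second)
t = np.arange(0, 40, 1 / fps)                # ~40-second recording, as in the experiment
field_on = (t % 4) < 1                       # field swept over the cells every 4 s, on for ~1 s (assumed)

# Synthetic fluorescence trace: ~3.5% dimmer while the field is on, plus noise.
intensity = 1.0 - 0.035 * field_on + rng.normal(0, 0.002, t.size)

dips = []
for start in np.arange(0, 40, 4):
    on = (t >= start) & (t < start + 1)       # frames with the field present
    off = (t >= start + 1) & (t < start + 4)  # reference frames without the field
    dips.append(1.0 - intensity[on].mean() / intensity[off].mean())

print(f"estimated dimming per sweep: {100 * np.mean(dips):.1f}%")  # ~3.5%
```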

The researchers suspect that the blue light excites the flavin molecules to generate radical pairs. The presence of a magnetic field caused more radical pairs to have the same electron spin states, meaning that there were fewer flavin molecules available to emit light. Thus, the cell's flavin fluorescence dimmed until the magnetic field went away.

"We've not modified or added anything to these cells. We think we have extremely strong evidence that we've observed a purely quantum mechanical process affecting chemical activity at the cellular level," Woodward remarked.

Lab experiments and real-world magnetoreception

The experimental magnetic fields were 25 millitesla, which is roughly equivalent to common refrigerator magnets. The magnetic field of the Earth varies by location, but is estimated to be about 50 microtesla, or 500 times weaker than the magnetic fields used in the experiments.

Woodward states that Earth's very weak magnetic field could still have a biologically important influence due to a phenomenon known as the low field effect. Although strong magnetic fields make it difficult for radical pairs to switch between states in which the two electron spins are the same and states in which they are different, weak magnetic fields can have the opposite effect and make the switch easier than when there is no magnetic field.

The authors are now investigating the effect in other types of cells, examining the potential role of the cells' health and surroundings, and testing candidate magnetic receptors, including cryptochromes, directly inside cells. Interpreting any potential environmental or physiological significance of the results will require developing more specialized and highly sensitive equipment to work with much weaker magnetic fields, along with more detailed cellular analysis to connect the magnetic field-sensitive response to specific signaling pathways or other consequences within the cell.

Credit: 
University of Tokyo

Danish and Chinese tongues taste broccoli and chocolate differently

Two studies from the University of Copenhagen show that Danes aren't quite as good as Chinese at discerning bitter tastes. The research suggests that this is related to anatomical differences on the tongues of Danish and Chinese people.

For several years, researchers have known that women are generally better than men at tasting bitter flavours. Now, research from the University of Copenhagen suggests that ethnicity may also play a role in how sensitive a person is to the bitter taste found in, for example, broccoli, Brussels sprouts and dark chocolate. By having test subjects taste the bitter substance PROP, two studies demonstrate that Danish and Chinese people experience this basic taste differently. The reason seems to be related to an anatomical difference on the tongue surfaces of these two groups.

"Our studies show that the vast majority of Chinese test subjects are more sensitive to bitter tastes than the Danish subjects. We also see a link between the prominence of bitter taste and the number of small bumps, known as papillae, on a person's tongue," says Professor Wender Bredie of the University of Copenhagen's Department of Food Science (UCPH FOOD).

A taste of artificial intelligence

Using a new artificial intelligence method, researchers from UCPH FOOD, in collaboration with Chenhao Wang and Jon Sporring of UCPH's Department of Computer Science, analysed the number of mushroom-shaped "fungiform" papillae on the tongues of 152 test subjects, of whom half were Danish and half Chinese.

Fungiform papillae, located at the tip of the tongue, are known to contain a large portion of our taste buds and play a central role in our food and taste experiences. To appreciate the significance of papillae in food preferences across cultures and ethnicities, it is important to learn more about their distribution, size and quantity.

The analysis demonstrated that the Chinese test subjects generally had more of these papillae than the Danish subjects, a result that the researchers believe explains why Chinese people are better at tasting bitter flavours.

However, Professor Bredie emphasizes that larger cohorts need to be examined before any definitive conclusions can be drawn about whether these apparent phenotypical differences between Danes and Chinese hold at the general population level.

More knowledge about differences in taste impressions can be important for food development. According to Professor Bredie:

"It is relevant for Danish food producers exporting to Asia to know that Asian and Danish consumers probably experience tastes from the same product differently. This must be taken into account when developing products."

Danes prefer foods that require a good chew

Professor Wender Bredie points out that genetics are only one of several factors that influence how we experience food. Another significant factor has to do with our preferences -- including texture. Think, for example, of the difference between munching on crispy potato chips from a newly opened bag, compared to eating softened ones from a bag opened the day before. Here, many Danes would probably prefer the crispy ones over the soft ones, even if the taste is similar. According to the UCPH studies, there seems to be a difference between the Danish and Chinese test subjects on this point as well.

While the vast majority of Chinese subjects (77%) prefer foods that don't require much chewing, the opposite holds true for the Danish subjects. Among the Danes, 73% prefer eating foods with a harder consistency that require biting and chewing - rye bread and carrots, for example.

The reason for this difference remains unknown, but the researchers suspect that it stems from differences in food culture and the ways in which we learn to eat. The studies do not point to tongue shape as making any difference.

THE NEW METHOD:

Because the counting of tongue papillae is usually done manually, and a tongue has hundreds of tiny fungiform papillae, it is a demanding job in which mistakes are easily made.

The new method, based on artificial intelligence and developed by image analysis experts Chenhao Wang and Jon Sporring of the Department of Computer Science, automates the counting and delivers precision. Using an algorithm, they designed a tongue-coordinate system that can map papillae on individual tongues using image recognition.
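As a rough illustration of the counting task being automated here, the sketch below detects small, roughly circular bumps in a tongue photograph using generic blob detection from scikit-image. It is not the UCPH method, which relies on a trained model and the tongue-coordinate system described above; the file name, size range and detection threshold are placeholders.

```python
# Toy papillae counter: treat fungiform papillae as small circular blobs
# in a grayscale tongue image and count them with a Laplacian-of-Gaussian
# blob detector. Parameters and input file are purely illustrative.
import numpy as np
from skimage import io, color, feature

image = io.imread("tongue.jpg")          # hypothetical photograph of a tongue tip
gray = color.rgb2gray(image)

# Detect blobs within an assumed size range (sigma values are illustrative).
blobs = feature.blob_log(gray, min_sigma=3, max_sigma=12, num_sigma=10, threshold=0.05)

print(f"detected {len(blobs)} candidate papillae")
for y, x, sigma in blobs[:5]:
    print(f"  at ({x:.0f}, {y:.0f}), approx. radius {sigma * np.sqrt(2):.1f} px")
```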

Credit: 
University of Copenhagen - Faculty of Science

Vaping combined with smoking is likely as harmful as smoking cigarettes alone

DALLAS, Jan. 4, 2021 -- Smoking traditional cigarettes in addition to using e-cigarettes results in harmful health effects similar to smoking cigarettes exclusively, according to new research published today in the American Heart Association's flagship journal Circulation.

Smoking, a well-known cause of cardiovascular disease and death, appears to be on the decline. While the use of e-cigarettes, known as vaping, is increasingly popular, there has been limited research on how vaping affects the body.

In a large data analysis of more than 7,100 U.S. adults ages 18 and older, researchers studied the association of cigarette smoking and e-cigarette use with biomarkers of inflammation and oxidative stress. Inflammation and oxidative stress are key contributors to smoking-induced cardiovascular disease, and their biomarkers have been shown to be predictors of cardiovascular events, including heart attack and heart failure.

"This study is among the first to use nationally representative data to examine the association of cigarette and e-cigarette use behaviors with biomarkers of inflammation and oxidative stress," said Andrew C. Stokes, Ph.D., assistant professor of global health at Boston University School of Public Health in Boston and first author of the study. "Given the lag time between tobacco exposure and disease symptoms and diagnosis, identifying the association between e-cigarette use and sensitive biomarkers of subclinical cardiovascular injury is necessary for understanding the long-term effects of newer tobacco products such as e-cigarettes."

Researchers used data from the Population Assessment of Tobacco and Health (PATH) Study, a nationally representative longitudinal cohort in the U.S. This study's analysis was restricted to adults 18 years and older from Wave 1 of the survey, which was administered from 2013 to 2014 and included the collection of blood and urine samples.

Five biomarkers of inflammation and oxidative stress were analyzed. Participants were slotted into four categories based on the use of traditional cigarettes and e-cigarettes within a 30-day period: nonuse of cigarettes and e-cigarettes; exclusive vaping; exclusive cigarette smoking; and dual use of cigarettes and e-cigarettes. To test the robustness of initial results, the scientists repeated the analyses in subgroups of respondents, including those with no past 30-day use of any other tobacco products.
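As a simple illustration of that grouping, the sketch below assigns a respondent to one of the four exposure categories from two yes/no answers about the past 30 days. The field names are hypothetical, not the PATH survey's actual variable names.

```python
# Toy exposure grouping: map past-30-day cigarette and e-cigarette use
# onto the four categories used in the analysis described above.
def exposure_group(smoked_past_30_days: bool, vaped_past_30_days: bool) -> str:
    if smoked_past_30_days and vaped_past_30_days:
        return "dual use of cigarettes and e-cigarettes"
    if smoked_past_30_days:
        return "exclusive cigarette smoking"
    if vaped_past_30_days:
        return "exclusive vaping"
    return "nonuse of cigarettes and e-cigarettes"

print(exposure_group(True, True))    # dual use of cigarettes and e-cigarettes
print(exposure_group(False, False))  # nonuse of cigarettes and e-cigarettes
```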

Of the study participants, more than half (58.6%) did not use cigarettes or e-cigarettes; nearly 2% vaped exclusively; about 30% smoked cigarettes exclusively; and about 10% used e-cigarettes and traditional cigarettes.

The analysis found:

Participants who vaped exclusively showed a similar inflammatory and oxidative stress profile as people who did not smoke cigarettes or use e-cigarettes.

Participants who smoked exclusively and those who used cigarettes and e-cigarettes had higher levels across all biomarkers assessed compared to participants who did not use cigarettes or e-cigarettes.

Compared to participants who smoked exclusively, those who vaped exclusively had significantly lower levels of almost all inflammatory and oxidative stress biomarkers.

However, participants who used cigarettes and e-cigarettes had levels of all inflammatory and oxidative stress biomarkers comparable to those who smoked exclusively.

"This study adds to the limited body of research we have on biologic measures in those using e-cigarettes," said study co-author Rose Marie Robertson, M.D., FAHA, deputy chief science and medical officer of the American Heart Association and co-director of the Association's National Institutes of Health/Food and Drug Administration-funded Tobacco Center of Regulatory Science, which supported the study. "I believe it has an important message for individuals who may believe using e-cigarettes while continuing to smoke some combustible cigarettes reduces their risk. This commonly-seen pattern of dual use was not associated with lower levels of inflammatory markers, and thus is not likely to offer a reduction in risk in this specific area."

Researchers also conducted extensive analyses to test the results against the influence of related behaviors such as the use of other tobacco products and marijuana, and secondhand smoke exposure. The results remained consistent across the additional analyses.

The large population sample of this study makes the findings applicable to the U.S. adult population. One of the study's limitations is its cross-sectional approach of looking at population data at one point in time, which makes it impossible to establish causality.

Researchers said the study highlights the importance of continued public education regarding the risks of cigarette smoking and the failure of dual use to reduce risk.

"The results could be used to counsel patients about the potential risk of using both cigarettes and e-cigarettes," Stokes said. "Some people who smoke cigarettes, pick up e-cigarette use to reduce the frequency with which they smoke cigarettes. They often become dual users of both products rather than switching entirely from one to the other. If e-cigarettes are used as a means to quit smoking, cigarette smoking should be completely replaced and a plan to ultimately attain freedom from all tobacco products should be advised."

Credit: 
American Heart Association

Brain cancer linked to tissue healing

image: A brain scan showing a top down view of a cross-section with a glioblastoma tumour highlighted in red.

Image: 
Hellerhoff, Wikimedia Commons

The healing process that follows a brain injury could spur tumour growth when new cells generated to replace those lost to the injury are derailed by mutations, Toronto scientists have found. A brain injury can be anything from trauma to infection or stroke.

The findings were made by an interdisciplinary team of researchers from the University of Toronto, The Hospital for Sick Children (SickKids) and the Princess Margaret Cancer Centre who are also on the pan-Canadian Stand Up To Cancer Canada Dream Team that focuses on a common brain cancer known as glioblastoma.

"Our data suggest that the right mutational change in particular cells in the brain could be modified by injury to give rise to a tumour," says Dr. Peter Dirks, Dream Team leader who is the Head of the Division of Neurosurgery and a Senior Scientist in the Developmental and Stem Cell Biology program at SickKids.

Gary Bader, a professor of molecular genetics in the Donnelly Centre for Cellular and Biomolecular Research at U of T's Temerty Faculty of Medicine and Dr. Trevor Pugh, Senior Scientist at the Princess Margaret, also led the research which has been published today in the journal Nature Cancer.

The findings could lead to new therapies for glioblastoma patients, who currently have limited treatment options and an average survival of 15 months after diagnosis.

"Glioblastoma can be thought of as a wound that never stops healing," says Dirks. "We're excited about what this tells us about how cancer originates and grows and it opens up entirely new ideas about treatment by focusing on the injury and inflammation response."

The researchers applied the latest single-cell RNA sequencing and machine learning technologies to map the molecular make-up of the glioblastoma stem cells (GSCs), which Dirks' team previously showed are responsible for tumour initiation and recurrence after treatment.

They found new subpopulations of GSCs that bear the molecular hallmarks of inflammation and are comingled with other cancer stem cells inside patients' tumours. This suggests that some glioblastomas start to form when the normal tissue healing process, which generates new cells to replace those lost to injury, gets derailed by mutations, possibly even many years before patients become symptomatic, Dirks said.

Once a mutant cell becomes engaged in wound healing, it cannot stop multiplying because the normal controls are broken and this spurs tumour growth, according to the study.

"The goal is to identify a drug that will kill the glioblastoma stem cells," says Bader, whose graduate student Owen Whitley contributed to the computational data analysis "But we first needed to understand the molecular nature of these cells in order to be able to target them more effectively."

The team collected GSCs from 26 patients' tumours and expanded them in the lab to obtain sufficient numbers of these rare cells for analysis. Almost 70,000 cells were analyzed by single-cell RNA sequencing which detects what genes are switched on in individual cells, an effort led by Laura Richards, a graduate student in Pugh's lab.

The data confirmed extensive disease heterogeneity, meaning that each tumour contains multiple subpopulations of molecularly distinct cancer stem cells, making recurrence likely as existing therapy can't wipe out all the different subclones.

A closer look revealed that each tumour is in one of two distinct molecular states -- termed "Developmental" and "Injury Response" -- or somewhere on a gradient between the two.

The developmental state is a hallmark of the glioblastoma stem cells and resembles that of the rapidly dividing stem cells in the growing brain before birth.

But the second state came as a surprise. The researchers termed it "Injury Response" because it showed an upregulation of immune pathways and inflammation markers, such as interferon and TNFalpha, which are indicative of wound healing processes.

These immune signatures were only picked up thanks to the new single-cell technology after being missed by older methods for bulk cell measurements.

Meanwhile, experiments led by Stephane Angers' lab at the Leslie Dan Faculty of Pharmacy established that the two states are vulnerable to different types of gene knockouts, revealing a swathe of therapeutic targets linked to inflammation that had not been previously considered for glioblastoma.

Finally, the relative comingling of the two states was found to be patient-specific, meaning that each tumour was biased either toward the developmental or the injury response end of the gradient. The researchers are now looking to target these biases for tailored therapies.
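To make the idea of a per-tumour bias along the Developmental-to-Injury Response gradient concrete, the sketch below scores each cell by the difference between the average expression of two gene signatures and then averages the scores across cells. The expression matrix and gene lists are placeholders, and this is not the scoring method used in the study.

```python
# Toy per-cell signature scoring: positive scores lean toward the
# "Injury Response" state, negative toward "Developmental".
import numpy as np
import pandas as pd

# Hypothetical log-normalized expression matrix: rows = single cells, columns = genes.
rng = np.random.default_rng(1)
genes = ["SOX2", "OLIG2", "DLL3", "TNF", "IFIT1", "CD44"]
expr = pd.DataFrame(rng.gamma(2.0, 1.0, size=(1000, len(genes))), columns=genes)

developmental_sig = ["SOX2", "OLIG2", "DLL3"]   # placeholder gene lists, not the published signatures
injury_response_sig = ["TNF", "IFIT1", "CD44"]

score = expr[injury_response_sig].mean(axis=1) - expr[developmental_sig].mean(axis=1)
print(f"tumour bias (mean per-cell score): {score.mean():+.2f}")
print(f"fraction of cells leaning Injury Response: {(score > 0).mean():.0%}")
```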

"We're now looking for drugs that are effective on different points of this gradient", says Pugh, who is also the Director of Genomics at the Ontario Institute for Cancer Research. "There's a real opportunity here for precision medicine-- to dissect patients' tumours at the single cell level and design a drug cocktail that can take out more than one cancer stem cell subclone at the same time."

Credit: 
University of Toronto

A high order for a low dimension

image: Subtle changes in the arrangement of component materials can have a stronger knock-on effect to the bulk material than was previously thought.

Image: 
© 2020 Kondo et al

Spintronics refers to a suite of physical systems that may one day replace many electronic systems. To realize this generational leap, material components that confine electrons in one dimension are highly sought after. For the first time, researchers have created such a material in the form of a special bismuth-based crystal known as a higher-order topological insulator.

To create spintronic devices, new materials need to be designed that take advantage of quantum behaviors not seen in everyday life. You are probably familiar with conductors and insulators, which permit and restrict the flow of electrons, respectively. Semiconductors are common but less familiar to some; these usually insulate, but conduct under certain circumstances, making them ideal miniature switches.

For spintronic applications, a new kind of electronic material is required and it's called a topological insulator. It differs from these other three materials by insulating throughout its bulk, but conducting only along its surface. And what it conducts is not the flow of electrons themselves, but a property of them known as their spin or angular momentum. This spin current, as it's known, could open up a world of ultrahigh-speed and low-power devices.

However, not all topological insulators are equal: Two kinds, so-called strong and weak, have already been created, but have some drawbacks. As they conduct spin along their entire surface, the electrons present tend to scatter, which weakens their ability to convey a spin current. But since 2017, a third kind of topological insulator called a higher-order topological insulator has been theorized. Now, for the first time, one has been created by a team at the Institute for Solid State Physics at the University of Tokyo.

"We created a higher-order topological insulator using the element bismuth," said Associate Professor Takeshi Kondo. "It has the novel ability of being able to conduct a spin current along only its corner edges, essentially one-dimensional lines. As the spin current is bound to one dimension instead of two, the electrons do not scatter so the spin current remains stable."

To create this three-dimensional crystal, Kondo and his team stacked two-dimensional slices of crystal, each one atom thick, in a certain way. For strong or weak topological insulators, crystal slices in the stack are all oriented the same way, like playing cards face down in a deck. But to create the higher-order topological insulator, the orientation of the slices was alternated: the metaphorical playing cards were placed face up, then face down, repeatedly throughout the stack. This subtle change in arrangement makes a huge difference in the behavior of the resultant three-dimensional crystal.

The crystal layers in the stack are held together by a quantum mechanical force called the van der Waals force. This is one of the rare kinds of quantum phenomena that you actually do see in daily life, as it is partly responsible for the way that powdered materials clump together and flow the way they do. In the crystal, it adheres the layers together.

"It was exciting to see that the topological properties appear and disappear depending only on the way the two-dimensional atomic sheets were stacked," said Kondo. "Such a degree of freedom in material design will bring new ideas, leading toward applications including fast and efficient spintronic devices, and things we have yet to envisage."

Credit: 
University of Tokyo

Pollutants rapidly changing the waters near Ieodo Island

Red tides, in which the sea turns red, occur frequently in the coastal waters around Korea. Red tide is a phenomenon in which phytoplankton proliferate as nutrients or sewage flow into seawater, making it appear red. This not only causes damage to the fisheries industry but also affects the marine ecosystem.

Professor Kitack Lee and Ph.D. candidate Ji-Young Moon (first author) of POSTECH's Division of Environmental Science and Engineering have confirmed that the inflow of nitrogen pollutants since the 1980s has disturbed the nutrient balance in the northeast Asian waters and is changing the species of phytoplankton responsible for red tide. The team also found that the fastest change in the oceanic conditions caused by this inflow of nitrogen pollutants is happening in the waters near the Ieodo Ocean Research Station, located downstream of the Changjiang River of China. These findings were recently introduced in the journal Limnology and Oceanography.

The Northeast Asia region, including Korea, China, and Japan, has seen an increase in nitrogen pollutants because of rapid population growth and industrialization in modern times. As these nitrogen pollutants flow into the sea during floods and monsoons, northeast Asian waters have experienced an unexpected, massive fertilization. Many scientists have warned that these nitrogen pollutants not only increase harmful algal blooms in coastal waters, but also lead to deterioration of water quality and changes in the composition of marine ecosystem species.

The researchers analyzed nutrient concentration data and the occurrence of red tides in the East China Sea and the coastal waters of the Korean Peninsula over the past 40 years, going back to the 1980s. The results show that a wide swath of the ocean in this region has changed from being nitrogen deficient to phosphorus (P) deficient, while at the same time the concentration of nitrate (N) has become higher than that of silicate (Si). In particular, it has been confirmed that the major phytoplankton in Korea's coastal waters are also changing from diatoms to dinoflagellates.
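As an illustration of how such a shift in the nutrient regime can be read off from measured concentrations, the sketch below compares dissolved nutrient ratios against commonly used Redfield-type reference values (roughly N:P = 16, and Si:N near 1 for diatom growth). The concentrations are hypothetical and the thresholds are simplifications, not the study's methodology.

```python
# Toy classification of a nutrient regime from dissolved nutrient
# concentrations (micromoles per litre). Thresholds follow the rough
# reference ratios N:P = 16 and Si:N = 1; the input values are invented.
def classify_regime(nitrate: float, phosphate: float, silicate: float) -> str:
    parts = []
    parts.append("P-deficient" if nitrate / phosphate > 16 else "N-deficient")
    parts.append("Si low relative to N (favors dinoflagellates over diatoms)"
                 if silicate < nitrate else "Si replete (favors diatoms)")
    return "; ".join(parts)

print(classify_regime(nitrate=20.0, phosphate=0.8, silicate=12.0))
# -> P-deficient; Si low relative to N (favors dinoflagellates over diatoms)
```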

The research team explained that this is direct evidence that the nutrient regime in the northeast Asian marginal seas is changing as the amount of nitrogen pollutants increases, which is in turn shifting the phytoplankton species composition and disrupting the marine ecosystem.

At the same time, the team verified that these oceanic changes driven by the inflow of nitrogen pollutants are appearing fastest in the waters around the Ieodo Ocean Research Station.

"Since the changes in the waters near Ieodo Ocean Research Station will soon occur in the waters near the Korean Peninsula, long-term observation of the concentration of nutrient in the coastal waters and changes in the ecosystem are necessary," proposed Professor Kitack Lee who led the study. He added, "The findings can be used as important scientific evidence for establishing environmental policies, such as setting nitrogen pollutant emissions."

Credit: 
Pohang University of Science & Technology (POSTECH)

Largest, most diverse ever study of prostate cancer genetics brings disparities into focus

Some racial and ethnic groups suffer from common ailments more often, and fare worse, than others. Prostate cancer is one disease where such health disparities occur: risk for the disease is about 75 percent higher, and prostate cancer is more than twice as deadly, in Blacks compared to whites. Yet whites are often overrepresented as research participants, making these differences difficult to understand and, ultimately, address.

With this problem in mind, scientists at the USC Center for Genetic Epidemiology and the Institute for Cancer Research, London, led a study that brings together data from the majority of genomic prostate cancer studies globally. Including more than 200,000 men of European, African, Asian and Hispanic ancestry from around the world, the study is the largest, most diverse genetic analysis ever conducted for prostate cancer -- and possibly for any other cancer.

The paper appears today in Nature Genetics.

The study's authors identified 86 previously undiscovered genetic variations that increase risk for prostate cancer, bringing the total number of risk loci for prostate cancer to 269. Applying a model that assesses prostate cancer risk based on the interplay of these genetic factors, the researchers showed that men of African ancestry inherit about twice the prostate cancer risk on average compared to men of European ancestry, while men of Asian ancestry inherit about three-quarters the risk of their white counterparts -- evidence that genetics play some part in the differences in how often the cancer occurs in different racial groups.
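At its core, the risk model described above is a polygenic risk score: a weighted sum of how many risk alleles a man carries at each known risk variant. The sketch below shows that calculation with made-up variant IDs, weights and genotypes; the study's actual 269 loci and their effect sizes are reported in the Nature Genetics paper.

```python
# Toy polygenic risk score: sum of (effect size x risk-allele count) over variants.
import numpy as np

# Hypothetical per-variant log-odds weights (effect sizes).
weights = {"rs0001": 0.12, "rs0002": 0.05, "rs0003": 0.20}

# One man's genotype: number of risk alleles (0, 1 or 2) at each variant.
genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

score = sum(weights[v] * genotype[v] for v in weights)
print(f"genetic risk score (log-odds units): {score:.2f}")
print(f"odds relative to a zero-score baseline: {np.exp(score):.2f}x")
```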

This research is also a step toward applying precision medicine to early detection.

"Our long-term objective is to develop a genetic risk score that can be used to determine a man's risk of developing prostate cancer," said corresponding author Christopher Haiman, ScD, professor of preventive medicine at the Keck School of Medicine of USC and director of the USC Center for Genetic Epidemiology. "Men at higher risk may benefit from earlier and more frequent screening, so the disease can be identified when it's more treatable."

Study tackles health disparities

Praise for the study's potential in increasing health equity came from Jonathan W. Simons, MD, president and chief executive officer of the Prostate Cancer Foundation. The foundation funds Haiman's other work leading the RESPOND initiative exploring the disease among African American men.

"PCF believes that Dr. Haiman's research findings will lead to more effective prostate cancer precision screening strategies for men of West African ancestry," Simons said. "PCF is certain that identification of these very high-risk individuals will make a positive impact on this significant health care disparity."

Haiman and his colleagues used genomic datasets from countries including the U.S., the UK, Sweden, Japan, and Ghana to compare 107,247 men with prostate cancer to a control group comprising 127,006 men. By examining a spectrum of races and ethnicities, the study's authors aim to make the genetic risk score more useful for more people.

"We not only found new markers of risk, but also demonstrated that, by combining genetic information across populations, we were able to identify a risk profile that can be applied across populations," said Haiman. "This emphasizes the value of adding multiple racial and ethnic populations into genetic studies."

Risk score could contribute to better screening

Today's screening guidelines for prostate cancer suggest that those 55 and older with average risk can choose to take the prostate-specific antigen (PSA) test in consultation with their physicians. High PSA levels are associated with prostate cancer, but the PSA test tends to detect slow-growing tumors. With widespread use, it too often leads to unnecessary treatment.

The PSA test's value as a screening tool would grow if it were deployed selectively to monitor people found to be at high risk for prostate cancer -- which is where the genetic risk score could come into play. Those at particularly high risk might even begin screening before age 55.

In order to translate the current research findings into better early detection, a large-scale clinical trial would be needed.

"Most important, unlike previous screening trials, this one would need to be more representative of the diversity we see in the world," Haiman said. "No population should get left behind."

Credit: 
Keck School of Medicine of USC

Novel film that evaporates sweat six times faster and holds 15 times more moisture

image: An NUS research team led by Assistant Professor Tan Swee Ching (seated, left) and Professor Ding Jun (seated, right) has developed a novel film that is extremely effective in evaporating sweat from our skin. Promising applications include shoe insoles and linings, as well as underarm pads for sweat absorption.

Image: 
National University of Singapore

A team of researchers from the National University of Singapore (NUS) has created a novel film that is very effective at evaporating sweat from our skin to keep us cool and comfortable when we exercise. The moisture harvested from human sweat can also be used to power wearable electronic devices such as watches, fitness trackers, and more.

Sweating is a natural process for our body to reduce thermal stress. "Sweat is mostly composed of water. When water is evaporated from the skin surface, it lowers the skin temperature and we feel cooler. In our new invention, we created a novel film that is extremely effective in evaporating sweat from our skin and then absorbing the moisture from sweat. We also take this one step further - by converting the moisture from sweat into energy that could be used to power small wearable devices," explained research team leader Assistant Professor Tan Swee Ching, who is from the NUS Department of Material Science and Engineering.

The main components of the novel thin film are two hygroscopic chemicals - cobalt chloride and ethanolamine. Besides being extremely moisture-absorbent, the film can rapidly release water when exposed to sunlight, and it can be 'regenerated' and reused more than 100 times.

To make full use of the absorbed sweat, the NUS team has also designed a wearable energy harvesting device comprising eight electrochemical cells (ECs), using the novel film as the electrolyte. Each EC can generate about 0.57 volts of electricity upon absorbing moisture. The overall energy harvested by the device is sufficient to power a light-emitting diode. This proof-of-concept demonstration illustrates the potential of battery-less wearables powered using human sweat.
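For a sense of scale, and assuming the eight cells are connected in series (the summary above does not specify the wiring), their voltages would simply add:

$$8 \times 0.57\ \text{V} \approx 4.6\ \text{V},$$

which sits comfortably above the roughly 2 to 3 V forward voltage of a typical light-emitting diode, consistent with the proof-of-concept demonstration described above.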

This technological breakthrough was reported in the September print issue of the scientific journal Nano Energy.

Absorbing moisture for personal comfort

Conventional hygroscopic materials such as zeolites and silica gels have low water uptake and bulk solid structures, making them unsuitable for absorbing moisture from sweat evaporation. In comparison, the new moisture-absorbing film developed by NUS researchers takes in 15 times more moisture and does this 6 times faster than conventional materials.

In addition, this innovative film changes colour upon absorbing moisture, from blue to purple, and finally pink. This feature can be used as an indicator of the degree of moisture absorption.

The NUS team packaged the film into breathable and waterproof polytetrafluoroethylene (PTFE) membranes, which are flexible and commonly used in clothing, and successfully demonstrated the application of the moisture-absorbing film in underarm pads, shoe linings and shoe insoles.

Asst Prof Tan said, "Underarm sweating is embarrassing and frustrating, and this condition contributes to the growth of bacteria and leads to unpleasant body odour. Accumulation of perspiration in the shoes could give rise to health problems such as blisters, calluses, and fungal infections. Using the underarm pad, shoe lining and shoe insole embedded with the moisture-absorbing film, the moisture from sweat evaporation is rapidly taken in, preventing an accumulation of sweat and providing a dry and cool microclimate for personal comfort."

"The prototype for the shoe insole was created using 3D printing. The material used is a mixture of soft polymer and hard polymer, thus providing sufficient support and shock absorption," explained research team co-leader Professor Ding Jun, who is also from the NUS Department of Material Science and Engineering.

The NUS team now hopes to work with companies to incorporate the novel moisture-absorption film into consumer products.

Credit: 
National University of Singapore