Tech

How human genetic data is helping dogs fight cancer

image: Dawn Duval, PhD, and colleagues identify common drivers of human/canine cancers.

Image: 
University of Colorado Cancer Center

Some of what we learn through the compassionate treatment of dogs with cancer goes on to help human patients. Now a study by researchers at the University of Colorado Cancer Center and Colorado State University Flint Animal Cancer Center returns the favor: we already know many of the genetic changes that cause human cancer, and the current study, recently published in the journal Molecular Cancer Therapeutics, sequenced 33 canine cancer cell lines to identify "human" genetic changes that could be driving these canine cancers, possibly helping veterinary oncologists use more human medicines to treat cancer in dogs.

"We're taking what we know from human cancers and applying it in canine cancers to help move the canine cancer treatment forward faster," says Dawn L. Duval, PhD, investigator at CU Cancer Center and assistant professor in the CSU College of Veterinary Medicine and Biomedical Sciences.

The study used whole-exome sequencing, which reads the sequence of a cell's protein-coding genes (leaving out the stretches of the genome that sit silently, without coding for proteins). Humans have about 23,000 genes, with many genetic variations naturally found in each cell. The vast majority of these variations are meaningless; a very few cause cancer. In the current study, the group identified 61 genetic variants in these canine cancer cell lines that match known drivers of human cancer.

"Our goal was to start to connect the dots from genetic variations we were seeing on an analytical, screening level to variants that were truly driving cancer and could be explored as druggable targets," Duval says.

The study went beyond simply identifying these targets.

"We did all this variant finding and and at that point it was all very theoretical. Then we used a molecularly targeted drug - a MEK inhibitor - across the whole panel. Sure enough, what we saw was that canine cell lines with mutations in MAPK pathway genes, matching those known to cause human cancers, were sensitive to the same drugs that we would use with humans," Duval says.

Interestingly, the study could bring benefit full circle. After identifying these human oncogenes in canine cancer cells, the group placed the genes they found into 10 functional categories - "Categories of genes that control things like proliferation, cell cycle, DNA repair, etc.," Duval says. Then, in each of these categories, the group placed a "heat map" of genetic variants not yet known to cause cancer, but suspicious due to their prevalence in these cancer cell lines.

"The goal was to say, for example, here's a BRAF mutation - we know this is a cancer driver and here are all these other genes that had alterations that were grouped with BRAF. So here are genes to look at in the future. It's a way to discover new cancer drivers in dogs, which could potentially be new drivers in humans as well," Duval says.

By determining which canine cancers are most like human cancers, the group may also be able to transfer lessons learned while treating these human-like canine cancers back to the treatment of human patients.

"This would allow us to run drug trials in dogs that can be used to optimize therapies for both species," Duval says.

The goal now is to sequence more tumors from dogs and to develop additional cell lines to expand the panel. Eventually, the project could help to discover which human drugs work against cancer in dogs and which dog cancers closely resemble human cancers in both the driving mutations and drug responses. And just as the genes found to cause cancer in humans may also be targets for cancer in dogs, this process that leads to the discovery of cancer-causing genes in dogs could lead to new targets for anti-cancer drugs in humans.

Credit: 
University of Colorado Anschutz Medical Campus

Machine learning reveals how strongly interacting electrons behave at atomic level

image: Background: a real experimental image of electron density from one of the group's microscopes.

Inset: the architecture of the artificial neural networks (ANNs) that were trained to 'look at' such images and report back which states of electronic matter are hidden therein.

Image: 
JC Séamus Davis

For the last 100 years, materials such as gold and silicon have been conduits for the force that has powered civilisation: electronics. And in all such conventional materials the behaviour of electrons is simple: they largely ignore each other.

However, future electronics designed for quantum technologies will require the development of new quantum materials. In quantum materials, e.g. high-temperature superconductors, electrons interact so strongly and behave so strangely that, until now, they have defied explanation.

But now, scientists have made a significant breakthrough in both technique and understanding. Using a suite of 80 artificial neural networks (ANNs) that they had designed and trained to recognise different forms of electronic matter, machine learning has discovered a new state called the Vestigial Nematic State (VNS).

Lead author, Prof. JC Séamus Davis, of the University of Oxford, said: 'I have focused on visualisation of electrons at the atomic level. Twenty years ago we developed a microscope that could see directly where all the electrons are in quantum materials, and how they function.

'In this new collaboration with Professors Eun-Ah Kim (Cornell) and E. Khatami (San José State), we fed an electronic image archive gathered over about 20 years - thousands of electronic-structure images - into these artificial neural networks. To my amazement it actually worked! The Vestigial Nematic State had been predicted by theorists but there was no experimental evidence. It was thrilling to see how the new machine learning technique discovered it hiding in plain sight.'

It is a milestone for scientific technique in general, as it demonstrates that machine learning can process and identify specific symmetries in highly complex image arrays of electronic quantum matter data.
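
For readers curious what training networks to 'look at' such images might involve, here is a minimal, hypothetical sketch (in PyTorch) of an image classifier of this general kind. It is not the authors' 80-network suite: the architecture, class labels and data below are illustrative assumptions only.

```python
# Hypothetical sketch: a tiny CNN that maps a single-channel "electron
# density" image to one of a few candidate electronic-order classes.
# Architecture and labels are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class OrderClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = OrderClassifier()
image = torch.randn(1, 1, 64, 64)      # one mock 64x64 microscope frame
print(model(image).softmax(dim=-1))    # probabilities over the classes
```

In the study itself, an ensemble of many such trained networks reported collectively on each experimental image; the sketch above only conveys the general shape of the approach.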

By fusing machine learning with quantum-matter visualisation, the scientists believe the approach will accelerate advances in quantum materials, especially in the area of high-temperature superconductivity, in the quest for room-temperature quantum computers.

Credit: 
University of Oxford

Milk: Best drink to reduce burn from chili peppers

People who order their Buffalo wings especially spicy, and sometimes find them too "hot," should choose milk to reduce the burn, according to Penn State researchers, who also note that it does not matter whether it is whole or skim.

The research originated as an effort by the Sensory Evaluation Center in Penn State's College of Agricultural Sciences to identify a beverage to clear the palates of participants in tasting studies involving capsaicin. An extract from chili peppers, capsaicin is considered an irritant because it causes warming and burning sensations.

"We were interested in giving capsaicin solutions to many test participants and we were concerned with the lingering burn at the end of an experiment," said center director John Hayes, associate professor of food science. "Initially, one of our undergrad researchers wanted to figure out the best way to cut the burn for people who found our samples to be too intense."

Widespread consumption of chili peppers and foods such as wings spiced with sriracha and hot sauce shows that many people enjoy this burn, Hayes added. But these sensations also can be overwhelming. While folklore exists on the ability of specific beverages to mitigate capsaicin burn, quantitative data to support these claims are lacking.

The researchers tested seven beverages with 72 people -- 42 women and 30 men. Participants drank spicy Bloody Mary mix containing capsaicin. Immediately after swallowing, they rated the initial burn.

Then, in subsequent separate trials, they drank purified water, cola, cherry-flavored Kool-Aid, seltzer water, non-alcoholic beer, skim milk and whole milk. Participants continued to rate perceived burn every 10 seconds for two minutes. There were eight trials. Seven included one of the test beverages and one trial did not include a test beverage.

The initial burn of the spicy Bloody Mary mix was, on average, rated below "strong" but above "moderate" by participants, and continued to decay over the two minutes of the tests to a mean just above "weak," according to lead researcher Alissa Nolden. All beverages significantly reduced the burn of the mix, but the largest reductions in burn were observed for whole milk, skim milk and Kool-Aid.

More work is needed to determine how these beverages reduce burn, noted Nolden, who was a doctoral student in food science at Penn State when she conducted the research and is now an assistant professor in the Department of Food Science at the University of Massachusetts. She suspects it is related to how capsaicin reacts in the presence of fat, protein and sugar.

"We weren't surprised that our data suggest milk is the best choice to mitigate burn, but we didn't expect skim milk to be as effective at reducing the burn as whole milk," she said. "That appears to mean that the fat context of the beverage is not the critical factor and suggests the presence of protein may be more relevant than lipid content."

Following the completion of all the trials, the participants answered two questions: "How often do you consume spicy food?" and "Do you like spicy food?" Researchers had hoped to see some correlation between participants' perception of the burn from capsaicin and their exposure to spicy food, Nolden pointed out. But no such relationship emerged from the study.

The findings of the research, recently published in Physiology and Behavior, might surprise some consumers of spicy food, but they should not, Nolden noted.

"Beverages with carbonation such as beer, soda and seltzer water predictably performed poorly at reducing the burn of capsaicin," she said. "And if the beer tested would have contained alcohol, it would have been even worse because ethanol amplifies the sensation."

In the case of Kool-Aid, Nolden and her colleagues do not think that the drink removes the capsaicin but rather overwhelms it with a sensation of sweet.

The study was novel, Nolden believes, because it incorporated products found on food-market shelves, making it more user friendly.

"Traditionally, in our work, we use capsaicin and water for research like this, but we wanted to use something more realistic and applicable to consumers, so we chose spicy Bloody Mary mix," she said. "That is what I think was really cool about this project -- all the test beverages are commercially available, too."

Credit: 
Penn State

The making of 'warm ice'

image: (1) A magnified view of the diamond anvil cell used to produce dynamic high pressures.
(2) Images showing how the growth of an ice crystal changes with increasing compression rate.

Image: 
Korea Research Institute of Standards and Science (KRISS)

Can water freeze at room temperature, or even at temperatures at which ordinary water boils? The formation of so-called "warm ice" may be an unfamiliar phenomenon to the general public, yet it can be made possible by controlling the crystallization process in which liquid turns into a solid. In principle, such manipulation can be achieved not only by changing temperature but also pressure. However, the latter requires exerting an extreme level of pressure (10,000 times atmospheric pressure) on the water.

The Center for Convergence Property Measurement, Frontier in Extreme Physics Team at the Korea Research Institute of Standards and Science (KRISS, President Sang-Ryoul Park) succeeded in creating room-temperature ice and controlling its growth behavior by dynamically compressing water to pressures above 10,000 atmospheres. By systematically varying the compression rates, the research team discovered a sudden morphological crossover from 3-dimensional to 2-dimensional ice. Rigorous investigation unveiled the underlying mechanism of this anomalous growth transition, which manifests as changes in the shape and growth speed of the ice. Such high-pressure technology and phenomena could have significant impact on a wide range of practical applications in biology, food, medicine, and aerospace.

This technology is significant in that the size, shape, and growth rate of ice can be artificially controlled regardless of the temperature.

Ice observed in nature takes more than 10,000 crystal forms, including hexagonal plates, columns, and dendrites. Ice crystals of such varied forms inspire curiosity about nature and also have significant industrial applications. In particular, controlling ice crystals with pressure rather than temperature could resolve existing problems caused by ice, so there is great interest in this endeavor.

A representative example is food. When meat is frozen at regular atmospheric pressure, hexagonal plate-shaped ice crystals with needle-like corners form and damage the cells and tissue. This is why meat from the freezer is less juicy and does not taste as good as unfrozen meat. However, when meat is frozen at high pressure, ice crystals of different shapes with rounded corners are produced, protecting the quality of the meat.

Ice formed on airplanes can cause aircraft defects and accidents. On days with snow, and at an altitude of 10,000 m, where the temperature falls below minus 40 degrees Celsius, ice forms on aircraft wings. When ice crystals form abnormally, the wing shape changes, degrading lift. Thus, control of the growth rate and form of ice crystals greatly affects the safety and operating efficiency of aircraft.

KRISS Principal Research Scientists Yun-Hee Lee, Sooheyong Lee, and Geun Woo Lee developed a "real-time dynamic diamond anvil cell" device that can raise the pressure by up to 5,000,000 atmospheres per second, and applied the device to study ice growth under high pressures. As a result, the research team succeeded in compressing water at room temperature to produce high-pressure ice, and in transforming 3-dimensional octahedral ice into 2-dimensional wing-shaped ice through dynamic pressure control.

The device is an independently developed technology that can simultaneously measure the pressure, volume, image, and molecular structure of materials by integrating drive-control and molecular-vibration measurement technologies into the diamond anvil cell, which creates an extremely high-pressure environment.

Similar research has conventionally focused on the control of temperature and concentration, where clear observation of fast crystal growth was not possible due to the time delay from inevitable thermal and mass diffusion. Pressure, by contrast, can be applied immediately and uniformly, overcoming these limitations so that the crystallization process of water molecules can be understood in detail and controlled.

KRISS Principal Research Scientist Yun-Hee Lee said, "Application of high-pressure freezing technology can lead to new forms of ice crystals and freezing processes that maintain the taste and freshness of foods. Applying this technology to the cold chain system currently used in the logistics of fresh foods is expected to further improve the marketability of foods."

KRISS Principal Research Scientist Geun Woo Lee explained that "this technology can be applied to analyze various crystalline structures; the range of application fields is infinite." Also, Principal Research Scientist Sooheyong Lee continued, "new material characteristics can be discovered under extreme environments like those of extremely high pressure, so science and technologies that have reached limits can be pushed further towards new dimensions."

Credit: 
National Research Council of Science & Technology

New membrane efficiently separates mirrored molecules

image: Chiral sites (trees) are inserted between two layers of graphite-phase carbon nitride (cloud layers). The 'trees' can catch the left-handed molecules (Six Ears) while allowing the right-handed ones (Monkey King) to be transported away, thus resulting in high separation efficiency.

Image: 
Image by CUI Jie

Prof. LIU Bo and colleagues at the University of Science and Technology of China (USTC) have developed a chiral separation membrane capable of capturing left-handed chiral molecules and releasing their right-handed counterparts using two-dimensional layered materials. The chiral membrane, showing a separation efficiency of up to 89% towards a limonene racemate, is expected to be put into industrial production. The research was published in Nature Communications on June 7.

In the classic Chinese tale Journey to the West, no one could tell the difference between the real Monkey King and his "evil twin" Six Ears, thus causing much confusion. Only the Buddha could distinguish the real Monkey King from the fake, ensuring that the Monkey King could continue his journey.

Among biomolecules, many are inseparable from each other - just like the Monkey King and Six Ears. These are the so-called chiral isomers (enantiomers), which have identical chemical formulas but rotate in space in opposite directions. They are mirror images of each other and are non-superposable.

However, despite their chemical similarity, enantiomers may function very differently. For example, levamlodipine can treat high blood pressure while its mirror image, dextroamlodipine, has no such effect. In biopharmaceutical processes, chiral isomers are often produced at the same time, so the mixture must be separated. However, left-handed and right-handed molecules are as difficult to identify and separate as the Monkey King and Six Ears.

Chiral separation membranes are among the most promising solutions. However, polymer membranes have low separation efficiency, and crystalline compounds don't easily form membranes. LIU Bo and his research team, using a two-dimensional (2D) layered material, tuned its interlayer distance, introduced chiral sites into the interlayer space, and assembled the layers into an efficient and stable chiral separation membrane.

"The membrane exhibits high selective permeation efficiency among various enantiomers," said WANG Yang, a Ph.D. student at USTC and the first author. "It can efficiently separate the R-limonene and retain most of the L-limonene. The separation performance can be further improved when applying a certain pressure."

This work demonstrates the potential of tuning the chemical environment within interlayer space via electrostatic interaction in order to fabricate stable membranes that fulfill the function of precise sieving at the sub-nanometer scale. Such membranes could be applied to sewage processing and desalination, among other things. Indeed, LIU Bo sees "broad application prospects" for chiral membranes comprising 2D layers.

Currently, the researchers are able to fabricate chiral membranes at the centimeter scale in the lab. The team is now working to scale the membranes up to the meter scale, aiming to separate chiral drug molecules for the pharmaceutical industry.

Credit: 
Chinese Academy of Sciences Headquarters

A further step towards reliable quantum computation

Quantum computation has been drawing the attention of many scientists because of its potential to outperform the capabilities of standard computers for certain tasks. For the realization of a quantum computer, one of the most essential features is quantum entanglement. This describes an effect in which several quantum particles are interconnected in a complex way. If one of the entangled particles is influenced by an external measurement, the state of the other entangled particle changes as well, no matter how far apart they may be from one another. Many scientists are developing new techniques to verify the presence of this essential quantum feature in quantum systems. Efficient methods have been tested for systems containing only a few qubits, the basic units of quantum information. However, the physical implementation of a quantum computer would involve much larger quantum systems. Yet, with conventional methods, verifying entanglement in large systems becomes challenging and time-consuming, since many repeated experimental runs are required.

Building on a recent theoretical scheme, a team of experimental and theoretical physicists from the University of Vienna and the ÖAW led by Philip Walther and Borivoje Dakić, together with colleagues from the University of Belgrade, successfully demonstrated that entanglement verification can be undertaken in a surprisingly efficient way and in a very short time, thus making this task applicable also to large-scale quantum systems. To test their new method, they experimentally produced a quantum system composed of six entangled photons. The results show that only a few experimental runs suffice to confirm the presence of entanglement with extremely high confidence, up to 99.99%.

The new verification method can be understood in a rather simple way. After a quantum system has been generated in the laboratory, the scientists carefully choose specific quantum measurements which are then applied to the system. The results of these measurements lead to either confirming or denying the presence of entanglement. "It is somehow similar to asking certain yes-no questions to the quantum system and noting down the given answers. The more positive answers are given, the higher the probability that the system exhibits entanglement," says Valeria Saggio, first author of the publication in Nature Physics. Surprisingly, the number of questions and answers needed is extremely low. The new technique proves to be orders of magnitude more efficient than conventional methods.

Moreover, in certain cases the number of questions needed is even independent of the size of the system, thus confirming the power of the new method for future quantum experiments.
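
A toy calculation shows why so few runs can suffice. Suppose, hypothetically, that any non-entangled (separable) state could pass each yes-no test with probability at most p_max < 1; the actual bound depends on the state and the chosen measurements. Then n consecutive passes leave at most p_max^n probability that the system is not entangled:

```python
import math

# Illustrative numbers only: p_max is an assumed bound on how often a
# separable state could "pass" a single test; it is not from the paper.
p_max = 0.5
target = 0.9999                      # the 99.99% confidence quoted above

# Smallest n with 1 - p_max**n >= target
n = math.ceil(math.log(1 - target) / math.log(p_max))
print(n)                             # -> 14 runs
print(1 - p_max ** n)                # -> 0.99993..., above the target
```

Because the failure probability shrinks exponentially with each run, confidence climbs to 99.99% after only a handful of measurements, consistent with the efficiency the Vienna team reports.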

While the physical implementation of a quantum computer is still facing various challenges, new advances like efficient entanglement verification could move the field a step forward, thus contributing to the progress of quantum technologies.

Credit: 
University of Vienna

Which climates are best for passive cooling technologies?

image: UCSD researchers recently set out to gain a better understanding of the thermal balance of power plants and surfaces, but quickly realized that they would need to determine what roles cloud cover and relative humidity play in the transparency of the atmosphere to radiation. In the Journal of Renewable and Sustainable Energy, the group presents detailed radiative cooling resource maps they created to help determine the best climates for large-scale deployment of passive cooling technologies. This image shows yearly averaged passive cooling potential in watts per square meter for the contiguous US territory.

Image: 
Carlos Coimbra

WASHINGTON, D.C., June 25, 2019 -- A group of University of California, San Diego researchers set out to gain a better understanding of the thermal balance of power plants and surfaces, like heliostat mirrors or solar panels, when exposed to both solar (shortwave) and atmospheric (longwave) radiation. They quickly realized that they would first need to determine what roles cloud cover and relative humidity play in the transparency of the atmosphere to radiation at temperatures common on Earth.

Determining how much heat can be rejected to outer space and how much is radiated back by the atmosphere to the surface is important when it comes to identifying the exact role water plays. It turns out water, which is present in gaseous, liquid and solid phases within the atmosphere, is not only the main player but also the only atmospheric element that varies rapidly in concentration and isn't mixed well vertically.

In the Journal of Renewable and Sustainable Energy, from AIP Publishing, the group presents detailed radiative cooling resource maps they created to help determine the best climates for large-scale deployment of passive cooling technologies, which rely on daily changes in temperature and humidity.

"We used recently calibrated correlations, experimental data and models for ground values of water vapor and temperature with sky emissivities to map out the places in the U.S. where we can most effectively reject heat from the ground to outer space," said Carlos F.M. Coimbra, chair of the Department of Mechanical and Aerospace Engineering. "Because of the physical processes involved, locations with drier atmospheres and the most frequent clear skies are the most appropriate for deploying passive cooling technologies."

The American Southwest shows great potential, while "other areas where the effect of relative humidity alone depletes the ability to use this cold reservoir resource show much less potential," Coimbra said. "In the areas with great cooling potential, the total energy consumption and the associated carbon footprint of conventional cooling technologies -- often the highest component of electricity demand -- can be substantially reduced."

This work is particularly significant for the thermophotonic design of surfaces for passive cooling, which has garnered attention lately due to its potential for rejecting heat to the sky.

"Since antiquity, many societies have used the cold sky to their advantage," he said. "In desert areas, a clever combination of transpiration cooling (an evaporative method) with passive radiative cooling to 'cold' (dry, clear) skies was often used to produce ice and keep it from melting."

Recent developments in the design of surfaces with specific radiative properties mean that exposed surfaces could be coated with paints or other surface treatments -- such as specially designed plastics -- to substantially improve their ability to reject heat during dry, clear-sky conditions, day or night.

"The design of dry cooling condensers for concentrated solar power plants or air conditioning systems will benefit from the ability to reflect selectivity in the solar while emitting strongly within the infrared parts of the spectrum," Coimbra said. "But these strategies are most effective during particular seasons and for particular regions of the planet. We live in an era of DNA-targeted medicines, but we still use generic energy technologies that aren't necessarily tailored to different regional needs. It is time to rethink the way we deploy these impactful technologies."

Credit: 
American Institute of Physics

Laser light detects tumors

video: A team of researchers from Jena presents a groundbreaking new method for the rapid, gentle and reliable detection of tumors with laser light. The optical method will help surgeons to remove tumors more precisely and could make cancer operations possible without a scalpel.

Image: 
Leibniz-IPHT // René Hiepen

It can take up to four weeks before patients can be sure whether the entire tumor has been removed during cancer surgery. A time of agonizing uncertainty - in which any remaining tumor cells can already multiply again. A team of scientists from Jena has now developed a diagnostic method that could revolutionize this process: using laser light, the researchers make cancerous tissue visible. This enables them to provide the surgical team with real-time information to reliably identify tumors and tumor margins and decide how much tissue needs to be cut away.

This is made possible by a compact microscope developed by a research team from the Leibniz Institute of Photonic Technology, Friedrich Schiller University, the University Hospital and the Fraunhofer Institute for Applied Optics and Precision Engineering in Jena. It combines three imaging techniques and uses tissue samples to generate spatially high-resolution images of the tissue structure during surgery. Software makes patterns and molecular details visible and processes them with the aid of artificial intelligence. The automated analysis is faster and promises more reliable results than the currently used frozen section diagnostics, which can only be evaluated by an experienced pathologist and still have to be confirmed afterwards.

The optical method, for which the Jena scientists were awarded the renowned Kaiser Friedrich Prize in 2018, helps to prevent weakened patients from having to undergo another operation. It thus makes a significant contribution to improving their chances of recovery. Professor Jürgen Popp, scientific director of Leibniz IPHT, who was also involved in researching the laser rapid test, predicts that the compact microscope could be in the clinic in five years' time.

This could save the German healthcare system considerable costs. "One minute in the operating room is the most expensive minute in the entire clinic," explains Professor Orlando Guntinas-Lichius, Director of the Department of Otolaryngology at the University Hospital Jena. In the case of tumors in the head and neck area, for example, residual cancer cells are found after almost every tenth operation.

And the Jena researchers are already thinking ahead. They are researching a solution that would enable them to use the unique properties of light to detect tumors inside the body at an early stage and remove them immediately. "To do this, we need novel methods that no longer work with rigid optics, but with flexible endoscopes," says Jürgen Popp. Technologists at Leibniz IPHT produce such fiber probes: glass fibers that are thinner than a human hair. They open the way to minimally invasive medicine that makes gentle diagnosis and healing possible. "Our vision," says Jürgen Popp, "is to use light not only to identify the tumor, but to remove it immediately. This would eliminate the need for physicians to cut with a scalpel and would enable them to ablate the tumor layer by layer using light in order to remove the tumor from the patient completely". In ten to fifteen years, the research team hopes to find a solution. Popp predicts that this would "be a giant step towards completely new tumor diagnostics and therapy".

Credit: 
Leibniz-Institute of Photonic Technology

HKBU discovers mechanisms underlying early life stress and irritable bowel syndrome

Researchers from the School of Chinese Medicine (SCM) at Hong Kong Baptist University (HKBU) have found that the abnormal rise of a soluble protein called Nerve Growth Factor is a key factor linking early life stress to the development of irritable bowel syndrome (IBS). The study, which is the first to demonstrate the link between traumatic psychological events occurring in childhood and lifelong health repercussions, could lead to the development of new treatments for gastrointestinal diseases.

IBS is a common functional bowel disorder characterized by stool irregularities, abdominal discomfort and bloating. While evidence increasingly links the impact of early life adversity with the development of IBS later on in life, the underlying mechanisms which translate a psychological event into gastrointestinal disease have remained elusive. This is especially pertinent since the disease in question, IBS, is widespread globally, including in Hong Kong. IBS presents a large health burden but there is currently no known cure. As a result, a better understanding of its development may present new ways to treat the disease.

The HKBU research team, which was led by SCM Chair Professor Bian Zhaoxiang and Research Assistant Professor Dr Xavier Wong Hoi-leong, found that Nerve Growth Factor (NGF), a neurotrophic factor essential for neuronal development in the nervous system, was highly elevated in the gut of mice in response to early-life stress induced by neonatal maternal separation (NMS). This elicited IBS-like-symptoms in the animal model.

Regarding the mechanism, NGF acts on intestinal stem cells (ISCs) directly to promote their growth and proliferation. The significantly increased number of ISCs in the gut leads to abnormally high numbers of enterochromaffin (EC) cells, a type of intestinal cell responsible for serotonin secretion, which results in aberrantly high serotonin levels. Aberrant serotonin production in the gut is known to cause IBS.

To uncover the mechanism by which early life stress alters intestinal homeostasis, the researchers set up an animal model of early life stress, known as NMS. They found that mice which experienced early-life stress went on to develop life-long IBS-like symptoms. In addition, the number of ISCs and EC cells in the gut increased significantly by 50%. The animal model showed that the elevated secretion of serotonin, which is due to the increase in EC cell density in the gastrointestinal tract, triggered visceral hyperalgesia. This results in heightened pain in the gastrointestinal tract, which is a hallmark of IBS.

To inhibit the activity of NGF, the team administered a specific NGF-blocking antibody to the animal model. This not only reduced ISC and EC cell numbers, but also caused the IBS-like symptoms to disappear completely in the animal model.

Importantly, the team analyzed HKBU clinical data and found a significant positive correlation between NGF and serotonin in the sera of diarrhea-predominant IBS patients: NGF and serotonin levels were elevated by around 30% and 75% respectively in the IBS patients compared with healthy controls, reinforcing the causal link between NGF and serotonin in the development of IBS.

The researchers concluded that NGF is a key driver of the development of IBS following early-life stress. Their work not only highlights the importance of NGF as a novel target in treating IBS, but also demonstrates that early-life adversity, such as a lack of parental care or abuse, may have serious lifelong health consequences.

Their findings, entitled "Early life stress disrupts intestinal homeostasis via NGF-TrkA signaling", were published in the prestigious international journal Nature Communications in April 2019.

The team plans to continue to investigate the therapeutic potential of NGF-inhibitors as a treatment for IBS, including those found in Chinese medicine, and will strive to uncover the complete chain of events that link childhood stress to the development of IBS in adulthood.

Credit: 
Hong Kong Baptist University

Exposure to air pollution in India is associated with more hypertension in women

Long-term exposure to air pollution has previously been associated with a higher risk of hypertension in high-income countries, where air pollution levels are generally lower than in low- and middle-income countries. A team led by the Barcelona Institute for Global Health (ISGlobal), an institution supported by "la Caixa", set out to study this association in India, a lower-middle-income country where the burdens of air pollution and hypertension are projected to increase. The results show that women exposed to higher levels of air pollution at their residence have a higher prevalence of hypertension.

The study, performed within the framework of the CHAI project and published in the journal Epidemiology, studied 5,531 adults from 28 peri-urban villages near Hyderabad city, in Southern India. The researchers measured systolic and diastolic blood pressure of participants and estimated their annual residential exposure to fine particulate matter (PM2.5) and black carbon. The participants also answered a survey to determine socio-economic status, lifestyle (including physical activity levels and salt intake), and household characteristics, including the type of cooking fuel generally used (biomass or clean).

Notably, all study participants were exposed to fine particulate matter levels above the 10 μg/m³ limit recommended by the World Health Organisation (WHO). Average exposure to PM2.5 in this study was 33 μg/m³. Based on the blood pressure measurements, almost half of participants (46%) were identified as hypertensive, with high proportions of participants with undiagnosed and untreated hypertension.

The results show that an increase of 1 μg/m³ in PM2.5 exposure was associated with a 4% increase in hypertension prevalence in women, as well as higher systolic and diastolic blood pressure (an increase of 1.4 mmHg and 0.87 mmHg, respectively). In men, the observed association was weaker.
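
To put the per-unit estimate in perspective, a back-of-the-envelope extrapolation (assuming, as an illustrative simplification, that the 4% estimate compounds multiplicatively across exposure units) gives

$$ 1.04^{10} \approx 1.48 $$

so a 10 μg/m³ difference in long-term PM2.5 exposure would correspond to roughly 48% higher hypertension prevalence in women, under that assumption.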

"Women spend most of their time near their households in this study area - 83% of their daily time as compared to 57% for men -, which could explain why we observe a stronger association in women than in men", explains Ariadna Curto, first author of the study.

The research indicates that long-term exposure to particulate matter is associated with a higher prevalence of hypertension, regardless of the type of fuel used for cooking. "Other studies have found that women who cook with solid fuels such as biomass tend to have higher blood pressure than those using clean fuels. Although our data are not powered enough to support this, our study suggests that the effects of outdoor air pollution on cardiovascular health may be independent of those of indoor air pollution," she stresses. "In light of the lack of association with black carbon, it is important to keep in mind that this is a peri-urban area, where the sources and chemical makeup of air pollution differ from those of urban areas mostly dominated by traffic sources," adds Curto.

Cathryn Tonne, CHAI project and study coordinator, explains that the mechanisms by which air pollution could contribute to high blood pressure "include inflammation and oxidative stress, which may lead to changes in arterial function."

"Although further epidemiological evidence is needed to confirm our findings, ideally through longitudinal studies, these data suggest that public policies aimed at reducing air pollution will greatly benefit cardiovascular health," concludes Tonne.

Credit: 
Barcelona Institute for Global Health (ISGlobal)

Babies can learn link between language and ethnicity, study suggests

image: Babies as young as 11 months old can associate language with ethnicity, UBC research suggests.

Image: 
Sav Nijeboer / Dr. Janet Werker's Infant Studies Centre

Eleven-month-old infants can learn to associate the language they hear with ethnicity, recent research from the University of British Columbia suggests.

The study, published April 22 in Developmental Psychobiology, found that 11-month-old infants looked more at the faces of people of Asian descent versus those of Caucasian descent when hearing Cantonese versus English--but not when hearing Spanish.

"Our findings suggest that by 11 months, infants are making connections between languages and ethnicities based on the individuals they encounter in their environments. In learning about language, infants are doing more than picking up sounds and sentences--they also learn about the speakers of language," said Lillian May, a psychology lecturer at UBC who was lead author of the study.

The research was done in Vancouver, where approximately nine per cent of the population can speak Cantonese.

The researchers played English-learning infants of Caucasian ancestry sentences in both English and Cantonese, and showed them pictures of people of Caucasian descent and of Asian descent. When the infants heard Cantonese, they looked more at the Asian faces than when they were hearing English. When they heard English, they looked equally at Asian and Caucasian faces.

"This indicates that they have already learned that in Vancouver, both Caucasians and Asians are likely to speak English, but only Asians are likely to speak Cantonese," noted UBC psychology professor Janet Werker, the study's senior author.

The researchers showed the same pictures to the infants while playing Spanish, to see whether they were inclined to associate any unfamiliar language with any unfamiliar ethnicity. However, in that test the infants looked equally at Asian and Caucasian faces. This suggests young infants pick up on specific language-ethnicity pairings based on the faces and languages they encounter.

"Babies are learning so much about language--even about its social use--long before they produce the first word," said Werker. "The link between speaker characteristics and language is something no one has to teach babies. They learn it all on their own."

The researchers are now probing how babies' ability to link language and ethnicity might help them with language acquisition.

Credit: 
University of British Columbia

Global surgical guidelines drive cut in post-surgery deaths -- study

The English National Health Service (NHS) reduced post-operative deaths by 37.2% following the introduction of globally recognised surgical guidelines - paving the way for life-saving action in low- and middle-income countries (LMICs), a new study reveals.

Researchers at the University of Birmingham have confirmed that the NHS achieved the reduction between 1998 and 2014, coinciding with the introduction of the World Health Organisation (WHO) Surgical Safety Checklist in 2008.

Investigation of data showed a consistent downward trend over the 16-year period, with the greatest reductions achieved in oesophagogastric (68.8%) and breast (69.3%) surgery.

The researchers published their analysis of the reduction in postoperative mortality rates (POMR) in a research letter to the British Journal of Surgery. Their findings echo the results of similar research into NHS Scotland for the period 2000 to 2014, which found that the WHO Checklist was a key driver in reducing POMR by 39%.

Mr Aneel Bhangu, Senior Lecturer at the University of Birmingham, commented: "Around the world 4.2 million people die every year within 30 days after surgery - with half of these deaths occurring in LMICs. Identification of strategies to reduce postoperative mortality is now a global research priority.

"It is encouraging that despite having among the lowest baseline rates globally, both Scotland and England have achieved a greater than one-third reduction in overall POMR. Replicating these gains internationally could avoid thousands of postoperative deaths, with the greatest potential gains in LMICs."

He added that the checklist was an essential part of improving perioperative safety, although variable reductions in deaths across specialties suggest that procedure-specific initiatives have made a major contribution to reducing overall POMR.

Researchers at the University's NIHR Global Health Research Unit on Global Surgery replicated the analysis of NHS Scotland performance using publicly available inpatient POMR data for England. They discovered a 37.2% relative reduction (from 1.21% to 0.76%) in overall inpatient POMR.
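
The headline figure is simple arithmetic on those two rates:

$$ \frac{1.21\% - 0.76\%}{1.21\%} = \frac{0.45}{1.21} \approx 0.372, $$

i.e. a 37.2% relative reduction, even though the absolute drop is under half a percentage point.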

The study followed the Unit's research, published earlier this year in The Lancet, which discovered the figure of 4.2 million deaths every year within 30 days after surgery.

There is also a significant unmet need for surgery in LMICs and researchers believe that if operations were provided for all patients who need them the number of global post-operative deaths would increase to 6.1 million.

Around 4.8 billion people worldwide lack timely access to safe and affordable surgery and it is estimated that there is an annual unmet need for 143 million procedures in LMICs.

Credit: 
University of Birmingham

Research reveals exotic quantum states in double-layer graphene

image: A new type of quasiparticle has been discovered in a graphene double-layer structure. This so-called composite fermion consists of one electron and two different types of magnetic flux, illustrated as blue and gold arrows in the figure. Composite fermions are capable of forming pairs, and this unique interaction led to the experimental discovery of unexpected new quantum Hall phenomena.

Image: 
Michelle Miller and Jia Li/Brown University

New York, NY—June 24, 2019—Researchers from Brown and Columbia Universities have demonstrated previously unknown states of matter that arise in double-layer stacks of graphene, a two-dimensional nanomaterial. These new states, known as the fractional quantum Hall effect, arise from the complex interactions of electrons both within and across graphene layers.

"The findings show that stacking 2D materials together in close proximity generates entirely new physics," said Jia Li, assistant professor of physics at Brown, who initiated this work while a post-doc at Columbia working with Cory Dean, professor of physics, and Jim Hone, professor of mechanical engineering. "In terms of materials engineering, this work shows that these layered systems could be viable in creating new types of electronic devices that take advantage of these new quantum Hall states."

The research is published in the journal Nature Physics.

Importantly, says Hone, Wang Fong-Jen Professor of Mechanical Engineering at Columbia Engineering, several of these new quantum Hall states "may be useful in making fault-tolerant quantum computers."

The Hall effect emerges when a magnetic field is applied to a conducting material in a perpendicular direction to a current flow. The magnetic field causes the current to deflect, creating a voltage in the transverse direction, called the Hall voltage. The strength of the Hall voltage increases with the strength of the magnetic field. The quantum version of the Hall effect was first discovered in experiments performed in 1980 at low temperatures and strong magnetic fields. The experiments showed that rather than increasing smoothly with magnetic field strength, the Hall voltage increases in step-wise (or quantized) fashion. These steps are integer multiples of fundamental constants of nature and are entirely independent of the physical makeup of the material used in the experiments. The discovery was awarded the 1985 Nobel Prize in Physics.

A few years later, researchers working at temperatures near absolute zero and with very strong magnetic fields found new types of quantum Hall states in which the quantum steps in Hall voltage correspond to fractional numbers, hence the name fractional quantum Hall effect. The discovery of the fractional quantum Hall effect won another Nobel Prize, in 1998. Theorists later posited that the fractional quantum Hall effect is related to the formation of quasi-particles called composite fermions. In this state, each electron combines with a quantum of magnetic flux to form a composite fermion carrying a fraction of an electron charge giving rise to the fractional values in Hall voltage.
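
For reference, the quantization described above is conventionally written (standard textbook notation, not specific to this study) as

$$ R_{xy} = \frac{h}{\nu e^{2}}, $$

where $h$ is Planck's constant, $e$ is the electron charge, and the filling factor $\nu$ takes integer values in the integer quantum Hall effect and fractional values (such as 1/3) in the fractional case.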

The composite fermion theory has been successful in explaining a myriad of phenomena observed in single quantum well systems. This new research used double-layer graphene to investigate what happens when two quantum wells are brought close together. Theory had suggested that the interaction between two layers would lead to a new type of composite fermion, but this had never been observed in experiment.

For the experiments, the team built on many years of work at Columbia to improve the quality of graphene devices, creating ultra-clean devices entirely from atomically flat 2D materials. The core of the structure consists of two graphene layers separated by a thin layer of hexagonal boron nitride as an insulating barrier. The double-layer structure is encapsulated by hexagonal boron nitride as a protective insulator, with graphite serving as a conductive gate to change the charge carrier density in the channel.

"Once again the incredible versatility of graphene has allowed us to push the boundaries of device structures beyond what was previously possible." says Dean, a professor of physics at Columbia University. "The precision and tunability with which we can make these devices is now allowing us to explore an entire realm of physics that was just recently thought to be totally inaccessible."

The graphene structures were then exposed to strong magnetic fields--millions of times stronger than Earth's magnetic field. The research produced a range of fractional quantum Hall states, some of which demonstrate excellent agreement with the composite fermion model, and some that had never been predicted or seen.

"Apart from the interlayer composite fermions, we observed other features that cannot be explained within the composite fermion model," said Qianhui Shi, the paper's co-first author and postdoctoral researcher at Columbia. "A more careful study revealed that, to our surprise, these new states result from pairing between composite fermions. Pairing interaction between adjacent layers and within the same layer give rise to a variety of new quantum phenomena, making double-layer graphene an exciting platform to study."

"Of particular interest," says Hone, "are several new states that have the potential of hosting non-Abelian wave functions—states that don't quite fit the traditional composite fermion model." In non-Abelian states, electrons maintain a kind of "memory" of their past positions relative to each other. That has potential in enabling quantum computers that do not require error correction, which is currently a major stumbling block in the field.

"These are the first new candidates for non-Abelian states in 30 years," Dean said. "It's really exciting to see new physics emerge from our experiments."

Credit: 
Columbia University School of Engineering and Applied Science

How people want to feel determines whether others can influence their emotions

In a new study, Stanford psychologists examined why some people respond differently to an upsetting situation and learned that people's motivations play an important role in how they react.

Their study found that when a person wanted to stay calm, they remained relatively unfazed by angry people, but if they wanted to feel angry, they were highly influenced by angry people. In a series of laboratory experiments, the researchers also discovered that people who wanted to feel angry got more emotional when they learned that other people were just as upset as they were.

Their findings, published June 13 in the Journal of Experimental Psychology: General, reveal that people have more control over how their emotions are influenced than previously realized, the researchers said.

"We have long known that people often try to regulate their emotions when they believe that they are unhelpful," said James Gross, a professor of psychology at Stanford's School of Humanities and Sciences. "This set of studies extends this insight by showing that people can also regulate the way they are influenced by others' emotions."

How do other people influence emotions?

To learn how people react to upsetting situations and respond to others around them, the researchers examined people's anger toward politically charged events in a series of laboratory studies with 107 participants. The team also analyzed almost 19 million tweets in response to the police shooting of Michael Brown in Ferguson, Missouri, in 2014.

In the laboratory studies, the researchers showed participants images that could trigger upsetting emotions, for example, people burning the American flag and American soldiers abusing prisoners in Abu Ghraib prison in Iraq. The researchers also told participants how other people felt about these images.

The researchers found that participants who wanted to feel less angry were three times more likely to be influenced by people expressing calm emotions than by angry people. But participants who wanted to feel angry were also three times more likely to be influenced by people angrier than themselves, as opposed to people with calmer emotions. The researchers also found that these participants got more emotional when they learned that others felt emotions similar to their own.

"The degree to which people said they were motivated to feel or not feel certain emotions predicted how much they would be influenced when they were exposed to emotions from other group members," said Amit Goldenberg, the lead author on the study and a Stanford doctoral candidate in psychology.

Emotional influence on social media

The researchers also looked at social media where they could see how emotions played out in real time. They focused on the unrest that emerged on Twitter following the shooting of Michael Brown in Ferguson, Missouri, in 2014.

After analyzing almost 19 million Twitter posts, the researchers found that Twitter users were more influenced by stronger emotions expressed by people in their social network compared to weaker and calmer reactions. They also found that when Twitter users responded to tweets that were similar in emotional intensity to their earlier reactions, the users amplified their emotions to express stronger outrage than others in their social network.

"The social dimension of emotions, particularly in response to socio-political events, is becoming increasingly important with the use of social media and people's constant exposure to the emotions of others in online platforms," wrote the study's authors, who also included Jamil Zaki, assistant professor of psychology, in the paper.

Emotions as tools

Researchers have largely assumed that people's emotions get influenced automatically - in an unconscious, immediate response to other people's emotions, said Goldenberg. His team's new research challenges that perspective, he said.

"Our emotions are not passive nor automatic," Goldenberg said. "They are a little bit of a tool. We have the ability to use our emotions to achieve certain goals. We express certain emotions to convince other people to join our collective cause. On social media, we use emotions to signal to other people that we care about the issues of a group to make sure people know we're a part of it."

Further research needs to be done to understand the relationship between people and their emotions. One of the next topics Goldenberg says he wants to examine is whether people's desire to see and experience certain emotions around them lies at the core of how they choose their networks of friends and the other people around them.

"It seems that the best way to regulate your emotions is to start with the selection of your environment," Goldenberg said. "If you don't want to be angry today, one way to do that is to avoid angry people. Do some people have an ingrained preference for stronger emotions than others? That's one of my next questions."

Credit: 
Stanford University

Study reveals elevated cancer risk in children with birth defects

image: Pictured here is Dr. Philip J. Lupo, associate professor of pediatrics -- hematology oncology and member of the Dan L Duncan Comprehensive Cancer Center at Baylor.

Image: 
Baylor College of Medicine

Childhood cancer is a rare occurrence in the overall population but may be somewhat more frequent in children born with birth defects. To better understand the link between cancer risk and birth defects, a collaborative team of scientists led by Baylor College of Medicine has assembled the largest study to date to evaluate cancer risk in children with birth defects. The study appears in JAMA Oncology.

"While cancer risk in children with certain chromosomal defects like Down syndrome is well established, much less is known for children with birth defects where there is no known genetic cause, sometimes called non-chromosomal defects," said Dr. Philip Lupo, associate professor of pediatrics - hematology oncology and member of the Dan L Duncan Comprehensive Cancer Center at Baylor. "Non-chromosomal defects, as a group, affect more children, but one of the primary challenges of understanding risk among these children is that limited sample sizes make studying specific defects, like spina bifida, more difficult."

The research team gathered data from birth, birth defect and cancer registries across Texas, Arkansas, Michigan and North Carolina to generate a birth cohort of more than 10 million children born between 1992 and 2013. The investigators looked at diagnoses of cancer until 18 years of age to determine differences in cancer risk between those with and without birth defects.

Researchers found that, compared to children without any birth defect, children with chromosomal defects were almost 12 times more likely to develop cancer, while children with non-chromosomal defects were 2.5 times more likely to develop cancer. Additionally, children with more than one non-chromosomal defect had a corresponding increase in cancer risk.

"Our two key objectives in this study were to identify children who are at an increased risk for cancer, because subsets of these children may one day benefit from screening and better clinical management, and to uncover clues as to why cancer occurs more frequently in this population," said Dr. Jeremy Schraw, postdoctoral associate in the Section of Epidemiology and Population Sciences at Baylor. "These findings solidify our understanding of cancer risk in these children and show that we need additional research in this area."

Cancer types that were more frequent in children with non-chromosomal defects included hepatoblastoma and neuroblastoma.

While these findings identify specific, strong associations between birth defects and cancer, Schraw said that it is important to remember that both birth defects and cancer are still rare occurrences.

"This study is important in that it is the largest and most informative of its kind. The large sample size allowed us to evaluate cancer risk in children with both chromosomal versus non-chromosomal defects and revealed links between specific cancers and specific birth defects. These data can also help us to study and understand differences in outcomes down the road for children with cancer," said Dr. Sharon Plon, professor of pediatrics - oncology and molecular and human genetics and co-director of the Pediatric Cancer Program in the Dan L Duncan Comprehensive Cancer Center at Baylor.

"In the future, we hope to identify the specific genes behind these associations and systematically research what happens from the time of birth to the time of cancer onset to also understand if environmental factors may be contributing to cancer development," Lupo said. "This study provides new understanding about biology and the mechanisms that may lead to these complex outcomes in this population."

Credit: 
Baylor College of Medicine