Tech

Child brain tumors can be classified by advanced imaging and AI

Brain tumours are the most common solid tumours in childhood and the largest cause of death from cancer in this age group

Classifying a brain tumour's type without a biopsy is difficult; however, diffusion weighted imaging, an advanced imaging technique, can help when combined with machine learning, a UK-based multi-centre study including WMG, University of Warwick, has found.

Being able to characterise the tumour(s) faster and more accurately means they can be treated more efficiently

Diffusion weighted imaging and machine learning can successfully classify the diagnosis and characteristics of common types of paediatric brain tumours, a UK-based multi-centre study including WMG at the University of Warwick has found. This means that tumours can be characterised and treated more efficiently.

Brain tumours in a particular part of the brain, called the posterior fossa, are the largest cause of death from cancer in children. However, within this area there are three main types of brain tumour, and characterising them quickly and accurately can be challenging.

Currently, radiologists make a qualitative assessment of MRI scans; however, overlapping radiological characteristics can make it difficult to distinguish which type of tumour it is without the confirmation of a biopsy. The paper, 'Classification of paediatric brain tumours by diffusion weighted imaging and machine learning', published in the journal Scientific Reports, was led by the University of Birmingham and included researchers from WMG, University of Warwick. The study found that tumour diagnostic classification can be improved by using non-invasive diffusion weighted imaging combined with machine learning (AI).

Diffusion weighted imaging uses specific advanced MRI sequences, together with software that generates images from the resulting data, exploiting the diffusion of water molecules to generate contrast in the MR image. From these images an Apparent Diffusion Coefficient (ADC) map can be extracted, and analysis of its values reveals more about the tumour.
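
For context, the ADC value at each voxel is typically estimated from signal intensities acquired at two diffusion weightings (b-values); a standard two-point form (a general expression, not specific to this study) is:

```latex
% Two-point ADC estimate, assuming mono-exponential signal decay:
% S_0 = signal without diffusion weighting, S_b = signal at b-value b
\mathrm{ADC} = \frac{1}{b}\,\ln\!\left(\frac{S_0}{S_b}\right)
```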

The study involved 117 patients from five primary treatment centres across the UK, with scans from twelve different hospitals on a total of eighteen different scanners. The images were analysed, and regions of interest were drawn by both an experienced radiologist and an expert scientist in paediatric neuroimaging. Values from the analysis of Apparent Diffusion Coefficient maps of these regions were fed to machine learning algorithms, which successfully discriminated the three most common types of paediatric posterior fossa brain tumours non-invasively.
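
As a simplified illustration of this kind of workflow (not the study's published pipeline; the file name, feature names and choice of classifier below are assumptions), summary statistics from each patient's ADC region of interest could be fed to an off-the-shelf classifier:

```python
# Illustrative sketch only: classify posterior fossa tumour types from ADC
# region-of-interest statistics. File name, features and classifier choice
# are assumptions, not the published method.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per patient with summary statistics of the ADC
# values inside the drawn region of interest, plus the confirmed diagnosis.
data = pd.read_csv("adc_roi_features.csv")
X = data[["adc_mean", "adc_median", "adc_skew"]]
y = data["tumour_type"]  # e.g. medulloblastoma, ependymoma, pilocytic astrocytoma

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```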

Professor Theo Arvanitis, Director of the Institute of Digital Health at WMG, University of Warwick and one of the authors of the study explains:

"Using AI and advance Magnetic Resonance imaging characteristics, such as Apparent Diffusion Coefficient (ADC) values from diffusion weighted images, can potentially help distinguish, in a non-invasive way, between the main three different types of paediatric tumours in the posterior fossa, the area of the brain where such tumours are most commonly found in children.

"If this advanced imaging technique, combined with AI technology, can be routinely enrolled into hospitals it means that childhood brain tumours can be characterised and classified more efficiently, and in turn means that treatments can be pursued in a quicker manner with favourable outcomes for children suffering from the disease."

Professor Andrew Peet, NIHR Professor in Clinical Paediatric Oncology at the University of Birmingham and Birmingham Children's Hospital adds:

"When a child comes to hospital with symptoms that could mean they have a brain tumour that initial scan is such a difficult time for the family and understandably they want answers as soon as possible. Here we have combined readily available scans with artificial intelligence to provide high levels of diagnostic accuracy that can start to give some answers. Previous studies using these techniques have largely been limited to single expert centres. Showing that they can work across such a large number of hospitals opens the door to many children benefitting from rapid non-invasive diagnosis of their brain tumour. These are very exciting times and we are working hard now to start making these artificial intelligence techniques widely available."

Credit: 
University of Warwick

Scientists developed a novel method of automatic soil mapping

image: A team of soil scientists developed a new approach to the automatic generation and updating of soil maps. Having applied machine learning technologies to a set of rules traditionally used by experts in manual mapping, the team obtained a highly accurate model that provides easy-to-interpret results.

Image: 
RUDN University

A team of soil scientists developed a new approach to the automatic generation and updating of soil maps. Having applied machine learning technologies to a set of rules traditionally used by experts in manual mapping, the team obtained a highly accurate model that provides easy-to-interpret results. The study was published in ISPRS International Journal of Geo-Information.

Many software solutions for digital soil mapping are based on statistical models. The accuracy of such programs is limited because statistical models depend on the quality and quantity of field data and can ignore local irregularities in soil properties. It is difficult to obtain accurate and useful information from such maps because they are built based on extremely complex, sometimes even seemingly illogical models. To make digital mapping models clearer, more manageable, and accurate, one could formalize traditional soil mapping rules and add them to the existing software. A team of scientists from RUDN University together with their colleagues from the Dokuchyaev Soil Science Institute suggested a new approach to using expert knowledge of soil mapping in digital solutions.

"Many digital tools for soil mapping contain certain elements of qualitative analysis. However, the accuracy of the maps could be increased by means of imitating the traditional work process of a soil mapping expert in a software solution," said Prof. Igor Savin, an Academician of the Russian Academy of Sciences and Doctor Agricultural Sciences from the Department of System Ecology, RUDN University.

The new method can be used to develop regional soil maps. It is based on machine learning techniques, namely the building of so-called decision trees. The values that a decision tree is based on are key soil formation factors that are usually taken into consideration during manual mapping. The data on which the model was trained were collected from different sources: for example, information about plants was taken from satellite images, and much of the quantitative data came from topographic maps. As for the qualitative data (the rules used to identify types of soil), the model extracted them from obsolete soil maps during training. Experts in decision trees also made some changes to the input data, which the system treated as another source of information.
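
A minimal sketch of the decision-tree idea, assuming a table of training points with soil formation covariates (the file and column names below are hypothetical, not the team's actual dataset):

```python
# Illustrative sketch: train an interpretable decision tree that maps soil
# formation factors to soil classes, mimicking expert mapping rules.
# The CSV file and column names are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

samples = pd.read_csv("soil_training_points.csv")
covariates = ["elevation", "slope", "curvature", "ndvi", "parent_material"]
X = pd.get_dummies(samples[covariates])   # encode categorical covariates
y = samples["soil_class"]                 # labels taken from legacy soil maps

tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules
```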

The new approach was tested on a 1,560 sq.m area in Belgorod Region, Russia - flat terrain with a large share of black soils, used mainly for agriculture. The new model turned out to be more compact than traditional digital soil mapping methods: it used 20-29 variables instead of 142-162. This made the model and the maps generated from it easier to interpret. After expert manual adjustment of the decision tree, the accuracy of the maps reached 76%.

"The accuracy of map generation can be further increased, but it requires meticulous fine-tuning of the models. At this stage, we intended to confirm our main principle: classical soil formation factors could be used in digital mapping instead of indexes that are often difficult to understand. Our method can produce mapping models that would be easy to work with and to readjust depending on environmental changes," added Prof. Igor Savin from RUDN University.

Credit: 
RUDN University

Regular caffeine consumption affects brain structure

Coffee, cola or an energy drink: caffeine is the world's most widely consumed psychoactive substance. Researchers from the University of Basel have now shown in a study that regular caffeine intake can change the gray matter of the brain. However, the effect appears to be temporary.

No question - caffeine helps most of us to feel more alert. However, it can disrupt our sleep if consumed in the evening. Sleep deprivation can in turn affect the gray matter of the brain, as previous studies have shown. So can regular caffeine consumption affect brain structure due to poor sleep? A research team led by Dr. Carolin Reichert and Professor Christian Cajochen of the University of Basel and UPK (the Psychiatric Hospital of the University of Basel) investigated this question in a study.

The result was surprising: the caffeine consumed as part of the study did not result in poor sleep. However, the researchers observed changes in the gray matter, as they report in the journal Cerebral Cortex. Gray matter refers to the parts of the central nervous system made up primarily of the cell bodies of nerve cells, while white matter mainly comprises the neural pathways, the long extensions of the nerve cells.

A group of 20 healthy young individuals, all of whom regularly drink coffee on a daily basis, took part in the study. They were given tablets to take over two 10-day periods, and were asked not to consume any other caffeine during this time. During one study period, they received tablets with caffeine; in the other, tablets with no active ingredient (placebo). At the end of each 10-day period, the researchers examined the volume of the subjects' gray matter by means of brain scans. They also investigated the participants' sleep quality in the sleep laboratory by recording the electrical activity of the brain (EEG).

Sleep unaffected, but not gray matter

Data comparison revealed that the participants' depth of sleep was equal regardless of whether they had taken the caffeine or the placebo capsules. But the researchers saw a significant difference in the gray matter, depending on whether the subject had received caffeine or the placebo. After 10 days of placebo - i.e. "caffeine abstinence" - the volume of gray matter was greater than following the same period of time with caffeine capsules.

The difference was particularly striking in the right medial temporal lobe, including the hippocampus, a region of the brain that is essential to memory consolidation. "Our results do not necessarily mean that caffeine consumption has a negative impact on the brain," emphasizes Reichert. "But daily caffeine consumption evidently affects our cognitive hardware, which in itself should give rise to further studies." She adds that in the past, the health effects of caffeine have been investigated primarily in patients, but there is also a need for research on healthy subjects.

Although caffeine appears to reduce the volume of gray matter, after just 10 days of coffee abstinence it had significantly regenerated in the test subjects. "The changes in brain morphology seem to be temporary, but systematic comparisons between coffee drinkers and those who usually consume little or no caffeine have so far been lacking," says Reichert.

Credit: 
University of Basel

Kagome graphene promises exciting properties

video: Rémy Pawlak shows how Kagome graphene is produced and what is so special about it.

Image: 
(C. Möller, Swiss Nanoscience Institute, University of Basel)

For the first time, physicists from the University of Basel have produced a graphene compound consisting of carbon atoms and a small number of nitrogen atoms in a regular grid of hexagons and triangles. This honeycomb-structured "kagome lattice" behaves as a semiconductor and may also have unusual electrical properties. In the future, it could potentially be used in electronic sensors or quantum computers.

Researchers around the world are searching for new synthetic materials with special properties such as superconductivity -- that is, the conduction of electric current without resistance. These new substances are an important step in the development of highly energy-efficient electronics. The starting material is often a single-layer honeycomb structure of carbon atoms (graphene).

Theoretical calculations predict that the compound known as "kagome graphene" should have completely different properties to graphene. Kagome graphene consists of a regular pattern of hexagons and equilateral triangles that surround one another. The name "kagome" comes from Japanese and refers to the old Japanese art of kagome weaving, in which baskets were woven in the aforementioned pattern.

Kagome lattice with new properties

Researchers from the Department of Physics and the Swiss Nanoscience Institute at the University of Basel, working in collaboration with the University of Bern, have now produced and studied kagome graphene for the first time, as they report in the journal Angewandte Chemie. The researchers' measurements have delivered promising results that point to unusual electrical or magnetic properties.

To produce the kagome graphene, the team applied a precursor to a silver substrate by vapor deposition and then heated it to form an organometallic intermediate on the metal surface. Further heating produced kagome graphene, which is made up exclusively of carbon and nitrogen atoms and features the same regular pattern of hexagons and triangles.

Strong interactions between electrons

"We used scanning tunneling and atomic force microscopes to study the structural and electronic properties of the kagome lattice," reports Dr. Rémy Pawlak, first author of the study. With microscopes of this kind, researchers can probe the structural and electrical properties of materials using a tiny tip -- in this case, the tip was terminated with individual carbon monoxide molecules.

In doing so, the researchers observed that electrons of a defined energy, which is selected by applying an electrical voltage, are "trapped" between the triangles that appear in the crystal lattice of kagome graphene. This behavior clearly distinguishes the material from conventional graphene, where electrons are distributed across various energy states in the lattice - in other words, they are delocalized.

"The localization observed in kagome graphene is desirable and precisely what we were looking for," explains Professor Ernst Meyer, who leads the group in which the projects were carried out. "It causes strong interactions between the electrons -- and, in turn, these interactions provide the basis for unusual phenomena, such as conduction without resistance."

Further investigations planned

The analyses also revealed that kagome graphene features semiconducting properties -- in other words, its conducting properties can be switched on or off, as with a transistor. In this way, kagome graphene differs significantly from graphene, whose conductivity cannot be switched on and off as easily.

In subsequent investigations, the team will detach the kagome lattice from its metallic substrate and study its electronic properties further. "The flat band structure identified in the experiments supports the theoretical calculations, which predict that exciting electronic and magnetic phenomena could occur in kagome lattices. In the future, kagome graphene could act as a key building block in sustainable and efficient electronic components," says Ernst Meyer.

Credit: 
Swiss Nanoscience Institute, University of Basel

Shrubs and soils: A hot topic in the cool tundra

image: Climate change impacts arctic vegetation. In turn, the vegetation shift affects soil microclimate and carbon stocks.

Image: 
Julia Kemppinen

Climate change is rapid in the Arctic. As the climate warms, shrubs expand towards higher latitudes and altitudes. Researcher Julia Kemppinen together with her colleagues investigated the impacts of dwarf shrubs on tundra soils in the sub-Arctic Fennoscandia.

The study revealed that the dominance of dwarf shrubs impacts soil microclimate and carbon stocks. Microclimate describes the moisture and temperature conditions close to the ground surface. Shrubs are the largest plant life form in the Arctic, and in comparison to other arctic plants, shrubs use more water and cast more shade.

"The results indicate that the dominance of dwarf shrubs decreases soil moisture, soil temperatures and soil organic carbon stocks", says Kemppinen.

Due to climate change, the dominance of dwarf shrubs has increased in the Fennoscandian tundra, especially that of the evergreen crowberry (Empetrum nigrum), while in other parts of the Arctic larger deciduous shrubs have increased. This expansion is called shrubification.

The carbon cycle links shrubification back to global climate change. When the dominance of shrubs increases, less carbon is stored in the soils compared to other plant communities. The soil carbon stocks are important, because they store carbon that would otherwise be released to the atmosphere.

"Arctic soils store about half of the global belowground organic carbon pool. If the carbon stocks decrease as the conditions in the Arctic are changing, this may feedback to global climate wawrming. Therefore, everyone should know what is going on in the Arctic", says researcher Anna-Maria Virkkala.

Investigating the connections between shrubs and soils requires a lot of data. The researchers collected large field datasets for this study. In addition, the researchers used openly available data produced by the Finnish Meteorological Institute and the National Land Survey of Finland.

"Although, our research group the BioGeoClimate Modelling Lab collected a lot of data in the field, we couldn't have done this study without high-quality, open data", says professor Miska Luoto from the University of Helsinki.

Credit: 
University of Helsinki

Moiré patterns facilitate discovery of novel insulating phases

image: Photo shows Xiong Huang (left) and Yongtao Cui.

Image: 
Microwave Nano-Electronics Lab, UC Riverside.

RIVERSIDE, Calif. -- Materials having excess electrons are typically conductors. However, moiré patterns -- interference patterns that typically arise when one object with a repetitive pattern is placed over another with a similar pattern -- can suppress electrical conductivity, a study led by physicists at the University of California, Riverside, has found.

In the lab, the researchers overlaid a single monolayer of tungsten disulfide (WS2) on a single monolayer of tungsten diselenide (WSe2) and aligned the two layers against each other to generate large-scale moiré patterns. The atoms in both the WS2 and WSe2 layers are arranged in a two-dimensional honeycomb lattice with a periodicity, or recurring intervals, of much less than 1 nanometer. But when the two lattices are aligned at 0 or 60 degrees, the composite material generates a moiré pattern with a much larger periodicity of about 8 nanometers. The conductivity of this 2D system depends on how many electrons are placed in the moiré pattern.
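
For aligned layers, this periodicity follows directly from the small lattice mismatch between the two materials; with approximate literature lattice constants (the numerical values below are assumptions, not taken from the study), the roughly 8-nanometer scale falls out of a simple estimate:

```latex
% Moiré period for two aligned hexagonal lattices with constants a_1 and a_2,
% using approximate values a_1 (WS2) ~ 0.315 nm and a_2 (WSe2) ~ 0.328 nm:
\lambda_m \approx \frac{a_1 a_2}{\lvert a_2 - a_1 \rvert}
          \approx \frac{0.315 \times 0.328}{0.013}\ \text{nm} \approx 8\ \text{nm}
```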

"We found that when the moiré pattern is partially filled with electrons, the system exhibits several insulating states as opposed to conductive states expected from conventional understanding," said Yongtao Cui, an assistant professor of physics and astronomy at UC Riverside, who led the research team.  "The filling percentages were found to be simple fractions like 1/2, 1/3, 1/4, 1/6, and so on. The mechanism for such insulating states is the strong interaction among electrons that restricts the mobile electrons into local moiré cells. This understanding may help to develop new ways to control conductivity and to discovery new superconductor materials."

Study results appear today in Nature Physics.

The moiré pattern generated on the composite material of WS2 and WSe2 can be pictured as wells and ridges that are themselves arranged in a honeycomb pattern.

"WS2 and WSe2 have a slight mismatch where lattice size is concerned, making them ideal for producing moiré patterns," Cui said. "Further, coupling between electrons becomes strong, meaning the electrons 'talk to each other' while moving around across the ridges and the wells."

Typically, when a small number of electrons are placed in a 2D layer such as WS2 or WSe2, they have enough energy to travel freely and randomly, making the system a conductor. Cui's lab found that when moiré lattices are formed using both WS2 and WSe2, resulting in a periodic pattern, the electrons begin to slow down and repel from each other.

"The electrons do not want to stay close to each other," said Xiong Huang, the first author of the paper and a doctoral graduate student in Cui's Microwave Nano-Electronics Lab. "When the number of electrons is such that one electron occupies every moiré hexagon, the electrons stay locked in place and cannot move freely anymore. The system then behaves like an insulator."

Cui likened the behavior of such electrons to social distancing during a pandemic.

"If the hexagons can be imagined to be homes, all the electrons are indoors, one per home, and not moving about in the neighborhood," he said. "If we don't have one electron per hexagon, but instead have 95% occupancy of hexagons, meaning some nearby hexagons are empty, then the electrons can still move around a little through the empty cells. That's when the material is not an insulator. It behaves like a poor conductor."

His lab was able to fine-tune the number of electrons in the WS2-WSe2 lattice composite in order to change the average occupancy of the hexagons. His team found insulating states occurred when the average occupancy was less than one. For example, at an occupancy of one-third, the electrons occupied one of every three hexagons.

"Using the social distancing analogy, instead of a separation of 6 feet, you now have a separation of, say, 10 feet," Cui said. "Thus, when one electron occupies a hexagon, it forces all neighboring hexagons to be empty in order to comply with the stricter social distancing rule. When all electrons follow this rule, they form a new pattern and occupy one third of the total hexagons in which they again lose the freedom to move about, leading to an insulating state."

The study shows similar behaviors can also occur for other occupancy fractions such as 1/4, 1/2, and 1/6, with each corresponding to a different occupation pattern.

Cui explained that these insulating states are caused by strong interactions between the electrons. This, he added, is the Coulomb repulsion, the repulsive force between two positive or two negative charges, as described by Coulomb's law.

He added that in 3D materials, strong electron interactions are known to give rise to various exotic electronic phases. For example, they likely contribute to the formation of unconventional high temperature superconductivity.

"The question we still have no answer for is whether 2D structures, the kind we used in our experiments, can produce high temperature superconductivity," Cui said.

Next, his group will work on characterizing the strength of the electron interactions.

"The interaction strength of the electrons largely determines the insulation state of the system," Cui said. "We are also interested in being able to manipulate the strength of the electron interaction."

Cui and Huang were funded by grants from the National Science Foundation, a Hellman Fellowship, and a seed grant from SHINES.

Credit: 
University of California - Riverside

Insight about tumor microenvironment could boost cancer immunotherapy

image: Cancer evades the immune system by "feeding" the T cells that protect the tumor and "starving" the T cells that would attack.

Image: 
UPMC, created with BioRender.com

PITTSBURGH, Feb. 15, 2021 - A paper published today in Nature shows how chemicals in the areas surrounding tumors--known as the tumor microenvironment--subvert the immune system and enable cancer to evade attack. These findings suggest that an existing drug could boost cancer immunotherapy.

The study was conducted by a team of scientists at UPMC Hillman Cancer Center and the University of Pittsburgh School of Medicine, led by Greg Delgoffe, Ph.D., Pitt associate professor of immunology. By disrupting the effect of the tumor microenvironment on immune cells in mice, the researchers were able to shrink tumors, prolong survival and increase sensitivity to immunotherapy.

"The majority of people don't respond to immunotherapy," said Delgoffe. "The reason is that we don't really understand how the immune system is regulated within this altered tumor microenvironment."

The immune system is made up of many kinds of cells, chief among them T cells. One type, called killer T cells, fights off invaders, such as viruses, bacteria and even cancer. Another type, called regulatory T cells, or "T-reg cells" for short, counteracts killer T cells by acting as protectors of the cells that belong to the body. T-reg cells are important for preventing autoimmune diseases, such as type I diabetes, Crohn's disease and multiple sclerosis, where overactive killer T cells assault the body's healthy tissues.

For all of these different immune cells to do their jobs, they need to produce energy. Delgoffe's team studied how these different types of T cells have different appetites, and how tumors--which have large appetites--compete for nutrients with infiltrating immune cells. The researchers found that killer T cells and regulatory T cells have very different appetites, and cancer cells exploit this.

"Cancer is wise to the whole situation," Delgoffe said. "Cancer cells don't just starve T cells that would kill them but actually feed these regulatory T cells that would protect them."

In short, Delgoffe's team found that tumors gobble up all the vital nutrients in their vicinity that killer T cells would need to attack. Further, they also excrete lactic acid, which feeds the regulatory T cells, convincing them to stand guard. T-regs can turn the lactic acid into energy, using a protein called MCT1, so nuzzling up with the tumor is a good way for these immune cells to stay fed.

"What better way to recruit a cell than food?" Delgoffe said.

Then, using mice with melanoma, the researchers found that silencing the gene that codes for the MCT1 protein caused tumor growth to slow down. The mice also lived longer.

"We starved the T-regs," said Delgoffe. "When T-reg cells are not being sustained by the tumor, killer T cells can come in and kill the cancer."

Importantly, when Delgoffe's team combined MCT1 inhibition with immunotherapy, the anti-cancer effects were stronger than either strategy alone.

Clinically, the same effect might be achievable using drugs that inhibit MCT1--one of which is currently being tested in people with advanced lymphoma, and it appears to be well-tolerated.

Credit: 
University of Pittsburgh

Light used to detect quantum information stored in 100,000 nuclear quantum bits

Researchers have found a way to use light and a single electron to communicate with a cloud of quantum bits and sense their behaviour, making it possible to detect a single quantum bit in a dense cloud.

The researchers, from the University of Cambridge, were able to inject a 'needle' of highly fragile quantum information in a 'haystack' of 100,000 nuclei. Using lasers to control an electron, the researchers could then use that electron to control the behaviour of the haystack, making it easier to find the needle. They were able to detect the 'needle' with a precision of 1.9 parts per million: high enough to detect a single quantum bit in this large ensemble.

The technique makes it possible to send highly fragile quantum information optically to a nuclear system for storage, and to verify its imprint with minimal disturbance, an important step in the development of a quantum internet based on quantum light sources. The results are reported in the journal Nature Physics.

The first quantum computers - which will harness the strange behaviour of subatomic particles to far outperform even the most powerful supercomputers - are on the horizon. However, leveraging their full potential will require a way to network them: a quantum internet. Channels of light that transmit quantum information are promising candidates for a quantum internet, and currently there is no better quantum light source than the semiconductor quantum dot: tiny crystals that are essentially artificial atoms.

However, one thing stands in the way of quantum dots and a quantum internet: the ability to store quantum information temporarily at staging posts along the network.

"The solution to this problem is to store the fragile quantum information by hiding it in the cloud of 100,000 atomic nuclei that each quantum dot contains, like a needle in a haystack," said Professor Mete Atatüre from Cambridge's Cavendish Laboratory, who led the research. "But if we try to communicate with these nuclei like we communicate with bits, they tend to 'flip' randomly, creating a noisy system."

The cloud of quantum bits contained in a quantum dot doesn't normally act in a collective state, making it a challenge to get information in or out of it. However, Atatüre and his colleagues showed in 2019 that when cooled to ultra-low temperatures, also using light, these nuclei can be made to do 'quantum dances' in unison, significantly reducing the amount of noise in the system.

Now, they have shown another fundamental step towards storing and retrieving quantum information in the nuclei. By controlling the collective state of the 100,000 nuclei, they were able to detect the existence of the quantum information as a 'flipped quantum bit' at an ultra-high precision of 1.9 parts per million: enough to see a single bit flip in the cloud of nuclei.

"Technically this is extremely demanding," said Atatüre, who is also a Fellow of St John's College. "We don't have a way of 'talking' to the cloud and the cloud doesn't have a way of talking to us. But what we can talk to is an electron: we can communicate with it sort of like a dog that herds sheep."

Using the light from a laser, the researchers are able to communicate with an electron, which then communicates with the spins, or inherent angular momentum, of the nuclei.

By talking to the electron, the chaotic ensemble of spins starts to cool down and rally around the shepherding electron; out of this more ordered state, the electron can create spin waves in the nuclei.

"If we imagine our cloud of spins as a herd of 100,000 sheep moving randomly, one sheep suddenly changing direction is hard to see," said Atatüre. "But if the entire herd is moving as a well-defined wave, then a single sheep changing direction becomes highly noticeable."

In other words, injecting a spin wave made of a single nuclear spin flip into the ensemble makes it easier to detect a single nuclear spin flip among 100,000 nuclear spins.

Using this technique, the researchers are able to send information to the quantum bit and 'listen in' on what the spins are saying with minimal disturbance, down to the fundamental limit set by quantum mechanics.

"Having harnessed this control and sensing capability over this large ensemble of nuclei, our next step will be to demonstrate the storage and retrieval of an arbitrary quantum bit from the nuclear spin register," said co-first author Daniel Jackson, a PhD student at the Cavendish Laboratory.

"This step will complete a quantum memory connected to light - a major building block on the road to realising the quantum internet," said co-first author Dorian Gangloff, a Research Fellow at St John's College.

Besides its potential usage for a future quantum internet, the technique could also be useful in the development of solid-state quantum computing.

Credit: 
University of Cambridge

New surgery may enable better control of prosthetic limbs

image: MIT researchers in collaboration with surgeons at Harvard Medical School have devised a new type of amputation surgery that can help amputees better control their residual muscles and receive sensory feedback.

Image: 
MIT

CAMBRIDGE, MA -- MIT researchers have invented a new type of amputation surgery that can help amputees to better control their residual muscles and sense where their "phantom limb" is in space. This restored sense of proprioception should translate to better control of prosthetic limbs, as well as a reduction of limb pain, the researchers say.

In most amputations, muscle pairs that control the affected joints, such as elbows or ankles, are severed. However, the MIT team has found that reconnecting these muscle pairs, allowing them to retain their normal push-pull relationship, offers people much better sensory feedback.

"Both our study and previous studies show that the better patients can dynamically move their muscles, the more control they're going to have. The better a person can actuate muscles that move their phantom ankle, for example, the better they're actually able to use their prostheses," says Shriya Srinivasan, an MIT postdoc and lead author of the study.

In a study that will appear this week in the Proceedings of the National Academy of Sciences, 15 patients who received this new type of surgery, known as agonist-antagonist myoneural interface (AMI), could control their muscles more precisely than patients with traditional amputations. The AMI patients also reported feeling more freedom of movement and less pain in their affected limb.

"Through surgical and regenerative techniques that restore natural agonist-antagonist muscle movements, our study shows that persons with an AMI amputation experience a greater phantom joint range of motion, a reduced level of pain, and an increased fidelity of prosthetic limb controllability," says Hugh Herr, a professor of media arts and sciences, head of the Biomechatronics group in the Media Lab, and the senior author of the paper.

Other authors of the paper include Samantha Gutierrez-Arango and Erica Israel, senior research support associates at the Media Lab; Ashley Chia-En Teng, an MIT undergraduate; Hyungeun Song, a graduate student in the Harvard-MIT Program in Health Sciences and Technology; Zachary Bailey, a former visiting researcher at the Media Lab; Matthew Carty, a visiting scientist at the Media Lab; and Lisa Freed, a Media Lab research scientist.

Restoring sensation

Most muscles that control limb movement occur in pairs that alternately stretch and contract. One example of these agonist-antagonist pairs is the biceps and triceps. When you bend your elbow, the biceps muscle contracts, causing the triceps to stretch, and that stretch sends sensory information back to the brain.

During a conventional limb amputation, these muscle movements are restricted, cutting off this sensory feedback and making it much harder for amputees to feel where their prosthetic limbs are in space or to sense forces applied to those limbs.

"When one muscle contracts, the other one doesn't have its antagonist activity, so the brain gets confusing signals," says Srinivasan, a former member of the Biomechatronics group now working at MIT's Koch Institute for Integrative Cancer Research. "Even with state-of-the-art prostheses, people are constantly visually following the prosthesis to try to calibrate their brains to where the device is moving."

A few years ago, the MIT Biomechatronics group invented and scientifically developed in preclinical studies a new amputation technique that maintains the relationships between those muscle pairs. Instead of severing each muscle, they connect the two ends of the muscles so that they still dynamically communicate with each other within the residual limb. In a 2017 study of rats, they showed that when the animals contracted one muscle of the pair, the other muscle would stretch and send sensory information back to the brain.

Since these preclinical studies, about 25 people have undergone the AMI surgery at Brigham and Women's Hospital, performed by Carty, who is also a plastic surgeon there. In the new PNAS study, the researchers measured the precision of muscle movements in the ankle and subtalar joints of 15 patients who had AMI amputations performed below the knee. These patients had two sets of muscles reconnected during their amputation: the muscles that control the ankle, and those that control the subtalar joint, which allows the sole of the foot to tilt inward or outward. The study compared these patients to seven people who had traditional amputations below the knee.

Each patient was evaluated while lying down with their legs propped on a foam pillow, allowing their feet to extend into the air. Patients did not wear prosthetic limbs during the study. The researchers asked them to flex their ankle joints -- both the intact one and the "phantom" one -- by 25, 50, 75, or 100 percent of their full range of motion. Electrodes attached to each leg allowed the researchers to measure the activity of specific muscles as each movement was performed repeatedly.

The researchers compared the electrical signals coming from the muscles in the amputated limb with those from the intact limb and found that for AMI patients, they were very similar. They also found that patients with the AMI amputation were able to control the muscles of their amputated limb much more precisely than the patients with traditional amputations. Patients with traditional amputations were more likely to perform the same movement over and over in their amputated limb, regardless of how far they were asked to flex their ankle.

"The AMI patients' ability to control these muscles was a lot more intuitive than those with typical amputations, which largely had to do with the way their brain was processing how the phantom limb was moving," Srinivasan says.

In a paper that recently appeared in Science Translational Medicine, the researchers reported that brain scans of the AMI amputees showed that they were getting more sensory feedback from their residual muscles than patients with traditional amputations. In work that is now ongoing, the researchers are measuring whether this ability translates to better control of a prosthetic leg while walking.

Freedom of movement

The researchers also discovered an effect they did not anticipate: AMI patients reported much less pain and a greater sensation of freedom of movement in their amputated limbs.

"Our study wasn't specifically designed to achieve this, but it was a sentiment our subjects expressed over and over again. They had a much greater sensation of what their foot actually felt like and how it was moving in space," Srinivasan says. "It became increasingly apparent that restoring the muscles to their normal physiology had benefits not only for prosthetic control, but also for their day-to-day mental well-being."

The research team has also developed a modified version of the surgery that can be performed on people who have already had a traditional amputation. This process, which they call "regenerative AMI," involves grafting small muscle segments to serve as the agonist and antagonist muscles for an amputated joint. They are also working on developing the AMI procedure for other types of amputations, including above the knee and above and below the elbow.

"We're learning that this technique of rewiring the limb, and using spare parts to reconstruct that limb, is working, and it's applicable to various parts of the body," Herr says.

Credit: 
Massachusetts Institute of Technology

In predicting shallow but dangerous landslides, size matters

image: A shallow landslide turned into a debris flow that swept away a house in Sausalito, California, at 3 a.m. on Feb. 14, 2019. A woman was buried in the remains of her house, but survived with only minor injuries.

Image: 
Photo courtesy of the City of Sausalito

The threat of landslides is again in the news as torrential winter storms in California threaten to undermine fire-scarred hillsides and bring deadly debris flows crashing into homes and inundating roads.

But it doesn't take wildfires to reveal the landslide danger, University of California, Berkeley, researchers say. Aerial surveys using airborne laser mapping -- LiDAR (light detection and ranging) -- can provide very detailed information on the topography and vegetation that allow scientists to identify which landslide-prone areas could give way during an expected rainstorm. This is especially important for predicting where shallow landslides -- those just involving the soil mantle -- may mobilize and transform as they travel downslope into destructive debris flows.

The catch, they say, is that such information cannot yet help predict how large and potentially hazardous the landslides will be, meaning that evacuations may target lots more people than are really endangered by big slides and debris flows.

In a new paper appearing this week in the journal Proceedings of the National Academy of Sciences, the scientists, UC Berkeley geologist William Dietrich and project scientist Dino Bellugi, report their latest attempt at tagging landslide-prone areas according to their likely size and hazard potential, in hopes of more precise predictions. Their model takes into account the physical aspects of hillsides -- steepness, root structures holding the slope in place and soil composition -- and the pathways water follows as it runs downslope and into the soil.

Yet, while the model is better at identifying areas prone to larger and potentially more dangerous landslides, the researchers discovered factors affecting landslide size that can't easily be determined from aerial data and must be assessed from the ground -- a daunting task, if one is concerned about the entire state of California.

The key unknowns are what the subsurface soil and underlying bedrock are like and the influence of past landslides on ground conditions.

"Our studies highlight the problem of overprediction: We have models that successfully predict the location of slides that did occur, but they end up predicting lots of places that didn't occur because of our ignorance about the subsurface," said Dietrich, UC Berkeley professor of earth and planetary science. "Our new findings point out specifically that the spatial structure of the hillslope material -- soil depth, root strength, permeability and variabilities across the slope -- play a role in the size and distribution and, therefore, the hazard itself. We are hitting a wall -- if we want to get further with landslide prediction that attempts to specify where, when and how big a landslide will be, we have to have knowledge that is really hard to get, but matters."

Models key to targeted evacuations

Decades of studies by Dietrich and others have led to predictive models of where and under what rainfall conditions slopes will fail, and such models are used worldwide in conjunction with weather prediction models to pinpoint areas that could suffer slides in an oncoming storm and warn residents. But these models, triggered by so-called "empirical rainfall thresholds," are conservative, and government agencies often end up issuing evacuation warnings for large areas to protect lives and property.

Dietrich, who directs the Eel River Critical Zone Observatory -- a decade-long project to analyze how water moves all the way from the tree canopy through the soil and bedrock and into streams -- is trying to improve landslide size prediction models based on the physics of slopes. Airborne laser imaging using LiDAR can provide submeter-scale detail, not only of vegetation, but also of the ground under the vegetation, allowing precise measurements of slopes and a good estimate of the types of vegetation on the slopes.

Slopes fail during rainstorms, he said, because the water pressure in the soil -- the pore pressure -- pushes soil particles apart, making them buoyant. The buoyancy reduces the friction holding the soil particles against gravity, and once the mass of the slide is enough to snap the roots holding the soil in place, the slope slumps. Shallow slides may involve only the top portion of the soil, or scour down to bedrock and push everything below it downslope, creating deadly debris flows that can travel several meters per second.
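
This balance of driving and resisting forces is commonly summarized as an infinite-slope factor of safety; the expression below is the standard textbook form used in many shallow-landslide models (a general sketch, not necessarily the exact formulation in the new paper):

```latex
% Infinite-slope factor of safety: the slope fails when FS drops below 1.
% C_r = root cohesion, C_s = soil cohesion, \rho_s = saturated soil density,
% g = gravity, z = soil depth, \theta = slope angle, u = pore-water pressure,
% \phi = internal friction angle of the soil.
FS = \frac{C_r + C_s + \left(\rho_s g z \cos^2\theta - u\right)\tan\phi}
          {\rho_s g z \sin\theta \cos\theta}
```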

Each wet year along the Pacific Coast, homes are swept away and lives lost from large landslides, though the threat is worldwide. As illustrated by a landslide in Sausalito exactly two years ago, landslides can originate just a short distance upslope and mobilize as a debris flow traveling meters per second before striking a house. The size of the initial landslide will influence the depth and speed of the flow and the distance it can travel downslope into canyons, Dietrich said.

With earlier computer models, Dietrich and his colleagues were able to pinpoint more precisely the places on hillslopes that would suffer landslides. In 2015, for example, Bellugi and Dietrich used their computer model to predict shallow landslides on a well-studied hillslope in Coos Bay, Oregon, during a sequence of landslide-triggering rainstorms, based solely on these physical measures. Those models employed LiDAR data to calculate steepness and how water would flow downslope and affect pore pressure inside the slope; the seasonal history of rainfall in the area, which helps assess how much groundwater is present; and estimates of the soil and root strength.

In the new paper, Bellugi and David Milledge of Newcastle University in Newcastle upon Tyne in the United Kingdom tested the landslide prediction model on two very different landscapes: a very steep, deeply etched and forested hillside in Oregon, and a smooth, grassy, gently sloped glacial valley in England's storied Lake District.

Surprisingly, they found that the distributions of small and large shallow landslides were quite similar across both landscapes and could be predicted if they took into account one extra piece of information: the variability of hillslope strength across these hillsides. They discovered that small slides can turn into major slides if the conditions -- soil strength, root strength and pore pressure -- do not vary sufficiently over short distances. Essentially, small slides can propagate across the slope and become larger by connecting isolated slide-prone areas, even if they're separated by more solid slope.

"These areas that are susceptible to shallow landslides, even though you may be able to define them, may coalesce, if close enough to each other. Then you can have a big landslide that encompasses some of these little patches of low strength," Bellugi said. "These patches of low strength may be separated by areas that are strong -- they may be densely forested or less steep or drier -- but if they are not well separated, then those areas can coalesce and make a giant landslide."

"On hillsides, there are trees and topography, and we can see them and quantify them," Dietrich added. "But starting from the surface and going down into the ground, there is a lot that we need in models that we can't now quantify over large areas: the spatial variation in soil depth and root strength and the influence of groundwater flow, which can emerge from the underlying bedrock and influence soil pore pressure."

Getting such detailed information across an entire slope is a herculean effort, Dietrich said. On the Oregon and Lake District slopes, researchers walked or scanned the entire area to map vegetation, soil composition and depth, and past slides meter by meter, and then painstakingly estimated root strength, all of which is impractical for most slopes.

"What this says is that to predict the size of a landslide and a size distribution, we have a significant barrier that is going to be hard to cross -- but we need to -- which is to be able to characterize the subsurface material properties," Dietrich said. "Dino's paper says that the spatial structure of the subsurface matters."

The researchers' previous field studies found, for example, that fractured bedrock can allow localized subsurface water flow and undermine otherwise stable slopes, something not observable -- yet -- by aerial surveys.

They urge more intensive research on steep hillsides to be able to predict these subsurface features. This could include more drilling, installing hydrologic monitoring equipment and application of other geophysical tools, including cone penetrometers, which can be used to map soil susceptible to failure.

Credit: 
University of California - Berkeley

Corn belt farmland has lost a third of its carbon-rich soil

More than one-third of the Corn Belt in the Midwest – nearly 30 million acres – has completely lost its carbon-rich topsoil, according to University of Massachusetts Amherst research that indicates the U.S. Department of Agriculture has significantly underestimated the true magnitude of farmland erosion.

In a paper published in the Proceedings of the National Academy of Sciences, researchers led by UMass Amherst graduate student Evan Thaler, along with professors Isaac Larsen and Qian Yu in the department of geosciences, developed a method using satellite imagery to map areas in agricultural fields in the Corn Belt of the Midwestern U.S. that have no remaining A-horizon soil. The A-horizon is the upper portion of the soil that is rich in organic matter, which is critical for plant growth because of its water and nutrient retention properties. The researchers then used high-resolution elevation data to extrapolate the satellite measurements across the Corn Belt and estimate the true magnitude of erosion.

Productive agricultural soils are vital for producing food for a growing global population and for sustaining rural economies. However, degradation of soil quality by erosion reduces crop yields. Thaler and his colleagues estimate that erosion of the A-horizon has reduced corn and soybean yields by about 6%, leading to nearly $3 billion in annual economic losses for farmers across the Midwest.

The A-horizon has primarily been lost on hilltops and ridgelines, which indicates that tillage erosion - downslope movement of soil by repeated plowing - is a major driver of soil loss in the Midwest. Notably, tillage erosion is not included in national assessments of soil loss and the research highlights the urgent need to include tillage erosion in the soil erosion models that are used in the U.S. and to incentivize adoption of no-till farming methods.

Further, their research suggests erosion has removed nearly 1.5 petagrams of carbon from hillslopes. Restoration of organic carbon to the degraded soils, by switching from intensive conventional agricultural practices to soil-regenerative practices, has the potential to sequester carbon dioxide from the atmosphere while restoring soil productivity.

Credit: 
University of Massachusetts Amherst

Invasive flies prefer untouched territory when laying eggs

image: An invasive spotted wing drosophila (Drosophila suzukii) on a raspberry.

Image: 
Hannah Burrack, NC State University

A recent study finds that the invasive spotted wing drosophila (Drosophila suzukii) prefers to lay its eggs in places that no other spotted wing flies have visited. The finding raises questions about how the flies can tell whether a piece of fruit is virgin territory - and what that might mean for pest control.

D. suzukii is a fruit fly that is native to east Asia, but has spread rapidly across North America, South America, Africa and Europe over the past 10-15 years. The pest species prefers to lay its eggs in ripe fruit, which poses problems for fruit growers, since consumers don't want to buy infested fruit.

To avoid consumer rejection, there are extensive measures in place to avoid infestation, and to prevent infested fruit from reaching the marketplace.

"Ultimately, we're talking about hundreds of millions of dollars in potential crop losses and increases in pest-management costs each year in the United States," says Hannah Burrack, co-author of a paper on the study and a professor of entomology at North Carolina State University. "These costs have driven some small growers out of business.

"The first step toward addressing an invasive pest species is understanding it. And two fundamental questions that we had are: Which plants will this species attack? And why does it pick those plants?"

One of the things that researchers noticed when observing infestations on farms was that the species' egg-laying behavior was different, depending on the size of the infestation.

When D. suzukii populations were small, there would only be a few eggs laid in each piece of fruit, and they would only be in ripe fruit. If there were more D. suzukii present, more eggs would be laid in each piece of fruit. The researchers had also noticed that large populations of D. suzukii were more likely to lay eggs in fruit that wasn't ripe.

To better understand the egg-laying behavior of D. suzukii, the researchers conducted a series of experiments. And the results surprised them.

Specifically, the researchers found that, given a choice, female D. suzukii preferred to lay their eggs in fruit that other flies had never visited.

"It doesn't matter if the other flies lay eggs," Burrack says. "It doesn't even matter if the other flies are male or female. It only matters if other flies have touched a piece of fruit. If untouched fruit is available, D. suzukii will reject fruit that other flies have visited.

"We're not sure if the flies leave behind a chemical or bacterial marker, or something else entirely - but the flies can tell where other flies have been."

The researchers say that the next step is to determine what, exactly, the D. suzukii are detecting.

"If we can get a better understanding of what drives the behavior of this species, that could inform the development of new pest-control techniques," Burrack says. "We're not making any promises, but this is a significant crop pest - and the more we know, the better."

Credit: 
North Carolina State University

A machine-learning approach to finding treatment options for Covid-19

When the Covid-19 pandemic struck in early 2020, doctors and researchers rushed to find effective treatments. There was little time to spare. "Making new drugs takes forever," says Caroline Uhler, a computational biologist in MIT's Department of Electrical Engineering and Computer Science and the Institute for Data, Systems and Society, and an associate member of the Broad Institute of MIT and Harvard. "Really, the only expedient option is to repurpose existing drugs."

Uhler's team has now developed a machine learning-based approach to identify drugs already on the market that could potentially be repurposed to fight Covid-19, particularly in the elderly. The system accounts for changes in gene expression in lung cells caused by both the disease and aging. That combination could allow medical experts to more quickly seek drugs for clinical testing in elderly patients, who tend to experience more severe symptoms. The researchers pinpointed the protein RIPK1 as a promising target for Covid-19 drugs, and they identified three approved drugs that act on the expression of RIPK1.

The research appears today in the journal Nature Communications. Co-authors include MIT PhD students Anastasiya Belyaeva, Adityanarayanan Radhakrishnan, Chandler Squires, and Karren Dai Yang, as well as PhD student Louis Cammarata of Harvard University and long-term collaborator G.V. Shivashankar of ETH Zurich in Switzerland.

Early in the pandemic, it grew clear that Covid-19 harmed older patients more than younger ones, on average. Uhler's team wondered why. "The prevalent hypothesis is the aging immune system," she says. But Uhler and Shivashankar suggested an additional factor: "One of the main changes in the lung that happens through aging is that it becomes stiffer."

The stiffening lung tissue shows different patterns of gene expression than in younger people, even in response to the same signal. "Earlier work by the Shivashankar lab showed that if you stimulate cells on a stiffer substrate with a cytokine, similar to what the virus does, they actually turn on different genes," says Uhler. "So, that motivated this hypothesis. We need to look at aging together with SARS-CoV-2 -- what are the genes at the intersection of these two pathways?" To select approved drugs that might act on these pathways, the team turned to big data and artificial intelligence.

The researchers zeroed in on the most promising drug repurposing candidates in three broad steps. First, they generated a large list of possible drugs using a machine-learning technique called an autoencoder. Next, they mapped the network of genes and proteins involved in both aging and SARS-CoV-2 infection. Finally, they used statistical algorithms to understand causality in that network, allowing them to pinpoint "upstream" genes that caused cascading effects throughout the network. In principle, drugs targeting those upstream genes and proteins should be promising candidates for clinical trials.

To generate an initial list of potential drugs, the team's autoencoder relied on two key datasets of gene expression patterns. One dataset showed how expression in various cell types responded to a range of drugs already on the market, and the other showed how expression responded to infection with SARS-CoV-2. The autoencoder scoured the datasets to highlight drugs whose impacts on gene expression appeared to counteract the effects of SARS-CoV-2. "This application of autoencoders was challenging and required foundational insights into the working of these neural networks, which we developed in a paper recently published in PNAS," notes Radhakrishnan.
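
As a rough, purely illustrative sketch of that first step (written in Python with invented data, dimensions, and variable names; it is not the authors' code), one could train a small autoencoder on gene-expression profiles and then rank drugs by how strongly their latent-space signature opposes an infection-induced expression shift:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    n_genes = 978                                 # hypothetical number of measured genes
    drug_profiles = torch.randn(500, n_genes)     # stand-in for drug-response expression profiles
    infection_shift = torch.randn(n_genes)        # stand-in for the SARS-CoV-2-induced expression change

    class AutoEncoder(nn.Module):
        def __init__(self, n_in, n_latent=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(), nn.Linear(128, n_latent))
            self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(), nn.Linear(128, n_in))
        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = AutoEncoder(n_genes)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Train the autoencoder to reconstruct expression profiles, learning a latent embedding.
    for epoch in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(drug_profiles), drug_profiles)
        loss.backward()
        optimizer.step()

    # Rank drugs by how strongly their latent embedding opposes the infection signature.
    with torch.no_grad():
        drug_latent = model.encoder(drug_profiles)
        infection_latent = model.encoder(infection_shift.unsqueeze(0))
        scores = -torch.nn.functional.cosine_similarity(drug_latent, infection_latent)
        candidates = torch.topk(scores, k=10).indices
    print(candidates.tolist())

The real pipeline works with measured expression datasets and a far more careful analysis of the learned representation; the sketch only conveys the general shape of "embed expression profiles, then look for drug signatures that counteract the disease".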

Next, the researchers narrowed the list of potential drugs by homing in on key genetic pathways. They mapped the interactions of proteins involved in the aging and SARS-CoV-2 infection pathways, then identified areas of overlap between the two maps. That effort pinpointed the precise gene expression network that a drug would need to target to combat Covid-19 in elderly patients.

"At this point, we had an undirected network," says Belyaeva, meaning the researchers had yet to identify which genes and proteins were "upstream" (i.e. they have cascading effects on the expression of other genes) and which were "downstream" (i.e. their expression is altered by prior changes in the network). An ideal drug candidate would target the genes at the upstream end of the network to minimize the impacts of infection.

"We want to identify a drug that has an effect on all of these differentially expressed genes downstream," says Belyaeva. So the team used algorithms that infer causality in interacting systems to turn their undirected network into a causal network. The final causal network identified RIPK1 as a target gene/protein for potential Covid-19 drugs, since it has numerous downstream effects. The researchers identified a list of the approved drugs that act on RIPK1 and may have potential to treat Covid-19. Previously these drugs have been approved for the use in cancer. Other drugs that were also identified, including ribavirin and quinapril, are already in clinical trials for Covid-19.

Uhler plans to share the team's findings with pharmaceutical companies. She emphasizes that before any of the drugs they identified can be approved for repurposed use in elderly Covid-19 patients, clinical testing is needed to determine efficacy. While this particular study focused on Covid-19, the researchers say their framework is extendable. "I'm really excited that this platform can be more generally applied to other infections or diseases," says Belyaeva. Radhakrishnan emphasizes the importance of gathering information on how various diseases impact gene expression. "The more data we have in this space, the better this could work," he says.

Credit: 
Massachusetts Institute of Technology

Peeking at the pathfinding strategies of the hippocampus in the brain

Image: Treadmill experiments to determine the rate and phase code of CA1 (Korea Institute of Science and Technology (KIST))

We find routes to destinations and remember special places because an area of the brain functions like a GPS and navigation system. When taking a new path for the first time, we pay attention to the landmarks along the way. Thanks to this navigation system, it becomes easier to find destinations along a path we have already travelled. Over the years, scientists have learned from a variety of animal experiments that cells in a brain region called the hippocampus are responsible for spatial perception and are activated at discrete positions in the environment, for which reason they are called "place cells". However, how place cells store long-term memories of locations and encode particular positions in the environment is still not understood.

The Korea Institute of Science and Technology (KIST) announced that the research team led by Sebastien Royer at the KIST Brain Science Institute (BSI), in collaboration with a research team at New York University (NYU), found that place cells in the hippocampus encode spatial information by interchangeably using two distinct information-processing mechanisms, referred to as a rate code and a phase code, somewhat analogous to the number and spatial arrangement of bars in bar codes. In addition, the research team found that parallel neural circuits and information-processing mechanisms are used depending on the complexity of the landmarks along the path.

The KIST and NYU research teams identified fundamental principles of information processing in the hippocampus by conducting two types of spatial exploration experiments. In the first type of experiment, the researchers used a treadmill with a long belt, a well-controlled spatial environment for mice, and trained the animals to run on the belt sequentially through a section cleared of any objects and another section furnished with small objects. The second type of experiment was carried out with rats foraging in a circular arena that was either completely empty or filled with objects. To analyze neural activity, they implanted silicon probe electrodes in CA1, the subregion that generates the main output of the hippocampus, and in CA3, a subregion of the hippocampus suspected of playing an important role in spatial memory formation.

Results of the two experiments were consistent. It was found that the hippocampus uses different neural circuits and information-processing strategies depending on the environmental conditions. In the object-free environments, a group of cells located in the superficial region of CA1 tends to be active and uses a rate code, since the animal's position is best predicted by changes in the frequency of action potentials discharged by single neurons. In contrast, in a complex, object-strewn environment, a group of cells in the deep region of CA1 tends to be active and uses a phase code, since the animal's position is best predicted by the timing of a neuron's action potentials relative to the ensemble of active neurons.
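
As a toy numerical illustration of the difference between the two codes (simplified assumptions in Python, not the study's analysis), position can be read either from how strongly a cell fires or from when its spikes occur relative to the ongoing theta oscillation:

    import numpy as np

    rng = np.random.default_rng(0)
    positions = np.linspace(0.0, 1.0, 50)           # normalized positions along the belt

    # Rate code: firing rate is a Gaussian bump around the cell's place-field center.
    field_center = 0.6
    rates = 20.0 * np.exp(-((positions - field_center) ** 2) / (2 * 0.05 ** 2))  # spikes/s
    rate_decoded = positions[np.argmax(rates)]      # position recovered from peak firing

    # Phase code: spike phase advances (precesses) linearly as the animal crosses the field,
    # here shrinking from pi to 0 along the track.
    theta_phase = np.pi * (1.0 - positions)
    observed_phase = theta_phase[30] + rng.normal(0, 0.05)   # one noisy phase measurement
    phase_decoded = 1.0 - observed_phase / np.pi    # invert the phase-position relationship

    print(round(rate_decoded, 2), round(phase_decoded, 2))

Real decoding combines many cells and noisy spike trains, but the contrast is the same: the rate code carries position in how much a cell fires, the phase code in when it fires.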

These findings suggest that the circuit using the rate code is more strongly associated with providing information about overall positioning and spatial perception, whereas the circuit using the phase code is more strongly associated with remembering the precise location of an object and spatial relationships. Aside from this, the respective contributions of inputs from CA3 and the entorhinal cortex were also analysed. CA1 is known to receive information from both the CA3 region and the entorhinal cortex. In this study, based on differences in fast network oscillations called "gamma", it was found that superficial CA1 cells receive information primarily from CA3 in the simple environments, whereas deep CA1 cells receive information primarily from the entorhinal cortex in the complex environments.

Sebastien Royer, principal investigator at KIST, said, "This study improves our understanding of how the hippocampus processes information, which is a critical step towards understanding the general mechanisms of memory." He added that "such basic-level understanding will eventually help the development of technologies for the diagnosis and treatment of brain disorders related to hippocampal injury, such as Alzheimer's-type dementia, amnesia, and cognitive impairment, and might inspire the development of some AI."

The research team headed by Sebastien Royer, PhD, has been gradually expanding the understanding of information storage and processing in memory-related brain areas using diverse approaches. In a study combining mouse experiments and neural network modeling, published last year in the journal Nature Communications, the same team identified the process by which granule cells in the dentate gyrus region of the hippocampus develop a uniform mapping of the space during spatial learning.

Credit: 
National Research Council of Science & Technology

Metabolic response behind reduced cancer cell growth


Researchers from Uppsala University show in a new study that inhibition of the protein EZH2 can reduce the growth of cancer cells in the blood cancer multiple myeloma. The reduction is caused by changes in the cancer cells' metabolism. These changes can be used as markers to discriminate whether a patient would respond to treatment by EZH2 inhibition. The study has been published in the journal Cell Death & Disease.

Multiple myeloma is a type of blood cancer where immune cells grow in an uncontrolled way in the bone marrow. The disease is very difficult to treat and is still considered incurable, and thus it is urgent to identify new therapeutic targets in the cancer cells.

The research group behind the new study has previously shown that cultured multiple myeloma cells had reduced growth, and were even killed, if they were treated with a substance that inhibited the EZH2 protein. Now they have found that EZH2 inhibition also reduces cancer growth in a multiple myeloma mouse model.

"We treated mice with a type of cancer that corresponds to human multiple myeloma with a substance that inhibits EZH2 and discovered several signs that the treated mice had slower cancer growth than non-treated mice. This provided further evidence for the potential of EZH2 as a target for clinical intervention," says Helena Jernberg Wiklund, professor at the Department of Immunology, Genetics and Pathology, who has led the study.

The results from the mouse model encouraged the researchers to investigate further what makes the cells sensitive to EZH2 inhibition. Human multiple myeloma cells are more heterogeneous than those of the mouse model, and the researchers found that some types of cultured human multiple myeloma cells were sensitive whereas others were resistant.

To study this phenomenon further, the researchers employed a global analysis of cellular metabolites in combination with analysis of gene activity. Sensitivity was found to be associated with alterations in specific metabolic pathways in the cells.

"In cells that were sensitive to EZH2 inhibition, the methionine cycling pathways were altered, an effect we did not detect in non-sensitive cells. This alteration was caused by a downregulation of methionine cycling-associated genes," says Helena Jernberg Wiklund.

The alterations in metabolite abundance in the methionine cycling pathways could be used as markers to discriminate whether a patient would respond to EZH2 inhibition, which is of great importance for the potential clinical use of this treatment.
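
Purely as a hypothetical illustration of how such markers could be used, one might fit a simple classifier in Python on methionine-cycle metabolite levels measured in samples of known sensitivity and then apply it to a new sample; the metabolite names, values, and model below are placeholders, not the study's method:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # Columns: relative abundance of three placeholder methionine-cycle metabolites
    # (e.g. methionine, SAM, SAH); all values are invented.
    sensitive = rng.normal(loc=[1.0, 0.6, 0.4], scale=0.1, size=(20, 3))
    resistant = rng.normal(loc=[1.4, 1.0, 0.9], scale=0.1, size=(20, 3))
    X = np.vstack([sensitive, resistant])
    y = np.array([1] * 20 + [0] * 20)            # 1 = responds to EZH2 inhibition

    clf = LogisticRegression().fit(X, y)
    new_sample = np.array([[1.05, 0.65, 0.45]])  # metabolite profile from a new sample
    print(clf.predict(new_sample), clf.predict_proba(new_sample)[0, 1])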

"Our findings provide an increased understanding of the mechanisms behind the sensitivity of multiple myeloma cells to EZH2 inhibition. They also highlight that global analysis of metabolites and gene activity are powerful tools and when used together, they provide a better understanding of what happens in cancer cells when exposed to novel treatments. We believe that our results are relevant to both preclinical and clinical researchers, as a step towards finding new ways to treat patients with multiple myeloma," says Helena Jernberg Wiklund.

Credit: 
Uppsala University