Tech

Susceptibility to carcinogens varies due to genetics

image: Michele Carbone in his lab at the UH Cancer Center.

Image: 
UH Cancer Center

A new study led by the University of Hawai'i Cancer Center looks into how and why certain individuals develop cancer and others do not. The research was published in Nature Reviews Cancer, one of the most influential and prestigious journals in medicine and science. In addition to lead author Michele Carbone and co-author Haining Yang, collaborators include top cancer researchers, including a Nobel Laureate, and several members of the U.S. and European Academies of Sciences.

In the publication, "Tumour predisposition and cancer syndromes as models to study gene-environment interactions," Carbone and collaborators discuss recent evidence indicating that cancer most often results from the interaction between an individual's genetics and exposure to environmental carcinogens. They emphasize that the risk of developing cancer upon exposure to carcinogens varies due to gene mutations that can make certain individuals much more susceptible than others.

"By identifying those who carry genetic mutations, we can implement more effective prevention and early detection strategies targeting those who are most vulnerable, and thus, we have a much greater impact in saving lives from cancer," said Carbone.

This is the first time that the role of gene-environment interaction in cancer is being discussed at this level of molecular detail by some of the top scientists in the world. Several of the advancements in this field of research were made possible by Carbone and collaborators' discovery of the "BAP1 cancer syndrome," a disease that allows researchers to study and elucidate the mechanisms that regulate gene-environment interaction in causing cancer. 

"This important 'Perspective' article highlights some of the most cutting-edge studies on how genes and environmental factors interact in causing cancer, which is a critical research topic that needs people's attention," said Yang.

"This comprehensive publication was made possible thanks to the support of Barry and Virginia Weinman of the Weinman Foundation, who have played a very significant role in advancing innovations in cancer research," said Carbone. "The Weinmans generously funded meetings at the UH Cancer Center, allowing the co-authors to meet and discuss the topic at hand. As revealed by this publication, their contributions continue to tremendously impact and enhance cancer research, not only in the state of Hawai'i, but nationally and internationally as well."

Credit: 
University of Hawaii at Manoa

Wounded plants: How they coordinate their healing

video: The scientists used live microscopy to track the thale cress root while healing. When they interfered with the auxin amount, either no cells or too many cells responded to the wound, sometimes leading to tumorous swelling of the root.

Image: 
Lukas Hoermayer / IST Austria

All living organisms suffer injuries. Animals and humans have mobile cells specialized in finding, approaching, and healing wounds. Plant cells, however, are immobile and cannot encapsulate the damage. Instead, adjacent cells multiply or grow to fill the injury. In this precise process, each individual cell decides whether it will stretch or divide to fill the wound. Even though scientists have studied plant regeneration since the mid-19th century, the cells' 'reasons' for either choice remained unclear.

Now, scientists in the group of Professor Jiří Friml from the Institute of Science and Technology Austria (IST Austria) have discovered that the hormone auxin and pressure guide the plant's way of regenerating.

"It is incredibly fascinating how robust and flexible plant regeneration is, considering how static those organisms are," says Lukas Hoermayer, a leading scientist in this study.

To investigate wound healing, the scientists injured a thale cress root with a laser. They then tracked cells during regeneration with a microscope. The scientists found that the hormone auxin, which is essential in plant growth and development, also plays a vital role in wound healing. It builds up in those cells directly touching the wound and facilitates the plant's response to injury.

When the scientists artificially changed the auxin levels, either no cells or too many cells responded to the wound. This uncoordinated response sometimes even led to tumorous swelling of the root.

"Only the precise coordination of many cells throughout the whole tissue yields a defined and localized wound response," explains Lukas Hoermayer.

Furthermore, the team recorded a pressure change within the plant, caused by the collapsing cells of the wound. When the scientists reduced the cellular pressure before cutting the plant, the pressure difference vanished, and the regeneration was weakened.

By observing plant regeneration and modifying it with chemical treatments, the scientists identified Auxin concentration and pressure changes as governing processes. Their results advance the understanding of how roots manage to heal wounds and hence survive in sandy soil or the presence of root-attacking herbivores.

Credit: 
Institute of Science and Technology Austria

A carbon sink shrinks in the Arctic

image: Melting ice in the Arctic Ocean is a bellwether for climate change, an apt illustration of environmental changes in a warming world.

Image: 
Photo courtesy of Zhangxian Ouyang

New research by University of Delaware doctoral student Zhangxian Ouyang and oceanographer Wei-Jun Cai, and an international team of researchers, demonstrates that rapid warming and sea-ice loss have induced major changes in the western Arctic Ocean.

The research team's findings -- published Monday, June 15 in Nature Climate Change -- show that the Arctic Ocean's ability to remove carbon dioxide from the atmosphere can vary greatly depending on location.

Arctic Ocean sea-ice loss is a critical consequence of climate change. As sea ice continues to melt in the western Arctic Ocean, more fresh water is entering the upper portion of the water in the Canada Basin, which sits off the coast of Alaska and Canada, south of the Chukchi Shelf.

This summertime melt cycle is exacerbating seasonal changes and increasing the amount of carbon dioxide present in the water's topmost layer, which comprises the upper 70 feet of the water column. This is reducing the basin's capacity to remove carbon dioxide from the atmosphere.

Prevailing thought, based on data measurements from under the ice and in newly melted ocean margin areas in the 1990s and early 2000s, had suggested that when the ice melted it would allow the Arctic Ocean to draw large amounts of carbon dioxide out of the atmosphere, acting as a carbon sink and helping to mitigate greenhouse gases. However, this may not be the case in all places, particularly in the Canada Basin where summer ice retreat has advanced into the deep basin since 2007.

The research team's latest findings are based on an analysis of more than 20 years of global data sets collected between 1994 and 2017 by researchers across the United States, China, Japan and Canada. They provide a more accurate depiction of what is happening in this region and build on Cai's previous work from 2010, which indicated that carbon dioxide levels at the sea surface increase rapidly and unexpectedly toward levels found in the atmosphere in newly ice-free Arctic Ocean basins.

For example, the research team's work showed that as the ice breaks up and melts in the Canada Basin, the meltwater lies on top of the sea surface, creating a "blanket" of sorts that inhibits the ocean's ability to absorb carbon dioxide from the atmosphere into the deep ocean and store it there. Cai's team refers to this phenomenon as a "new normal" created by extreme seasonal warming and meltwater in the region.

"As carbon dioxide accumulates in the surface layer of the water from melting ice, the amount of carbon dioxide this area of the Arctic Ocean can take from the atmosphere will continue to shrink," said Cai, the Mary A.S. Lighthipe Professor in the College of Earth, Ocean and Environment. "We predict by 2030, the Canada Basin's ability to serve as a carbon sink will be really minimal."

Additionally, this rapid increase of carbon dioxide content in the basin may have rapidly acidified the surface water, a process that can endanger marine calcifying organisms and disrupt ecosystem functioning there.

In stark contrast, farther south in the shallow Chukchi Sea, the amount of carbon dioxide in the water's topmost layer remains very low, much lower than what is present in the atmosphere. This means that as air passes over the water's surface, the sea can more quickly absorb carbon dioxide from the air.
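
The contrast between the two regions comes down to the standard bulk formula for air-sea gas exchange: the CO2 flux is proportional to the difference in CO2 partial pressure between surface water and the overlying air. The minimal sketch below illustrates that relationship; the transfer velocity, solubility, and pCO2 values are round numbers we assumed for illustration, not measurements from the study.

```python
# Minimal sketch of the standard bulk formula for air-sea CO2 exchange:
#   F = k * K0 * (pCO2_water - pCO2_air)
# Negative F means the ocean takes up CO2 from the air. All numbers are
# illustrative placeholders, not values from the Nature Climate Change study.

def co2_flux(pco2_water, pco2_air, k=2.4, k0=3.2e-5):
    """Air-sea CO2 flux in mol m^-2 day^-1 (negative = ocean uptake).

    pCO2 values in microatmospheres; k is an assumed gas transfer
    velocity in m/day and k0 an assumed solubility in mol m^-3 uatm^-1.
    """
    return k * k0 * (pco2_water - pco2_air)

ATM = 410.0  # assumed atmospheric pCO2 (uatm)

# Canada Basin-like case: the meltwater "blanket" lets surface pCO2
# climb toward atmospheric levels, so uptake nearly vanishes.
print(co2_flux(390.0, ATM))   # ~ -0.0015: weak carbon sink
# Chukchi Sea-like case: high productivity keeps surface pCO2 low.
print(co2_flux(200.0, ATM))   # ~ -0.016: strong carbon sink
```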

The researchers suggest that this difference is the result of high biological production in the Chukchi Sea due to rich nutrients being transported there on currents coming from the Pacific Ocean since the Bering Strait has opened up due to earlier ice loss. These nutrients enable abundant growth of phytoplankton and other marine organisms that form the base of the marine food web and feed the broader ecosystem. Phytoplankton also consume carbon dioxide dissolved in the water during photosynthesis, allowing more carbon dioxide to be taken from the surrounding atmosphere.

The research team suspects that the Chukchi Sea will become a larger carbon sink in the future and impact the deep ocean carbon cycle and ecosystem, while the Canada Basin likely will remain less so as sea ice in the region continues to melt and change the water chemistry.

According to Lisa Robbins, a retired senior scientist with the United States Geological Survey (USGS) and a co-author on the paper, these changes could have important implications for organisms in the Arctic. For instance, Arctic cod is an important fishery in the western Arctic that contributes to the region's overall economy and serves an important role in the marine food web as a food source for other organisms, such as Beluga whales and ringed seals. Biologists have noted that as temperature and sea ice melt have increased, Atlantic cod are responding by moving farther north. Changing water chemistry also may be playing a role, said Robbins, who led three expeditions to study the region's water chemistry in the Arctic aboard the United States Icebreaker R/V Healy while with the USGS.

Long-term data sets, such as those used in this study, are key to understanding and predicting future changes in the Arctic.

"The amount of insight we get from these data sets into how our earth-ocean works is tremendous. If scientists hadn't collected data in 1994, we wouldn't have a place to start and compare with," said Robbins, now a courtesy professor in the College of Marine Science at University of South Florida.

A 2019 article in Wired magazine found that in northern Canada near Greenland, glacial meltwater seems to be aiding watersheds in absorbing carbon dioxide from the atmosphere. While alone it cannot counterbalance the amount of carbon dioxide in the atmosphere due to carbon emissions, it is an important illustration that the changes aren't uniform and the subsequent effects -- positive and negative -- are the result of a complex combination of multiple different drivers. Further research and more international collaborative efforts can help to answer challenging unanswered questions.

As sea-ice loss accelerates, the researchers expect these seasonal variations will cause the ocean water in the Canada Basin to have high levels of carbon dioxide and become increasingly acidic. This will further reduce the basin's capacity to take up carbon dioxide from the atmosphere and potentially reduce its capacity to mitigate climate change.

While this problem might seem very far away from Delaware, it's important to remember that the ocean is one global system with circulation currents that transport water around the world, even to the Atlantic Ocean on the East Coast. And greenhouse gases are a global issue.

Understanding how fundamentally important ice melt is to driving carbonate chemistry and seasonal changes in carbon dioxide in this region of the Arctic Ocean will help advance the science in this area, maybe not immediately but over the long run, said Cai.

"We are trying to understand the processes at work and if the Arctic Ocean will continue to be a large carbon sink, while providing data that can help Earth systems modelers to predict global changes to the carbon cycle, and the ocean's biology and water chemistry," Cai said.

Credit: 
University of Delaware

Accelerating biological systems design for sustainable biomanufacturing

Northwestern University synthetic biologists have developed a new rapid-prototyping system to accelerate the design of biological systems, reducing the time to produce sustainable biomanufacturing products from months to weeks.

As global challenges like climate change, population growth, and energy security intensify, the need for low-cost biofuels and bioproducts -- like medicines and materials -- produced using sustainable resources increases. Industrial biotechnology, which uses microbial cellular factories to harness sets of enzymes that can convert molecules into desirable chemical products, has shown potential to address these needs. However, designing, building, and optimizing these pathways in cells remains complex and slow, unable to keep up with rapidly shifting needs.

The new platform, called in vitro Prototyping and Rapid Optimization of Biosynthetic Enzymes (iPROBE), provides a quick and powerful design-build-test framework to discover optimal biosynthetic pathways for cellular metabolic engineering that could impact a range of industries from clean energy to consumer products.

"For the first time, we show that cell-free platforms can inform and accelerate the design of industrial cellular systems," said Michael Jewett, Walter P. Murphy Professor of Chemical and Biological Engineering and Charles Deering McCormick Professor of Teaching Excellence at the McCormick School of Engineering, who directs Northwestern's Center for Synthetic Biology. "We accomplished in approximately two weeks what traditionally would have taken six to 12 months. Our findings will help accelerate the pace at which we can enable sustainable biomanufacturing practices."

The platform leverages Northwestern's leadership in cell-free synthetic biology and comes into play in three recently published studies, each led by Jewett.

"iPROBE stands to help scientists identify the best sets of enzymes for a variety of sustainable chemicals and bring them into manufacturing at scale," Jewett said. "We envision this cell-free system as an engine to help realize the future bioeconomy."

Adopting a cell-free approach

"In Vitro Prototyping and Rapid Optimization of Biosynthetic Enzymes for Cell Design," published June 15 in the journal Nature Chemical Biology, describes how iPROBE works.

To manufacture sustainable chemicals, synthetic biologists stitch together protein enzymes to carry out individual molecular transformations, converting readily available feedstocks -- like glucose or carbon dioxide -- into a new product. Current testing methods require that these enzymes be encoded in DNA, placed on a single plasmid, and then inserted into a living cell. The process must be repeated each time to study a different set of enzymes in hopes of finding the optimal grouping.

"The result is that the design cycles are just too slow," Jewett said. "We end up needing hundreds of combined person years of development to bring a product to market. That's too slow to address challenges like climate change and other rapidly growing problems we face."

iPROBE bypasses the limitations of engineering living organisms by using cell-free protein synthesis to enrich biosynthetic enzymes in test tubes, where they carry out the transformations. Combined with computational design algorithms developed by Lockheed Martin, the system rapidly studies pathway enzyme ratios, tunes individual enzymes in the context of the desired multi-step pathway, screens for high-performance enzymes, and discovers enzymes with optimal functionalities.

"iPROBE had to be multifaceted and easy to use," said Ashty Karim, first author on the paper and research fellow and assistant scientific director in the Jewett Lab. "We set out to design a platform that could test hundreds of biosynthetic hypotheses without having to re-engineer microbes simply by mixing and matching enzymes."

Jewett likened the mix-and-match analysis of different enzyme combinations to making a cocktail.

"Imagine you're a bartender interested in making the perfect mixed drink. You would want to bring together all of the possible cocktail ingredients that potentially could be used," Jewett said. "iPROBE allows us to mix and match enzymes in this type of cocktail-based approach to determine the best combinations to carry out the transformation and synthesis of sustainable chemicals -- but instead of taking months to years to do, we can do it in days to weeks."

Finding the optimal pathways in Clostridium

To validate the iPROBE system, the researchers developed optimal biosynthetic pathways for 3-hydroxybutyrate (3-HB) and butanol, two organic compounds, in Clostridium autoethanogenum, a bacterium that naturally produces ethanol by metabolizing carbon monoxide.

"It was important to us that we demonstrated the practical use of the technology," Karim said. "We had this dream solution to increase the pace of biotechnology research and development that could only be realized through the right collaboration."

After identifying the optimal pathways in vitro, the researchers shared them with collaborators at the clean energy startup LanzaTech, which specializes in using Clostridium strains to produce sustainable fuel. Researchers there applied the pathways and found a 20-fold increase in 3-HB production in Clostridium, carrying iPROBE's success in the lab into an industrial setting.

"Working with an organism like Clostridium is difficult; genetic tools are not as sophisticated, high-throughput workflows are often lacking, and there exist transformation idiosyncrasies," Jewett said. "To have this process work successfully pushes a new vision for sustainability. What could be better than turning waste gases from the atmosphere into sustainable chemicals at scale?"

Synthesizing limonene and styrene

In a second paper published in the journal Metabolic Engineering, Jewett and his team focused on applying iPROBE to optimize the synthesis of limonene, a member of a class of organic compounds called terpenes. Limonene is found in the oil of orange and other citrus peels and is responsible for their fruity fragrance. The molecule is not only commonly used to enhance the smell of household cleaners and manufactured foods, but has also shown potential to help advance sustainable fuels.

In a matter of weeks, iPROBE's cell-free approach led to the exploration of hundreds of enzyme combinations to synthesize limonene.

"In the past, people have only been able to study 20 or 30 pathways," Jewett said. "We demonstrated how iPROBE could be applied to this particular biosynthetic pathway and scale not just to 100 or 200 pathways, but 500. It sets a new standard for how cell-free systems can accelerate biological design of an important sustainable chemical."

The third paper, also published in Metabolic Engineering, looked at styrene, a petroleum-derived molecule commonly used in disposable silverware and foam packaging. While past efforts have attempted to synthesize the molecule using living organisms like E. coli, styrene's natural toxicity limited production capacity. With iPROBE, Jewett and his team synthesized the highest amount of styrene yet achieved through a biochemical approach, without additional process enhancements.

"This advance opens the door to one day moving from production processes reliant on fossil fuels to more sustainable, biosynthetic-based strategies," Jewett said.

Credit: 
Northwestern University

Taking a landslide's temperature to avert catastrophe

image: A bird's-eye view of the massive scale of the Vajont landslide, which created a tsunami more than 800 feet tall that came crashing over the Vajont Dam, devastating the nearby village of Casso, Italy.

Image: 
Carolina Seguí, Duke University

DURHAM, N.C. - Engineers from Duke University have developed a comprehensive new model of deep-seated landslides and demonstrated that it can accurately recreate the dynamics of historic and current landslides that occur under various conditions.

Peering past the standard measurements of velocity and water levels, the model points to the temperature of a relatively thin layer of clay at the base of the landslide as critical to its potential for sudden cataclysmic failure. The approach is currently being used to monitor an evolving landslide in Andorra and suggests methods for mitigating the risk of its escalation as well as any other future deep-seated landslides.

The results appear online on June 15 in the Journal of Geophysical Research: Earth Surface.

"I published a paper more than a decade ago that explained what happened at the Vajont Dam, one of the biggest manmade disasters of all-time," said Manolis Veveakis, assistant professor of civil and environmental engineering at Duke. "But that model was extremely limited and constrained to that specific event. This model is more complete. It can be applied to other landslides, providing stability criteria and guidance on when and how they can be averted."

The disaster Veveakis is referring to occurred at the Vajont Dam, one of the tallest in the world at 860 feet, in northern Italy in 1963. After years of attempting to mitigate a slow, incremental landslide of roughly an inch per day in the adjoining mountainside by lowering the water level of the lake behind the dam, the landslide suddenly accelerated without warning. Nearly 10 billion cubic feet of rock plummeted down the gorge and into the lake at almost 70 miles per hour. That created a tsunami more than 800 feet tall that crashed over the dam, completely wiping out several small towns below and killing nearly 2,000 people.

Before the catastrophe occurred, scientists did not believe any potential landslide would result in a tsunami more than 75 feet tall. They long remained puzzled as to how the landslide could have moved so violently and so suddenly.

In 2007, Veveakis put the pieces together and developed a model that fit the scientific observations of the disaster. It showed how water seeping into rock above an unstable layer of clay caused a creeping landslide, which in turn heated up and further destabilized the clay in a feedback loop until it rapidly failed.
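
That feedback loop can be caricatured as two coupled relations: sliding velocity grows as the clay shear band warms and weakens, while the band's temperature rises with frictional heating and falls by diffusion. The toy integration below is our illustrative sketch of that runaway mechanism, not the published model; every coefficient is an arbitrary placeholder.

```python
# Toy sketch of the thermal-runaway feedback in a creeping landslide:
# frictional heating warms a thermally sensitive clay shear band, the
# warmer band is weaker, so the slide speeds up and heats the band
# further. All coefficients are arbitrary illustrative values, not
# parameters from the published Duke model.
import math

dt = 0.01           # time step (arbitrary units)
cooling_rate = 1.0  # diffusive heat loss per unit excess temperature
velocity0 = 0.1     # creep velocity at background temperature

for heating_gain in (1.2, 4.0):  # stand-ins for low vs high external loading
    temp = 0.0                   # excess temperature of the shear band
    for step in range(5000):
        velocity = velocity0 * math.exp(temp)  # hotter band -> weaker -> faster
        temp += (heating_gain * velocity - cooling_rate * temp) * dt
        if velocity > 100 * velocity0:         # runaway: catastrophic failure
            print(f"gain={heating_gain}: runaway after {step} steps")
            break
    else:
        print(f"gain={heating_gain}: creep stabilizes at excess T={temp:.3f}")
```

In this caricature, the excess temperature flags the approach to runaway before the velocity spikes, which is the intuition behind using basal temperature as a stability criterion.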

"Clay is a very thermally sensitive material and it can create a shear band that is very susceptible to friction," said Carolina Segui, a PhD candidate in Veveakis's laboratory and first author of the new paper. "It's the worst material to have in such a critical place and is a nightmare for civil engineers constructing anything anywhere."

This early model, however, used only the last month of data from the Vajont Dam, when the water level was almost constant. It ignored any sort of groundwater variation, essentially assuming that the external loading remained constant. While that model worked to explain the unexpected failure of the Vajont landslide, the model's assumptions made it impossible to offer real-time assessments or use in other scenarios.

In the new study, Veveakis, Seguí and Hadrien Rattez, a postdoctoral researcher in Veveakis's laboratory, plug the old model's holes, giving it the ability to incorporate a combination of time-dependent external loading and internal degradation. The resulting model is able to recreate and predict observations taken from very different deep-seated landslides.

"Traditional landslide models have a static internal material strength, and if you exceed it the landslide fails," said Veveakis. "But in examples such as these, the landslide is already moving because its strength has already been exceeded, so those models don't work. Others have tried to use machine learning to fit the data, which has worked sometimes, but it doesn't explain the underlying physics. Our model incorporates the properties of soft materials, allowing it to be applied to more landslides with different loading characteristics and provide an operational stability criterion by monitoring its basal temperature."

Besides using the model to recreate the movements of the Vajont slide and explain the mechanisms that underpinned its motion over more than two years, Veveakis and Seguí show that their model can accurately recreate and predict the movements of the Shuping landslide, another slow-moving landslide, this one at the Three Gorges Dam in China, the largest dam in the world. But while that landslide is also the result of a manmade lake beside a dam, that's where the similarities end.

Before the Vajont Dam failed, there was a fairly linear relationship between the lake level and the velocity of the creeping landslide: the lower the lake level, the slower the landslide. The Shuping landslide, however, behaves in the opposite manner -- the lower the lake level, the faster the landslide. And while the relationship between lake level and velocity was roughly linear at the Vajont Dam, the Shuping landslide's velocity responds non-linearly, reacting to additional sources of water and loading, such as seasonal monsoons. It is also composed of different materials.

Despite these differences, the researchers' new model is able to accurately reproduce the Shuping landslide's movements over the past decade.

In this case, the researchers do not have direct access to measurements taken from the shear band, a layer less than one meter thick of brown breccia and silty clay. They have to make assumptions about the levels of friction and the internal temperatures to make their model work.

In the mountains of Andorra, however, the slow-moving El Forn landslide threatens the safety of the nearby village of Canillo and is being closely monitored by the government. Unlike in China or Italy, there is no dam or lake involved -- this landslide is being accelerated by melting snow feeding the groundwater in the mountains above the village.

Even though the conditions are completely different from the previous two landslides, the researchers are confident their model is up to the task.

Thanks to the numerous boreholes that have been drilled to gain a better understanding of the El Forn landslide, Veveakis and Seguí have been able to insert thermometers directly into the shear band of a small lobe that is sliding faster than the rest. With this level of data available, the researchers expect to validate and refine their model further, and even provide advice on how to avert a potential catastrophe should one begin to develop.

"One could imagine pumping water out of the ground, or circulating another cold fluid through the shear layer to cool it down and slow the landslide," said Segui. "Or at the very least, if we couldn't stop it, to provide enough warning to evacuate. That is exactly why we are there."

Credit: 
Duke University

Directly printing 3D tissues within the body

image: Image of a 3D lattice structure of a tissue implanted directly onto a soft living tissue.

Image: 
Ohio State University

(LOS ANGELES) - In the TV series Westworld, human body parts are built on robotic frames using 3D printers. While still far from this scenario, 3D printers are being increasingly used in medicine. For example, 3D printing can be used to produce parts of the body such as orthopedic joints and prosthetics, as well as portions of bone, skin and blood vessels. However, the majority of these tissues are created in an apparatus outside of the body and surgically implanted. Such a procedure may involve making large surgical incisions, posing the added risk of infection and increased recovery time for the patient. And since there is a time lapse between when the tissue is created and when it is implanted in the patient, further complications may occur. To prevent these complications, a team of scientists has developed a technology to print tissues directly in the body.

There are two basic components needed to produce an engineered tissue: (1) a fluid-like "bio-ink" that consists of a framework material mixed with living cells, and (2) growth factors to help the cells grow and develop into regenerated tissue. When developing tissues for direct implantation into the body, there are further constraints: the tissue must be constructed at body temperature (37°C), it must attach effectively to soft, living organ tissue, and no procedural step may harm the patient. One problematic step in current methods is the application of UV light, which is harmful to the body but necessary to solidify the constructed tissue.

A collaboration among Ali Khademhosseini, Ph.D., Director and CEO of the Terasaki Institute, David J. Hoelzle, Ph.D., from the Ohio State University Department of Mechanical and Aerospace Engineering, and Amir Sheikhi, Ph.D., from the Pennsylvania State University Department of Chemical Engineering, has produced a specially formulated bio-ink designed for printing directly in the body.

"This bio-ink formulation is 3D printable at physiological temperature, and can be crosslinked safely using visible light inside the body." said first author Ali Asghari Adib, Ph.D. In order to build the tissue, they used robotic 3D printing, which uses robotic machinery affixed with a nozzle. Bio-ink may be dispensed through the nozzle, much like an icing tube squeezes out writing gel, only in a highly-precise, programmable manner.

The team also worked on methods to attach pieces of the tissue formed with this bio-ink onto soft surfaces. In experiments attaching the tissue to pieces of raw chicken and to agarose, the team employed a unique interlocking technique using the robotic 3D printer and their specially formulated bio-ink. The nozzle tip was modified so that it could penetrate the soft surface and fill the punctured space with bio-ink as it withdrew; this created an anchor for the tissue construct. As the nozzle tip reached the surface, it dispensed an additional blob of bio-ink to "lock in" the anchor. "The interlocking mechanism enables stronger attachment of the scaffolds to the soft tissue substrate inside the patient's body," said Asghari Adib.

Such improvements in tissue engineering are instrumental in providing lower-risk, minimally invasive laparoscopic options for procedures such as the repair of tissue or organ defects, the engineering and implantation of patches to enhance ovarian function, or the creation of bio-functional hernia repair meshes. Such options would be safer for the patient, save time and be more cost-effective. Further modifications in tissue engineering design and the adjustment of other conditions may increase the potential for customization, thus leading the way to limitless possibilities for enhancing patient health.

"Developing personalized tissues that can address various injuries and ailments is very important for the future of medicine. The work presented here addresses an important challenge in making these tissues, as it enables us to deliver the right cells and materials directly to the defect in the operating room," said Khademhosseini, "This work synergizes with our Personalized Implant Technology Platform at the Terasaki Institute which aims to develop approaches that address the variability in tissue defects in patients."

Credit: 
Terasaki Institute for Biomedical Innovation

A clique away from more efficient networks

image: The graph (center) contains two cliques, with members of one clique shown in yellow and of the other clique shown in grey.

Image: 
© 2020 KAUST

A framework that uses graph theory, the branch of mathematics that describes how networks are connected, could help make digital communication networks more efficient.

For modeling social networks, no branch of mathematics is more integral than graph theory. The standard representation of a social network, in fact, is a graph. It comprises a set of points with lines joining some of the points. The points represent the network's members, while the lines represent the connections between them.

Working with KAUST's Tareq Al-Naffouri and Mohamed-Slim Alouini, former KAUST student Ahmed Douik, now at Caltech, and former postdoc Hayssam Dahrouj, now at Effat University, have found a further area to which graph theory can be usefully applied: communications and signal processing.

"We've built a framework for using graph theory to solve problems of discrete optimization with excellent results," says Dahrouj. Their method is to formulate a given digital communication network as a graph and then find "cliques" within it. In graph theory, this is known as solving the "clique problem."

In any graph, a clique is a subset of points in which each point is connected to every other point. In a social network that means a group in which each member is friends with every other member in the group. Facebook, for example, solves the clique problem to work out the optimum friend suggestions and advertisements to send each of its many millions of members.
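
To make the object concrete, the short sketch below enumerates the maximal cliques of a toy friendship graph using the classic Bron-Kerbosch algorithm, a standard textbook method; the graph itself is invented for illustration and is not from the KAUST work.

```python
# Bron-Kerbosch algorithm: enumerate all maximal cliques of an
# undirected graph. The friendship graph below is a made-up toy example.

def bron_kerbosch(r, p, x, adj, out):
    """r: current clique, p: candidate vertices, x: already-explored vertices."""
    if not p and not x:
        out.append(set(r))  # r cannot be extended: it is a maximal clique
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, out)
        p.remove(v)
        x.add(v)

# Toy network: two overlapping friend groups joined through C and D.
adj = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
    "D": {"C", "E", "F"}, "E": {"D", "F"}, "F": {"D", "E"},
}

cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
# Prints the three maximal cliques: {A,B,C}, {D,E,F}, and {C,D}.
print(sorted(cliques, key=len, reverse=True))
```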

In previous work, Douik and Dahrouj showed how communications networks can be optimized using the same approach. A base station feeding wireless data to passing cars, for example, can be programmed to send data packets for common use once instead of repeatedly to individual vehicles. Applying the clique problem to large networks can, Douik reckons, improve their throughput by up to 30 percent.

Because the complexity of any graph increases exponentially as it grows in size, computers need clever algorithms to solve the clique problem for all but the smallest graphs. "A huge number of algorithms have been described in more than a century of research into graph theory; some before the appearance of computers," says Douik. "This means there is a rich body of literature waiting to be drawn on."

Another beauty of the approach lies in its future applicability. As networks increase in size and complexity, so do the gains from optimization. Tomorrow's internet of things will feature many more users, with 5G and 6G enabling much larger volumes of data to be accommodated.

Credit: 
King Abdullah University of Science & Technology (KAUST)

Royal Marsden trial leads to practice-changing milestone for advanced anal cancer

Results from the first-ever randomised clinical trial in advanced anal cancer, led and supported by The Royal Marsden NHS Foundation Trust and Cancer Research UK in collaboration with colleagues in the US, Norway and Australia, have led to a practice-changing milestone: a new approach to treatment that is safer and more effective than previously recommended options for this group of patients.

In results published in the Journal of Clinical Oncology today (Friday 12th June 2020), researchers found that a chemotherapy combination of carboplatin and paclitaxel - which is primarily used to treat other cancers, including ovarian, womb and lung cancer - performed better overall, with anal cancer patients living seven months longer than those given chemotherapy with cisplatin and 5-fluorouracil. These results have led to a direct change in recommended treatment guidelines in both European and American clinical practice (the European Society for Medical Oncology and National Comprehensive Cancer Network clinical practice guidelines).

Around 1,300 people are diagnosed with anal cancer each year in the UK and this number is rising by around three per cent per year. Due to small patient populations, there has previously been very limited evidence to guide treatment decisions. The findings from this study have set a new standard of care for this rare type of cancer, gaining international consensus among clinicians for the first time.

The international randomised phase II trial, led by The Royal Marsden NHS Foundation Trust, analysed data from 91 patients in four countries: the UK, Norway, the US and Australia. The study was supported by the International Rare Cancers Initiative (IRCI), a global collaboration aimed at improving outcomes for patients with rare cancers, and is the first IRCI trial of its kind to complete and report its results.

This study was funded by The Royal Marsden's GI and Lymphoma Unit, the NIHR Biomedical Research Centre at The Royal Marsden and The Institute of Cancer Research, London (ICR), Cancer Research UK and The Royal Marsden Cancer Charity.

Study Chief-Investigator Dr Sheela Rao, Consultant Medical Oncologist at The Royal Marsden NHS Foundation Trust, said:

"The results of this study have led to an immediate change in patient care. While treatment with cisplatin and 5-fluorouracil was generally considered a reasonable option for advanced anal cancer, we now know that carboplatin and paclitaxel is more effective and better tolerated. In our study, these patients lived seven months longer overall and experienced less treatment side effects.

"This is a great example of international collaboration for a rare cancer with the results having a direct impact on recommended guidelines which will benefit patients with anal cancer from around the world.

"Around 30 per cent of people with anal cancer will develop advanced disease which cannot be treated surgically, and all of these patients are eligible to receive this chemotherapy combination."

Deborah Pink, 58, was a patient on this trial who has benefitted from receiving the newly recommended chemotherapy combination. Deborah has been receiving treatment at The Royal Marsden since she was diagnosed with anal cancer in 2011. She said:

"After my diagnosis things moved very quickly and I've had a number of different treatments, surgery and radiotherapy. When I first started this trial a few years ago, I had a complete response to the chemotherapy combination of carboplatin and paclitaxel and doctors at the Royal Marsden saw a reduction in the size of my tumour which was such positive news to hear. Being on the trial was a great opportunity and knowing the results are benefitting other patients like myself makes a real difference, it's really encouraging to know that research is being done in this area for such a rare type of cancer. Currently, I am continuing with this type of chemotherapy treatment and am monitored under the hospital with regular scans. Without this treatment and the ongoing care I've had at The Royal Marsden over the last nine years, I might not have been here."

Martin Ledwick, Cancer Research UK's head information nurse, said:

"These results clearly demonstrate the need for ongoing research so that we can continually improve treatments for people with cancer. Without support for well-designed clinical trials, important improvements in treatment like this can't be identified."

Credit: 
The Royal Marsden NHS Foundation Trust

Inhibitory interneurons in hippocampus excite the developing brain

WASHINGTON (June 12, 2020) - Brain function depends on inhibitory cells that balance or 'brake' excitation. These neurons allow the brain to process information and also prevent runaway seizures. A new study from the George Washington University (GW), however, reports that in some critical structures of the developing brain, the inhibitory neurons cause excitation rather than suppression of brain activity. The findings, published in Science Advances, could have implications for the treatment of neonatal seizures.

Gamma-aminobutyric acid (GABA)-releasing, or GABAergic, interneurons mediate inhibition in the mammalian brain. Despite their importance in adults and implication in neurodevelopmental disorders such as autism, the role of interneurons during brain development is poorly understood. A more complete understanding of GABA's role in early network function is essential for designing treatments for neonatal seizures because standard treatments that augment GABA receptor function in infants with seizures are often ineffective.

Researchers at GW tested a longstanding hypothesis that the brain's inhibitory system is actually excitatory during development, which had not previously been proven in vivo.

"This is the first evidence that these neurons are actually excitatory in vivo," said Matthew Colonnese, PhD, associate professor of pharmacology and physiology at the GW School of Medicine and Health Sciences and the senior author on the study. "The inhibitory system within the fetal brain, which in the adult acts as a kind of braking system, instead can act as an accelerator in the young brain."

In this study, the researchers locally manipulated interneuron activity in a murine model. Their results prove that GABAergic neurons are excitatory in the hippocampus at ages equivalent to the early third trimester and only later become inhibitory. The study also showed interneurons in a closely related but different region, the visual cortex, are inhibitory throughout early development.

The team's evidence of GABA's heterogeneity across regions of the brain offers one potential explanation for why simply trying to change early excitatory GABA into inhibitory GABA - as anti-seizure treatments currently do - may not work, explained Colonnese. "It's really going to depend on the origin of the seizures, whether they originate in the cortical or hippocampal tissue, their spread, the infant's age, and the type of seizure."

"Currently, we don't have an effective treatment for neonatal seizures," said Yasunobu Murata, PhD, a member of Colonnese's lab at GW and first author on the study. "My hope is that we can use these findings to develop more targeted therapies for infants suffering from seizure disorders."

Murata and Colonnese see the potential for their data to inform the development of drugs for neonatal seizures. However, having examined just two regions in this study, they note that a more global mapping of where and when interneurons are excitatory or inhibitory is needed, as well as an understanding of what determines whether the neurons act as an accelerator or a brake.

Credit: 
George Washington University

New algorithm uses artificial intelligence to help manage type 1 diabetes

Researchers and physicians at Oregon Health & Science University, using artificial intelligence and automated monitoring, have designed a method to help people with type 1 diabetes better manage their glucose levels.

The research was published in the journal Nature Metabolism.

"Our system design is unique," said lead author Nichole Tyler, an M.D.-Ph.D. student in the OHSU School of Medicine. "We designed the AI algorithm entirely using a mathematical simulator, and yet when the algorithm was validated on real-world data from people with type 1 diabetes at OHSU, it generated recommendations that were highly similar to recommendations from endocrinologists."

That's significant because people with type 1 diabetes typically go three to six months between appointments with their endocrinologist.

In that time, they can be at risk of dangerous complications if glucose levels in their blood rise too high or fall too low. People with type 1 diabetes do not produce their own insulin, so they must take it continuously throughout the day using an insulin pump or multiple daily injections. The algorithm developed by OHSU scientists uses data collected from a continuous glucose monitor and wireless insulin pens to provide guidance on insulin dose adjustments.

Delivered through a smartphone app called DailyDose, the algorithm's recommendations were shown to agree with physicians' recommendations 67.9% of the time.

The new study involved monitoring 16 people with type 1 diabetes over the course of four weeks, showing that the model can help reduce hypoglycemia, or low glucose. If left untreated, hypoglycemia can cause coma or death.

The recommendation engine was developed through a collaboration between the OHSU Harold Schnitzer Diabetes Health Center and the Artificial Intelligence for Medical Systems Lab led by Peter Jacobs, Ph.D., associate professor of biomedical engineering in the OHSU School of Medicine.

"There are other published algorithms on this, but not a lot of clinical studies," said Jacobs, senior author on the study. "Very few have shown a statistically relevant outcome - and most do not compare algorithm recommendations with those of a physician. In addition to showing improvement in glucose control, our algorithm generated recommendations that had very high correlation with physician recommendations with over 99% of the algorithm's recommendations delivered across 100 weeks of patient testing considered safe by physicians."

OHSU intends to continue to advance the technology.

"We have plans over the next several years to run several larger trials over eight and then 12 weeks and to compare DailyDose with other insulin treatment strategies, including automated insulin delivery," said co-author Jessica Castle, M.D., associate professor of medicine (endocrinology, diabetes and clinical nutrition) in the OHSU School of Medicine.

Credit: 
Oregon Health & Science University

University of Michigan researchers identify new approach to turning on the heat in energy-burning fat cells

Researchers have discovered a new set of signals that cells send and receive to prompt one type of fat cell to convert fat into heat. The signaling pathway, discovered in mice, has potential implications for activating this same type of thermogenic fat in humans.

Thermogenic fat cells, also called beige fat or beige adipocytes, have gained attention in recent years for their potential to curb obesity and other metabolic disorders, due to their ability to burn energy stored as fat. But scientists have yet to translate this potential into effective therapies.

The challenge of activating beige fat in humans arises, in part, because this process is regulated through so-called adrenergic signaling, which uses the hormone catecholamine to instruct beige fat cells to start burning energy. But adrenergic signaling also controls other important biological functions, including blood pressure and heartbeat regulation, so activating it in humans with agonists has potentially dangerous side effects.

In a new study scheduled for online publication June 12 in the journal Developmental Cell, a team of researchers led by the University of Michigan Life Sciences Institute describes a pathway that can regulate beige fat thermogenesis independently of adrenergic signaling. Instead, it operates through a receptor protein called CHRNA2, short for Cholinergic Receptor Nicotinic Alpha 2 Subunit.

"This pathway opens a whole new direction for approaching metabolic disorders," said Jun Wu, an assistant professor at the LSI and the study's senior author. "Of course, this cholinergic pathway also is involved in other important functions, so there is still much work to do to really figure out how this might work in humans. But we are encouraged by these initial findings."

For their study, Wu and her colleagues blocked the CHRNA2 pathway only in adipocytes in mice, and then fed the mice a high-fat diet. Without the CHRNA2 receptor proteins, the mice showed greater weight gain than normal mice, and were less able to activate thermogenesis in response to excess food intake.

Wu believes the findings are particularly exciting in light of another research team's recent discovery of a new type of beige fat that is not regulated by catecholamine. This newest study from the LSI indicates that this subpopulation of beige fat, called glycolytic beige fat (or g-beige fat), can be activated through the CHRNA2 pathway.

"Many patients with metabolic disorders have catecholamine resistance, meaning their cells do not detect or respond to catecholamine," said Wu, who is also an assistant professor of molecular and integrative physiology at the U-M Medical School.

"So even if it could be done safely, activating that adrenergic pathway would not be an effective treatment option for such patients. This new pathway, with this new subtype of beige fat, could be the beginning of a whole new chapter for approaching this challenge."

Credit: 
University of Michigan

Changes in frequency of sex among US adults

What The Study Did: This survey study of U.S. adults ages 18 to 44 looked at changes in the reported frequency of sexual activity, the number of sexual partners and factors associated with frequency and numbers of partners.

Authors: Peter Ueda, M.D., Ph.D., of the Karolinska Institutet in Stockholm, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2020.3833)

Editor's Note: The article includes conflicts of interest disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Silicones may lead to cell death

image: Silicone molecules from breast implants can initiate processes in human cells that lead to cell death, researchers from Radboud University report in Scientific Reports.

Image: 
Radboud University

Silicone molecules from breast implants can initiate processes in human cells that lead to cell death. Researchers from Radboud University have demonstrated this in a new study that will be published on 12 June in Scientific Reports. "However, there are still many questions about what this could mean for the health effects of silicone breast implants. More research is therefore urgently needed," says Ger Pruijn, professor of Biomolecular Chemistry at Radboud University.

The possible side effects of silicone breast implants have been debated for decades. There are known cases where the implants have led to severe fatigue, fever, muscle and joint aches, and difficulty concentrating. However, no scientific study has yet demonstrated an effect of silicone molecules on human cells that could explain these side effects.

Silicone in the body

It is well known that breast implants 'bleed': silicone molecules from the implant pass through the shell and enter the body. Earlier research in 2016 by plastic surgeon Dr Rita Kappel and Radboud university medical center found that these silicone molecules can then migrate through the body via the bloodstream or lymphatic system. The biochemists at Radboud University next asked themselves the follow-up question: what effect might silicone molecules have on the cells exposed to them?

Cultured cells

Experiments with cultured cells showed that silicones appeared to initiate molecular processes that lead to cell death. "We observed similarities with molecular processes related to programmed cell death, a natural process called apoptosis that has an important function in clearing cells in our body. This effect appeared to depend on the dose of silicone and the size of the silicone molecules. The smaller the molecule, the stronger the effect," according to Pruijn.

To investigate the effect of silicones on human cells, the researchers added small silicone molecules - which also occur in silicone breast implants - to three different types of cultured human cells. "One cell type was more sensitive to the effect of silicones than the other two. This suggests that the sensitivity of human cells to silicones varies."

Open questions

The effects the researchers have found lead to many new questions. "We observed that silicones induce molecular changes in cells, but we don't know yet whether these changes could, for example, lead to an autoimmune response, which could in part explain the negative side effects of implants," says Pruijn.

"Caution is advised with drawing conclusions based on these findings because we used cultured cells in our research, not specific human cells such as brain cells or muscle cells. Further research is required to get more clarity."

Credit: 
Radboud University Nijmegen

Scientists call for long-term research on ozone source apportionment

Tropospheric ozone is produced via the photochemical reaction of volatile organic compounds, carbon monoxide, and nitrogen oxides in the presence of sunlight. Over the past 20 years, serious ozone pollution has been found in the most highly populated and industrialized city clusters in China, such as the Beijing-Tianjin-Hebei, Yangtze River Delta, Pearl River Delta, and Sichuan Basin regions.

Controlling ozone pollution is a great challenge because of the diverse range of precursors and the nonlinear relationship between ozone concentrations and these precursors. Accurately determining the main sources of ozone is therefore the key to formulating a reasonable and efficient ozone control strategy.

A recent article published in Atmospheric and Oceanic Science Letters by the research team of Prof. Zhang Meigen, from the State Key Laboratory of Atmospheric Boundary Layer Physics and Atmospheric Chemistry, Institute of Atmospheric Physics, Chinese Academy of Sciences, summarizes the approaches and main conclusions of studies on the regional and sectoral sources of ozone and its precursors in China, including back-trajectory analyses and ozone source apportionment based on the observation-based method (OBM) and the emissions-based method (EBM).

"The OBM avoids the uncertainties of the emission inventories required in the EBM. However, in practical applications, results based on the OBM are limited to a fixed time and fixed site, without the regional results obtained, due to the high requirements for the accuracy and representativeness of the observational data," explains Professor Zhang.

The results of ozone source apportionment over China suggest that, in addition to contributions from local and surrounding areas, super-regional (outside the simulated area) transport also makes a significant contribution to the high concentrations of ozone in China, indicating the importance of strengthening regional defense and control measures. Mobile and industrial sources are the major sources of surface ozone, and this is where future control actions should be focused.

"Existing ozone sensitivity analysis and source apportionment are mostly limited to a single ozone pollution episode or one particular month, with few long-term studies," says Professor Zhang. "In the future, more long-term research should be carried out to provide references for the development of emission reduction strategies to achieve long-term ozone attainment in China."

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Statistical analyses of plant metabolites allow solid testing of plant defense theories

image: The African cotton leafworm Spodoptera littoralis feeds on the leaves of the wild tobacco Nicotiana attenuata. This pest insect is a food generalist and feeds on many different plants. Extensive analyses of the metabolites produced by the plant after attack by the two herbivores, combined with statistical measures based on information theory, demonstrated that the chemical defense of tobacco plants is a directed response to herbivore attack.

Image: 
Danny Kessler, Max Planck Institute for Chemical Ecology

Do plants attacked by herbivores produce the substances that are most effective against their attackers in a targeted manner, or are herbivore-induced changes in plant metabolism random, which could thwart the performance of herbivores? Scientists at the Max Planck Institute for Chemical Ecology in Jena, Germany, and at the CNRS Institute of Plant Molecular Biology/University of Strasbourg, France, tested these long-standing hypotheses for the first time using the coyote tobacco Nicotiana attenuata and its close relatives. They combined extensive measurements of known and unknown plant metabolites using mass spectrometry with statistical measures derived from information theory. The results show that plants regulate their metabolism directionally to produce effective defenses. Furthermore, a comparative approach using different populations and closely related species demonstrated that the amounts of certain plant hormones are crucial for the directionality of the plant's response to its enemy. (Science Advances, doi: 10.1126/sciadv.aaz0381, June 2020).

All living organisms on earth can be divided into two major groups: those that produce their own food from abiotic sources, such as light, and those that feed on other organisms. These different ways of feeding affect an organism's metabolism. Plants, which gain their energy from light, produce a much greater diversity of metabolites than animals. Scientists have long wondered which evolutionary forces are behind this difference. As early as the 1950s, researchers assumed that the ability of plants to produce certain substances to defend themselves could be one reason. There are two different theories on how the production of such defensive substances is regulated in a plant: One possibility is that plants adapt their metabolism in such a way that they produce compounds with defensive functions according to the probability of a future attack. This theory is called "Optimal Defense". In contrast, a second theory assumes that plants change their metabolism randomly. For herbivores, these random changes would not be predictable, and therefore they could not adapt to a plant's defense. This assumption is called the theory of a Moving Target, because the defense strategy is not targeted, but hits attackers at random.

Even though many of the individual defensive substances plants produce after being attacked have by now been well characterized, it had not previously been possible to test experimentally which of the two theories applies on a broad metabolic scale. To answer this question, scientists from the Max Planck Institute for Chemical Ecology and the CNRS Institute of Plant Molecular Biology/University of Strasbourg studied the ecological model plant Nicotiana attenuata after attack by two different herbivores. The larvae of the tobacco hawkmoth are specialists that feed only on nightshade plants and are generally well adapted to the defenses of their food plants, while the caterpillars of the African cotton leafworm are generalists that can feed on many different plants; however, they are less adapted to individual plant species and their defensive compounds. The scientists used mass spectrometry to analyze known as well as new and unknown metabolites that the plants formed after herbivore attack and applied statistical principles from information theory in their data evaluation. Modern methods of mass spectrometry allow for an unbiased measurement of as many substances as possible. "What is then needed is a means of statistically scoring metabolic diversity within a plant extract and comparing it across multiple experimental conditions and different plant species. In addition, it is crucial to identify the compounds present in a given plant sample, which remains a cornerstone challenge in metabolomics. In this study, we provide such an approach, combining innovations in computational metabolomics with statistical concepts developed as part of information theory, and we applied this approach to test the predictions of plant defense theories," explains Emmanuel Gaquerel, one of the study leaders.
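
One standard information-theoretic score for such a profile is its Shannon entropy over relative metabolite abundances: high entropy means the plant spreads mass over many compounds, low entropy means it concentrates on a few. The snippet below is a generic illustration of that calculation with made-up intensities, not the pipeline used in the paper.

```python
# Shannon entropy of a metabolite profile: a basic information-theory
# score for metabolic diversity. Intensities are made-up mass-spec peak
# areas, not data from the Nicotiana attenuata study.
import math

def shannon_entropy(intensities):
    """Entropy (bits) of relative abundances; higher = more diverse profile."""
    total = sum(intensities)
    probs = [i / total for i in intensities if i > 0]
    return -sum(p * math.log2(p) for p in probs)

control  = [120, 110, 95, 130, 100, 105]  # mass spread evenly over metabolites
attacked = [500, 20, 15, 400, 10, 5]      # mass shifted to a few defense compounds

print(f"control:  {shannon_entropy(control):.2f} bits")
print(f"attacked: {shannon_entropy(attacked):.2f} bits (lower = more directed)")
```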

"The computational workflow that allowed for unbiased and holistic analysis of plant metabolism described in our paper clearly shows that Nicotiana attenuata's metabolic changes when attacked by both generalist and specialist caterpillars is highly directional," Ian Baldwin, one of the senior authors, summarizes the results of the study. The study thus backs up the theory of optimal defense: plants reorganize their metabolism in a way that is directional towards the formation of defensive substances.

"It came as a surprise to us that statistical indices obtained for metabolic profiles resulting from feeding by the two herbivores, a nocturnal feeding generalist and a nightshade specialist, largely overlapped despite their distinct feeding behaviors," Dapeng Li, the first author of the study, says. In order to find out how this defense strategy evolved and which compounds provide a crucial contribution to its success, the scientists used plants in which a trait had been genetically modified, as well as plant populations of the same species and various closely related species. They discovered that the metabolic changes in response to herbivore attack are primarily controlled by marginal modifications in the signaling cascade of plant hormones, in particular of jasmonic acid.

In further experiments, the researchers want to apply this computational workflow to understand how circadian and diurnal patterns influence metabolism. This is a fundamental question for all organisms that depend directly on sunlight for their nutrition, since sunlight is available only at certain times of the day because of the earth's rotation.

Credit: 
Max Planck Institute for Chemical Ecology