Moffitt uses mathematical modeling to identify factors that determine adaptive therapy success

TAMPA, Fla. - One of the most challenging issues in cancer therapy is the development of drug resistance and subsequent disease progression. In a new article featured on this month's cover of Cancer Research, Moffitt Cancer Center researchers, in collaboration with Oxford University, report results from their study using mathematical modeling to show that cell turnover impacts drug resistance and is an important factor that governs the success of adaptive therapy.

Cancer treatment options have increased substantially over the past few decades; however, many patients eventually develop drug resistance. Physicians strive to overcome resistance by either trying to target cancer cells through an alternative approach or targeting the resistance mechanism itself, but success with these approaches is often limited, as additional resistance mechanisms can arise.

Researchers in Moffitt's Integrated Mathematical Oncology Department and Center of Excellence for Evolutionary Therapy believe that resistance may partly develop because of the high doses of drugs that are commonly used during treatment. Patients are typically administered a maximum tolerated dose of therapy that kills as many cancer cells as possible with the fewest side effects. However, according to evolutionary theories, this maximum tolerated dose approach could lead to drug resistance because of the existence of drug resistant cells before treatment even begins. Once sensitive cells are killed by anti-cancer therapies, these drug resistant cells are given free rein to divide and multiply. Moffitt researchers believe an alternative treatment strategy called adaptive therapy may be a better approach to kill cancer cells and minimize the development of drug resistance.

"Adaptive therapy aims not to eradicate the tumor, but to control it. Therapy is applied to reduce tumor burden to a tolerable level but is subsequently modulated or withdrawn to maintain a pool of drug-sensitive cancer cells," said Alexander Anderson, Ph.D., chair of the Integrated Mathematical Oncology Department and founding director of the Center of Excellence for Evolutionary Therapy.

Previous laboratory studies have shown that adaptive therapy can prolong the time to cancer progression for several different tumor types, including ovarian, breast and melanoma. Additionally, a clinical trial in prostate cancer patients at Moffitt has shown that compared to standard treatment, adaptive therapy increased the time to cancer progression by approximately 10 months and reduced the cumulative drug usage by 53%.

Despite these encouraging results, it is unclear which tumor types will respond best to adaptive therapy in the clinic. Recent studies have shown that the success of adaptive therapy is dependent on different factors, including levels of spatial constraint, the fitness of the resistant cell population, the initial number of resistant cells and the mechanisms of resistance. However, it is unclear how the cost of resistance factors into a tumor's response to adaptive therapy.

The cost of resistance refers to the idea that cells that become resistant have a fitness advantage over non-resistant cells when a drug is present, but this may come at a cost, such as a slower growth rate. However, drug resistance is not always associated with a cost and it is unclear whether a cost of resistance is necessary for the success of adaptive therapy.

The research team at Moffitt used mathematical modeling to determine how the cost of resistance is associated with adaptive therapy. They modeled the growth of drug sensitive and resistant cell populations under both continuous therapy and adaptive therapy conditions and compared their time to disease progression in the presence and absence of a cost of resistance.

The researchers showed that tumors with higher cell density and those with smaller levels of pre-existing resistance did better under adaptive therapy conditions. They also showed that cell turnover is a key factor that impacts the cost of resistance and the outcome of adaptive therapy by increasing competition between sensitive and resistant cells. To demonstrate this, they made use of phase plane techniques, which provide a visual way to dissect the dynamics of mathematical models.
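The release does not reproduce the study's equations, but the modeling approach can be sketched with a minimal Lotka-Volterra-style competition model in Python: drug-sensitive and resistant populations share a carrying capacity, resistance carries a growth-rate cost, turnover removes cells from both populations, and a simple on/off dosing rule stands in for adaptive therapy. Every parameter value, the 50% dosing threshold, and the 120% progression definition below are illustrative assumptions, not the values used in the study.

# Minimal, illustrative Lotka-Volterra competition sketch of continuous vs. adaptive
# therapy. All parameter values are assumptions for demonstration only; they are
# NOT the values used in the Moffitt/Oxford study.
r_s, r_r = 0.035, 0.030   # growth rates: resistant cells pay a "cost" (slower growth)
d_turn   = 0.003          # cell turnover (natural death) rate
d_drug   = 0.075          # additional drug-induced death rate, sensitive cells only
K        = 1.0            # shared carrying capacity (competition for space and resources)

def simulate(adaptive, S=0.69, R=0.01, dt=0.1, t_max=3000.0):
    # Forward-Euler run; returns the time at which tumor burden exceeds 120% of baseline,
    # used here as a simple stand-in for "progression".
    baseline = S + R
    t, on = 0.0, True
    while t < t_max:
        burden = S + R
        if burden > 1.2 * baseline:
            return round(t)
        if adaptive:                      # treat until burden falls to half its baseline,
            if burden >= baseline:        # then withdraw until it climbs back to baseline
                on = True
            elif burden <= 0.5 * baseline:
                on = False
        comp = 1.0 - burden / K
        S += dt * (r_s * S * comp - d_turn * S - (d_drug * S if on else 0.0))
        R += dt * (r_r * R * comp - d_turn * R)
        t += dt
    return float("inf")

print("continuous therapy progresses at day", simulate(adaptive=False))
print("adaptive therapy progresses at day", simulate(adaptive=True))

Because the dosing rule keeps a pool of sensitive cells alive, the resistant population faces more competition and, in this toy model, progression arrives later under the adaptive schedule than under continuous dosing.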

"I'm a very visual person and find that phase planes make it easy to gain an intuition for a model. You don't need to manipulate equations, which makes them great for communicating with experimental and clinical collaborators. We are honored that Cancer Research selected our collage of phase planes for their cover and hope this will help making mathematical oncology accessible to more people," said Maximilian Strobl, lead study author and doctoral candidate at University of Oxford.

To confirm their models, the researchers analyzed data from 67 prostate cancer patients undergoing intermittent therapy treatment, a predecessor of adaptive therapy.

"We find that even though our model was constructed as a conceptual tool, it can recapitulate individual patient dynamics for a majority of patients, and that it can describe patients who continuously respond, as well as those who eventually relapse," said Anderson.

While more studies are needed to understand how adaptive therapies may benefit patients, researchers are hopeful their data will lead to better indicators of which tumors will respond to adaptive therapy.

"With better understanding of tumor growth, resistance costs, and turnover rates, adaptive therapy can be more carefully tailored to patients who stand to benefit from it the most and, more importantly, highlight which patients may benefit from multi-drug approaches," said Anderson.

Credit: 
H. Lee Moffitt Cancer Center & Research Institute

Oncotarget: AKT isoforms have discrete expression in triple negative breast cancers

image: Down-regulation of AKT2 decreases tumorigenic capacity (A) Tumorigenic capacity in vivo. 1 × 106 cells were injected into the right flank of female Balb/c mice. Representative photo shows differential volume of tumor at 30 days after cisplatin treatment and tumor burden was assessed after every 3 days in injected animals, for a total of 30 days. Error bars represent mean ± S. D. versus control.

Oncotarget recently published "AKT isoforms have discrete expression in triple negative breast cancers and roles in cisplatin sensitivity," which reported that the authors investigated the expression and net effect of the individual isoforms in triple-negative breast cancers, and their role in the response to cisplatin treatment, using cellular models, mouse models and clinical samples.

Interestingly, analysis of the expression of AKT isoforms in clinical samples showed relatively higher expression of AKT1 in primary tissues, whereas lung and liver metastatic samples showed elevated expression of AKT2.

Similarly, triple-negative breast cancer cell lines, BT-549 and MDA-MB-231, with high proliferative and invasive properties, displayed higher expression levels of AKT1/2. By modulating AKT isoform expression in MCF-10A and BT-549 cell lines, the Oncotarget authors found that the presence of AKT2 was associated with invasiveness, stemness and sensitivity to drug treatment.

It was further demonstrated that loss of function of the AKT1 isoform is associated with reduced sensitivity to cisplatin treatment in triple-negative breast cancer cell models and syngeneic mouse models.

The decrease in cisplatin response in shAKT1 cells was linked to upregulation of the transporter protein ABCG2, whereas silencing of ABCG2 restored cisplatin sensitivity in these cells through the AKT/SNAIL/ABCG2 axis.

In conclusion, the Oncotarget study demonstrated the varied expression of AKT isoforms in triple-negative breast cancers and also confirmed the differential roles of the isoforms in stemness, invasiveness and response to cisplatin treatment.

"The Oncotarget study demonstrated the varied expression of AKT isoforms in triple-negative breast cancers."

Dr. Fayaz Malik from The Academy of Scientific and Innovative Research as well as The CSIR-Indian Institute of Integrative Medicine said, "Breast cancer is the second-most lethal cancer in women around the world."

Their team explored cisplatin resistance mediated by the AKT2 and AKT3 isoforms in malignant human uterine cancer cells.

Nevertheless, to understand the significance of the outcomes driven by the AKT isoforms in normal versus malignant breast tissue, it is important to characterize which AKT isoform leads to oncogenesis and which exerts self-contradictory effects, both promoting and impeding neoplastic phenotypes.

Therefore, the authors sought to determine the isoform-specific functions of AKT in triple-negative breast cancers.

To this end, they modulated AKT isoform expression in the nonmalignant immortalized human mammary cell line MCF-10A and the malignant breast cancer cell line BT-549 by knocking down endogenous AKT isoforms using short hairpin RNA.

Interestingly, analysis of triple-negative breast cancer clinical samples from primary and metastatic sites showed differential expression of AKT isoforms. These studies highlight the role of specific AKT isoforms in invasiveness and poor response to cisplatin treatment in triple-negative breast cancers, which needs to be evaluated further for the development of isoform-specific inhibitors and better clinical outcomes.

The Malik Research Team concluded in their Oncotarget Research Paper that the present work showed that the expression of AKT isoforms varies between primary and secondary sites of TNBCs, a finding that needs to be validated further in larger sample sizes.

These studies unraveled the specific roles of AKT isoforms in stemness, invasion and the therapeutic response to cisplatin in TNBCs, suggesting that it is imperative to precisely design isoform-specific inhibitors for the treatment of aggressive triple-negative breast cancers.

DOI - https://doi.org/10.18632/oncotarget.27746

Full text - https://www.oncotarget.com/article/27746/text/

Correspondence to - Fayaz Malik - fmalik@iiim.ac.in

Keywords -
AKT isoform,
CSCs,
ABCG2,
drug resistance,
TNBCs

About Oncotarget

Oncotarget is a biweekly, peer-reviewed, open access biomedical journal covering research on all aspects of oncology.

To learn more about Oncotarget, please visit https://www.oncotarget.com or connect with:

SoundCloud - https://soundcloud.com/oncotarget
Facebook - https://www.facebook.com/Oncotarget/
Twitter - https://twitter.com/oncotarget
LinkedIn - https://www.linkedin.com/company/oncotarget
Pinterest - https://www.pinterest.com/oncotarget/
Reddit - https://www.reddit.com/user/Oncotarget/

Oncotarget is published by Impact Journals, LLC. Please visit http://www.ImpactJournals.com or connect with @ImpactJrnls

Journal

Oncotarget

DOI

10.18632/oncotarget.27746

Credit: 
Impact Journals LLC

Teens may be more likely to use marijuana after legalization for adult recreational use

image: Legal marijuana

Image: 
Photo by Erik Mclean from Pexels

PISCATAWAY, NJ - Adolescents who live in California may be more likely to use marijuana since adult recreational marijuana use was legalized in 2016, according to a new report in the Journal of Studies on Alcohol and Drugs.

"The apparent increase in marijuana use among California adolescents after recreational marijuana legalization for adult use in 2016 is surprising given the steady downward trend in marijuana use during years before legalization," says lead researcher Mallie J. Paschall, Ph.D., senior research scientist at the Prevention Research Center of the Pacific Institute for Research and Evaluation in Berkeley, California.

Paschall and his colleagues analyzed data from over three million 7th, 9th, and 11th graders who participated in the California Healthy Kids Survey from 2010-2011 through 2018-2019 school years. The adolescents provided information on their grade, sex, ethnicity, race and lifetime and past-30-day marijuana use. The marijuana use question was updated in the 2017-2018 and 2018-2019 surveys to include the words "smoke, vape, eat, or drink," reflecting the wide variety of marijuana products now available.

The researchers observed significant increases in the prevalence of lifetime and past-30-day marijuana use among nearly all demographic groups from 2017-2018 to 2018-2019 school years, after legalization of adult recreational use: an 18% increase in the likelihood of lifetime use and a 23% increase in past-30-day use. These numbers may reflect greater use of vaping products, and the overall increase was even more likely among those in demographic groups with historically lower rates of marijuana use.

"I was somewhat surprised to see relatively greater increases in the prevalence of marijuana use among younger adolescents (7th graders) relative to 9th and 11th graders, among females versus males, among non-Hispanic versus Hispanic youth, and among Whites versus youth in other racial groups," says Paschall. "In other words, there were greater increases in marijuana use prevalence after recreational marijuana legalization among youth in 'low-risk' groups, which is concerning."

Paschall says he can only speculate as to the reason, but that the greater increases in these normally low-risk groups may be attributed to marijuana use becoming more normative due to legalization, along with relatively greater overall declines in marijuana use among youth in historically 'high-risk' groups during the study period.

The study also indicated greater increases in the frequency of past-30-day marijuana use among older adolescents, males, African American and Asian youth who were regular users. There were notable increases in marijuana use frequency among adolescents in 2018-19, which may reflect national increases in the use of vaping products.

"Recreational marijuana legalization may be contributing to an increase in marijuana use among adolescents in California, but we need to do further research to confirm this," says Paschall. "We also need to look more closely at what's happening at the local level, because there is a lot of variation in marijuana policies in communities across California and the United States. Also, we need to know more about how adolescents are getting marijuana and what forms of marijuana they are using, since there is such a great variety of cannabis products available."

The researchers suggest that recreational marijuana legalization may present increased opportunities for adolescents to obtain marijuana and that the increasing availability of non-smoking products such as edibles may prove appealing as well.

"I'm interested in whether recreational marijuana legalization for adult use may affect use among adolescents, possibly by changing norms regarding the acceptability of marijuana use, perceived harms of marijuana use, or availability of marijuana to youth," says Paschall.

Paschall and his colleagues also write that states and communities that have legalized adult recreational marijuana use and sales could benefit from implementing both stricter controls on the availability of marijuana to adolescents and evidence-based prevention programs.

Credit: 
Journal of Studies on Alcohol and Drugs

Lower testosterone during puberty increases the brain's sensitivity to it in adulthood

image: Changes in blood flow in brain regions when viewing angry and ambiguous facial expressions. Purple = low puberty testosterone, gray = medium, and green = high.

Image: 
Liao et al., JNeurosci 2021

Young men with lower testosterone levels throughout puberty become more sensitive to how the hormone influences the brain's responses to faces in adulthood, according to new research published in JNeurosci.

During prenatal brain development, sex hormones like testosterone organize the brain in permanent ways. But research suggests that testosterone levels during another developmental period -- puberty -- may have long-lasting effects on brain function, too.

Liao et al. examined the relationship between puberty testosterone levels and the brain's response to faces. Liao's team recruited 500 men around age 19 who had been participants in the Avon Longitudinal Study of Parents and Children, a British birth cohort study established in 1991-1992. The longitudinal study collected blood samples at several time points throughout puberty, which the research team used to determine testosterone levels. The study participants were asked to watch videos of facial expressions while in an fMRI scanner and provide a saliva sample on the day of the scan. For men with the lowest testosterone levels during puberty, high levels of testosterone on the day of the fMRI scan were linked to greater brain activity in areas sensitive to faces. However, men with higher levels of testosterone throughout puberty did not show an increase in activity in these brain areas with high testosterone levels. These results highlight that an individual's history, not just their state on a given day, may contribute to the individual differences often seen in brain responses.

Credit: 
Society for Neuroscience

CO2 dip may have helped dinosaurs walk from South America to Greenland

image: A cliff in Jameson Land Basin in central East Greenland, the northernmost site where sauropodomorph fossils are found. The labels point out several series of layers that helped the researchers precisely date the oldest sauropodomorph fossils in North America.

Image: 
Lars Clemmensen

A new paper refines estimates of when herbivorous dinosaurs must have traversed North America on a northerly trek to reach Greenland, and points out an intriguing climatic phenomenon that may have helped them along the journey.

The study, published today in Proceedings of the National Academy of Sciences, is authored by Dennis Kent, adjunct research scientist at Columbia University's Lamont-Doherty Earth Observatory, and Lars Clemmensen from the University of Copenhagen.

Previous estimates suggested that sauropodomorphs -- a group of long-necked, herbivorous dinosaurs that eventually included Brontosaurus and Brachiosaurus -- arrived in Greenland sometime between 225 and 205 million years ago. But by painstakingly matching up ancient magnetism patterns in rock layers at fossil sites across South America, Arizona, New Jersey, Europe and Greenland, the new study offers a more precise estimate: It suggests that sauropodomorphs showed up in what is now Greenland around 214 million years ago. At the time, the continents were all joined together, forming the supercontinent Pangea.

With this new and more precise estimate, the authors faced another question. Fossil records show that sauropodomorph dinosaurs first appeared in Argentina and Brazil about 230 million years ago. So why did it take them so long to expand into the Northern Hemisphere?

"In principle, the dinosaurs could have walked from almost one pole to the other," explained Kent. "There was no ocean in between. There were no big mountains. And yet it took 15 million years. It's as if snails could have done it faster." He calculates that if a dinosaur herd walked only one mile per day, it would take less than 20 years to make the journey between South America and Greenland.

Intriguingly, Earth was in the midst of a tremendous dip in atmospheric CO2 right around the time the sauropodomorphs would have been migrating 214 million years ago. Until about 215 million years ago, the Triassic period had experienced extremely high CO2 levels, at around 4,000 parts per million -- about 10 times higher than today. But between 215 and 212 million years ago, the CO2 concentration halved, dropping to about 2,000 parts per million.

Although the timing of these two events -- the plummeting CO2 and the sauropodomorph migration -- could be pure coincidence, Kent and Clemmensen think they may be related. In the paper, they suggest that the milder levels of CO2 may have helped to remove climatic barriers that may have trapped the sauropodomorphs in South America.

On Earth, areas around the equator are hot and humid, while adjacent areas in low latitudes tend to be very dry. Kent and Clemmensen say that on a planet supercharged with CO2, the differences between those climatic belts may have been extreme -- perhaps too extreme for the sauropodomorph dinosaurs to cross.

"We know that with higher CO2, the dry gets drier and the wet gets wetter," said Kent. 230 million years ago, the high CO2 conditions could have made the arid belts too dry to support the movements of large herbivores that need to eat a lot of vegetation to survive. The tropics, too, may have been locked into rainy, monsoon-like conditions that may not have been ideal for sauropodomorphs. There is little evidence they ventured forth from the temperate, mid-latitude habitats they were adapted to in Argentina and Brazil.

But when the CO2 levels dipped 215-212 million years ago, perhaps the tropical regions became more mild, and the arid regions became less dry. There may have been some passageways, such as along rivers and strings of lakes, that would have helped sustain the herbivores along the 6,500-mile journey to Greenland, where their fossils are now abundant. Back then, Greenland would have had a temperate climate similar to New York state's climate today, but with much milder winters, because there were no polar ice sheets at that time.

"Once they arrived in Greenland, it looked like they settled in,'" said Kent. "They hung around as a long fossil record after that."

The idea that a dip in CO2 could have helped these dinosaurs to overcome a climatic barrier is speculative but plausible, and it seems to be supported by the fossil record, said Kent. Sauropodomorph body fossils have not been found in the tropical and arid regions of this time period -- although their footprints do occasionally turn up -- suggesting they did not linger in those areas.

Next, Kent hopes to continue working to better understand the big CO2 dip, including what caused it and how quickly CO2 levels dropped.

Credit: 
Columbia Climate School

Posttraumatic stress after natural disasters

What The Study Did: Data from four studies of children and adolescents exposed to major U.S. hurricanes were pooled to examine posttraumatic stress symptoms after those events and the factors associated with them.

Authors: Betty S. Lai, Ph.D., of Boston College in Chestnut Hill, Massachusetts, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/ 

(doi:10.1001/jamanetworkopen.2020.36682)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

New physics rules tested on quantum computer

image: A quantum circuit

Image: 
Aalto University

Aalto researchers have used an IBM quantum computer to explore an overlooked area of physics, and have challenged cherished, 100-year-old notions about information at the quantum level.

The rules of quantum physics - which govern how very small things behave - use mathematical operators called Hermitian Hamiltonians. Hermitian operators have underpinned quantum physics for nearly 100 years, but recently theorists have realized that it is possible to extend its fundamental equations to Hamiltonians that are not Hermitian. The new equations describe a universe with its own peculiar set of rules: for example, by looking in the mirror and reversing the direction of time, you should see the same version of you as in the actual world. In their new paper, a team of researchers led by Docent Sorin Paraoanu used a quantum computer to create a toy universe that behaves according to these new rules. The team includes Dr. Shruti Dogra from Aalto University, first author of the paper, and Artem Melnikov, from MIPT and Terra Quantum.

The researchers made qubits, the part of the quantum computer that carries out calculations, behave according to the new rules of non-Hermitian quantum mechanics. They demonstrated experimentally a couple of exciting results which are forbidden by regular Hermitian quantum mechanics. The first discovery was that applying operations to the qubits did not conserve quantum information - a principle so fundamental to standard quantum theory that its apparent violation underlies currently unsolved problems like Stephen Hawking's black hole information paradox. The second exciting result came when they experimented with two entangled qubits.

Entanglement is a type of correlation that appears between qubits, as if they shared a magic connection that makes them behave in sync with each other. Einstein was famously very uncomfortable with this concept, referring to it as "spooky action at a distance". Under regular quantum physics, it is not possible to alter the degree of entanglement between two particles by tampering with one of the particles on its own. However, in non-Hermitian quantum mechanics, the researchers were able to alter the level of entanglement of the qubits by manipulating just one of them: a result that is expressly off-limits in regular quantum physics.
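The team's actual circuits are not described in this release, but the flavor of the entanglement result can be reproduced numerically: evolving just one qubit of an entangled pair under a non-Hermitian Hamiltonian (and renormalizing the state) changes the pair's entanglement, whereas any Hermitian, and hence unitary, single-qubit operation leaves it untouched. The sketch below uses a generic non-Hermitian Hamiltonian chosen only for illustration, not the operator implemented on the IBM machine.

# Numerical illustration (not the Aalto experiment): evolving one qubit of an
# entangled pair under a non-Hermitian Hamiltonian changes the pair's
# entanglement; a Hermitian (unitary) single-qubit operation cannot.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def entanglement_entropy(psi):
    # Von Neumann entropy (in bits) of one qubit of a two-qubit pure state.
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_a = np.trace(rho, axis1=1, axis2=3)      # partial trace over the second qubit
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # maximally entangled pair

H_hermitian = sx                      # ordinary Hermitian Hamiltonian
H_nonhermitian = sx + 0.8j * sz       # illustrative non-Hermitian (gain/loss) choice

for name, H in [("Hermitian", H_hermitian), ("non-Hermitian", H_nonhermitian)]:
    U = expm(-1j * H)                             # evolve qubit A for unit time
    psi = np.kron(U, I2) @ bell                   # act on qubit A only
    psi = psi / np.linalg.norm(psi)               # renormalize (evolution may be non-unitary)
    print(f"{name:14s} evolution: entanglement entropy = {entanglement_entropy(psi):.3f} bits")

With the Hermitian choice the entropy stays at exactly one bit; with the non-Hermitian choice it drops, mirroring the kind of single-qubit control over entanglement described above.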

"The exciting thing about these results is that quantum computers are now developed enough to start using them for testing unconventional ideas that have been only mathematical so far" said Sorin Paraoanu. "With the present work, Einstein's spooky action at a distance becomes even spookier. And, although we understand very well what is going on, it still gives you the shivers."

The research also has potential applications. Several novel optical or microwave-based devices developed in recent times do seem to behave according to the new rules. The present work opens the way to simulating these devices on quantum computers.

Credit: 
Aalto University

Managing crab and lobster catches could offer long-term benefits

video: An animation showing how managing crab and lobster catches could offer long-term benefits to fishermen and the environment

Image: 
University of Plymouth

The UK's commercial fishing industry is currently experiencing a number of serious challenges.

However, a study by the University of Plymouth has found that managing the density of crab and lobster pots at an optimum level increases the quality of catch, benefits the marine environment and makes the industry more sustainable in the long term.

Published today in Scientific Reports, a journal published by the Nature group, the findings are the result of an extensive and unprecedented four-year field study conducted in partnership with local fishermen off the coast of southern England.

Over a sustained period, researchers exposed sections of the seabed to differing densities of pot fishing and monitored any impacts using a combination of underwater videos and catch analysis.

They found that in areas of higher pot density, fishermen caught 19% less brown crab and 35% less European lobster, and their catches of brown crab were on average 35 grams per individual (7%) lighter.

The effect on marine species was also significant, with two ecologically important reef species, Ross coral (Pentapora foliacea) and Neptune's Heart sea squirt (Phallusia mammillata), being 83% and 74% less abundant respectively where pot density was higher.

Researchers say the study provides evidence of a pot fishing intensity 'threshold' and highlights that commercial pot fisheries are likely to be compatible with marine conservation when managed correctly at low, sustainable levels.

The study was carried out by academics from the University's School of Biological and Marine Sciences, with funding from Defra and the Blue Marine Foundation and working with the Lyme Bay Consultative Committee.

It builds on an interim report published by Defra in 2019, and research published in October 2020 which used previously unseen footage to show the environmental impacts of pot fishing.

Dr Adam Rees, Post-Doctoral researcher and lead author on the current research, said: "The effects of bottom-towed fishing have been clearly shown as part of the University's long-term monitoring project in Lyme Bay. But before we started this research, very little was known about the precise impacts of pot fishing over a prolonged period. We have shown that - if left unchecked - it can pose threats but that changing ways of working can have benefits for species on the seabed and the quality and quantity of catches."

The study focussed on the Lyme Bay Reserve, a 206 km² area that has been protected from all bottom-towed fishing since 2008. It is part of the Lyme Bay and Torbay Special Area of Conservation, a 312 km² section of the English Channel that is predominantly fished by small boats operating out of towns and villages.

The University has been assessing the seabed recovery since 2008 and has previously demonstrated that several species have returned to the area since the MPA was introduced. Recommendations from this work have been included within the Government's 25-year Environment Plan, and a major UK government report into Highly Protected Marine Areas (HPMAs), led by former Defra Fisheries Minister Richard Benyon.

This latest study comes just days after the Marine Management Organisation (MMO) signalled its intent to ban bottom trawling at various offshore MPAs around the UK.

Dr Emma Sheehan, Associate Professor of Marine Ecology and one of the study's co-authors, said: "Over a decade ago, the fishing community in Lyme Bay realised that changing the way they fish was essential to the sustainability of their industry. We have worked closely with them ever since to take their concerns into account and attempt to provide them with solutions. This study is the latest part of our ongoing work to establish the best ways to both preserve their traditions and enhance the environment they work in."

Martin Attrill, Professor of Marine Ecology and senior author on the research, added: "The fishing industry is currently facing huge uncertainty. And we of course know that every fishing community is different. But with the drive to further enhance marine protection around the UK, some of the lessons we have learned in Lyme Bay could help other fleets make changes that can secure their long-term future."

Credit: 
University of Plymouth

Capuchin monkey genome reveals clues to its long life and large brain

An international team of scientists has sequenced the genome of a capuchin monkey for the first time, uncovering new genetic clues about the evolution of their long lifespan and large brains.

Published in PNAS, the work was led by the University of Calgary in Canada and involved researchers at the University of Liverpool.

"Capuchins have the largest relative brain size of any monkey and can live past the age of 50, despite their small size, but their genetic underpinnings had remained unexplored until now," explains Professor Joao Pedro De Magalhaes, who researches ageing at the University of Liverpool.

The researchers developed and annotated a reference assembly for white-faced capuchin monkeys (Cebus imitator) to explore the evolution of these traits.

Through a comparative genomics approach spanning a wide diversity of mammals, they identified genes under evolutionary selection associated with longevity and brain development.

"We found signatures of positive selection on genes underlying both traits, which helps us to better understand how such traits evolve. In addition, we found evidence of genetic adaptation to drought and seasonal environments by looking at populations of capuchins from a rainforest and a seasonal dry forest," said senior author and Canada Research Chair Amanda Melin who has studied capuchin monkey behaviour and genetics for almost 20 years.

The researchers identified genes associated with DNA damage response, metabolism, cell cycle, and insulin signalling. Damage to the DNA is thought to be a major contributor to ageing and previous studies by Professor de Magalhaes and others have shown that genes involved in DNA damage responses exhibit longevity-specific selection patterns in mammals.

"Of course, because aging-related genes often play multiple roles it is impossible to be sure whether selection in these genes is related to ageing or to other life-history traits, like growth rates and developmental times, that in turn correlate with longevity," said Professor De Magalhaes.

"Although we should be cautious about the biological significance of our findings, it is tempting to speculate that, like in other species, changes to specific aging-related genes or pathways, could contribute to the longevity of capuchins," he added.

The team's insights were made possible thanks to the development of a new technique to isolate DNA more efficiently from primate faeces.

FecalFACS utilises an existing technique that was developed to separate cell types in body fluids - for example, to separate different cell types in blood for cancer research - and applies it to primate faecal samples.

"This is a major breakthrough because the typical way to extract DNA from faeces results in about 95-99% of the DNA coming from gut microbes and food items. A lot of money has been spent sequencing genomes from different organisms than the mammals we're actually trying to study. Because of this, when wildlife biologists have required entire genomes, they have had to rely on more pure sources of DNA, like blood, saliva, or tissue - but as you can imagine, those are very hard to come by when studying endangered animals," explained the study's lead author, Dr Joseph Orkin, who completed work on this project as a postdoctoral scholar at the University of Calgary, and in his present location at Universitat Pompeu Fabra-CSIC in Barcelona.

"FecalFACS finally provides a way to sequence whole genomes from free-ranging mammals using readily available, non-invasive samples, which could really help future conservation efforts," he added.

Credit: 
University of Liverpool

Move over heavy goggles, here come the ultra-high refractive index lenses

image: Complex refractive index and transparency of low-loss hydrogenated amorphous silicon. The optical properties are manipulated by changing the process temperature TP and chamber pressure PC. a) The captured images show hydrogenated amorphous silicon films deposited at various TP. Scale bar: 1 cm. b,c) Measured refractive index (n) (b) and extinction coefficient (k) (c) for hydrogenated amorphous silicon films deposited under various TP. The green and orange regions represent deposition conditions with high refractive index (n) and low extinction coefficient (k).

Image: 
Junsuk Rho (POSTECH), Wiley

A POSTECH research team has developed a transparent amorphous silicon that transmits visible light - which permits us to distinguish the colors of objects - enabling the development of paper-thin lenses usable in head-mounted displays (HMD) that show virtual and augmented reality images in real time.

A research team - led by Professor Junsuk Rho of POSTECH's mechanical engineering and chemical engineering departments, and Ph.D. candidate Younghwan Yang and Dr. Gwanho Yoon of the Department of Mechanical Engineering - has developed visibly transparent amorphous silicon by improving the plasma enhanced chemical vapor deposition (PECVD) method, a practice widely used by Korean display manufacturers. The researchers also succeeded in effectively controlling the light in the visible region using the newly developed silicon. This research was recently published in Advanced Materials, the most respected international journal on materials science.

Since light bends more with higher refractive index, a material with high refractive index is essential in designing devices for virtual and augmented reality. However, most highly refractive materials tend to absorb light and when used in a device that produces an image by controlling the light - such as an ultra-thin lens or a hologram - their performance deteriorates. The optical materials presented so far have high transmittance with low refractive index, or, conversely, high refractive index and low transmittance, thereby limiting the production of lightweight and highly efficient optical devices.
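As a quick reminder of why a high index matters, Snell's law, n1 sin(theta1) = n2 sin(theta2), shows how much more sharply the same incoming ray is bent at the surface of a higher-index material. The indices in the short check below are chosen purely for illustration.

# Snell's law check of "light bends more with higher refractive index":
# the same 45-degree ray entering from air is bent far more strongly by n = 4 than by n = 1.5.
import math

theta_in = math.radians(45)
for n in (1.5, 4.0):
    theta_out = math.degrees(math.asin(math.sin(theta_in) / n))
    print(f"n = {n}: refraction angle = {theta_out:.1f} degrees")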

To this end, the research team utilized the PECVD method, a common technique for depositing amorphous silicon. While depositing the silicon using the PECVD method, the team explored each parameter of the process, such as temperature, pressure, plasma power, and hydrogen ratio, and uncovered the effect of each variable on the intermolecular bonds.

Moreover, the team discovered a method to increase the regularity between silicon atoms by inserting hydrogen atoms between strained silicon atomic bonds, and through this, the atomic structure of amorphous silicon that possesses a high refractive index and significant transmittance was identified. In addition, the researchers succeeded in steering red, green, and blue lights in the desired direction, which could not be controlled with the conventional silicon before.

Transparent amorphous silicon has the advantage of producing hologram devices or ultra-thin lenses that are one thousandth of the thickness of conventional lenses at a fraction of the cost. The applicability of the silicon has also been expanded in that the amorphous silicon, which has been used only in thermal infrared cameras, can now be used as an optical device in the visible light region.

"The discovery of an optical element capable of controlling all visible light has revealed clues about the relationship between the atomic bonding structure and the visible light region, which has not been of interest until now," explained Professor Junsuk Rho, the corresponding author who led the study. "As we can produce optical devices that can control all colors at low cost, we are now one step closer to commercializing virtual and augmented reality and hologram technologies only seen in movies."

Credit: 
Pohang University of Science & Technology (POSTECH)

Aspirin preferred to prevent blood clots in kids after heart surgery

image: Aspirin should be favoured over warfarin to prevent blood clotting in children who undergo a surgery that replumbs their hearts, according to a new study.

Image: 
jesse orrico

Aspirin should be favoured over warfarin to prevent blood clotting in children who undergo a surgery that replumbs their hearts, according to a new study.

The research, led by the Murdoch Children's Research Institute (MCRI) and published in The Journal of Thoracic and Cardiovascular Surgery, will have implications for clinicians when prescribing blood thinning medications after Fontan surgery, a complex congenital heart disease operation redirecting blood flow from the lower body to the lungs.

The Fontan procedure is offered to children born with severe heart defects, allowing the child to live with just one pumping heart chamber instead of two.

MCRI's Dr Chantal Attard said although the operation couldn't completely 'fix' the heart, most patients were able to live well into adulthood and have relatively normal lives. But she said those who have the procedure were at an increased risk of blood clots.

"Blood clots are dangerous because they can cause the heart to fail or lead to a stroke. For this reason, all patients are given blood thinning medications, with warfarin and aspirin the most common," she said.

"Warfarin can be affected by food, other medications and illness, so patients must have regular blood tests to check their warfarin levels are safe."

The study involved 121 patients enrolled in the Australian and New-Zealand Fontan (ANZ) Registry. It found stroke was common regardless of which medication the patient took. But patients on warfarin had poorer bone mineral density and were at a higher risk of bleeding.

Dr Attard said the research showed for patients who undergo Fontan surgery, and do not have additional blood clotting risk factors, aspirin should be offered over warfarin.

She said given the need for regular INR monitoring of warfarin, a shift to aspirin would also have a cost benefit to the patient and the healthcare system.

About 70,000 post Fontan patients are alive today, with this number expected to double within two decades.

Carley Clendenning's son Lachie, 7, had the Fontan procedure two years ago after being born with one heart ventricle.

She said the aspirin findings were a relief as the medication was much easier to manage and would benefit other families whose children required the procedure in future.

"Lachie has been taking warfarin ever since his surgery and there are things you have to keep on top of with this medication," she said.

"We have to monitor his blood clotting levels with regular finger prick blood tests at home and watch out for injuries because there is a greater chance of bleeding and bruises.

"In what is already a difficult time for families, this new recommendation will make things a little easier."

Credit: 
Murdoch Childrens Research Institute

Gene-based blood test for melanoma spread evaluates treatment progress

A test that monitors blood levels of DNA fragments released by dying tumor cells may serve as an accurate early indicator of treatment success in people in late stages of one of the most aggressive forms of skin cancer, a new study finds.

Led by NYU Grossman School of Medicine and Perlmutter Cancer Center researchers, the investigation looked at adults with undetectable levels of freely circulating tumor DNA (ctDNA) four weeks into drug treatment for metastatic melanoma tumors that cannot be removed surgically (unresectable). The study showed that these patients, all of whom had common genetic changes (BRAFV600 mutations) linked to cancer, were living nearly twice as long without cancer growth as those who continued to have detectable levels.

Normally, the study authors say, physicians would have to wait three months before an X-ray, CT scan, or other measures could reveal whether a tumor is growing or shrinking in response to treatment.

"Our findings suggest that levels of ctDNA may serve as a fast and reliable tool to gauge whether an anticancer medication is working," says study senior author David Polsky, MD, PhD. "The blood test results could help support continuing the current treatment strategy or else encourage patients and physicians to consider other options," adds Polsky, the Alfred W. Kopf, M.D. Professor of Dermatologic Oncology at NYU Langone Health and its Perlmutter Cancer Center.

Polsky notes that swift treatment modification could potentially be helpful in a disease as aggressive as melanoma, which kills nearly 7,000 Americans a year and is notoriously difficult to treat once it spreads to other body parts. Early feedback from a blood test might save lives, he says.

Researchers have long searched for better ways to monitor certain cancers using blood tests, or so-called biomarkers, which can be performed more easily, more often, and less expensively than imaging scans or surgical procedures and provide a clearer picture of tumor behavior over time. With melanoma, one frequently used option, a test for the presence in blood of an enzyme called lactate dehydrogenase (LDH), has had only limited success as such a tool, because non-cancer ailments such as liver injury and bone damage can also cause levels to spike. Although more specific biomarkers have been identified in cancers of the prostate, breast, and colon, the study investigators say a reliable signpost for melanoma has until now remained elusive.

According to Polsky, also a professor in the Department of Pathology at NYU Langone, early research by the same team had pointed to ctDNA as a promising candidate. This method works by targeting the most common mutations in the DNA code found in melanoma cells. This mutated DNA spills into surrounding blood as the cancer cells break down. In previous small studies, the blood test was shown to outperform LDH in predicting melanoma recurrence, as well as tracking progression of other forms of cancer.

The new investigation, publishing Feb. 11 in the journal The Lancet Oncology, was conducted over two years and is the largest analysis to date of the potential utility of this blood test for skin cancer, says Polsky.

For the study, the research team analyzed blood samples from two pivotal clinical trials involving 383 American, European, and Australian men and women. All were receiving targeted treatment with drugs dabrafenib and trametinib for unresectable metastatic melanoma tumors that had mutations in the BRAF gene, which is found in about half of all patients with the disease, investigators say. The investigators measured levels of ctDNA from patients whose tumors had this mutation before treatment and one month into therapy. As part of the clinical trial, patients received periodic checkups using CT scans until treatment ended.

Among the study's other findings were that patients with 64 or fewer copies of ctDNA per milliliter of blood before treatment were likely to respond well to therapy, surviving nearly three years on average. By contrast, levels above that threshold were linked to a significantly poorer chance of survival, with patients living just over a year.

The investigators say the blood test is highly reliable, as they were able to detect ctDNA in 93 percent of patients. These blood test results were also reproduced in a second group of patients in the same stage of the disease who were enrolled in another clinical trial.

"Although this gene-based test focuses on tumors with BRAFV600 mutations, we believe it will be similarly useful for melanomas that have other mutations, such as defects in the NRAS and TERT genes, which are also commonly mutated in this disease," study lead author Mahrukh Syeda, MS. "Ultimately, we'd like to see this test used routinely in the clinic to help guide treatment decisions," adds Syeda, a research scientist in the Department of Dermatology at NYU Langone Health.

She cautions that the blood test is not yet FDA-approved, but notes that the evidence of its accuracy and value supports future applications for approval to make it available for clinical use.

Syeda says the team next plans to evaluate the ctDNA approach in patients in earlier stages of melanoma.

Credit: 
NYU Langone Health / NYU Grossman School of Medicine

Effect of high-dose zinc, ascorbic acid supplementation vs usual care on symptom length, reduction among ambulatory patients with SARS-CoV-2 infection

What The Study Did: These findings suggest that treatment with zinc, ascorbic acid or both doesn't affect SARS-CoV-2 symptoms.

Authors: Milind Y. Desai, M.D., M.B.A., of the Cleveland Clinic in Ohio, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/ 

(doi:10.1001/jamanetworkopen.2021.0369)

Editor's Note: The article includes conflict of interest disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

'See through soil' could help farmers deal with future droughts

video: When researchers added an aqueous solution of ammonium thiocyanate, it cleared the distortion caused by the borosilicate glass beads and allowed for a clear view of the hydrogel. Video by Datta et al/Princeton University

Image: 
Datta et al/Princeton University

In research that may eventually help crops survive drought, scientists at Princeton University have uncovered a key reason that mixing material called hydrogels with soil has sometimes proven disappointing for farmers.

Hydrogel beads, tiny plastic blobs that can absorb a thousand times their weight in water, seem ideally suited to serve as tiny underground reservoirs of water. In theory, as the soil dries, hydrogels release water to hydrate plants' roots, thus alleviating droughts, conserving water, and boosting crop yields.

Yet mixing hydrogels into farmers' fields has had spotty results. Scientists have struggled to explain these uneven performances in large part because soil--being opaque--has thwarted attempts at observing, analyzing, and ultimately improving hydrogel behaviors.

In a new study, the Princeton researchers demonstrated an experimental platform that allows scientists to study the hydrogels' hidden workings in soils, along with other compressed, confined environments. The platform relies on two ingredients: a transparent granular medium--namely a packing of glass beads--as a soil stand-in, and water doped with a chemical called ammonium thiocyanate. The chemical cleverly changes the way the water bends light, offsetting the distorting effects the round glass beads would ordinarily have. The upshot is that researchers can see straight through to a colored hydrogel glob amidst the faux soil.

"A specialty of my lab is finding the right chemical in the right concentrations to change the optical properties of fluids," said Sujit Datta, an assistant professor of chemical and biological engineering at Princeton and senior author of the study appearing in the journal
Science Advances on Feb. 12. "This capability enables 3D visualization of fluid flows and other processes that occur within normally inaccessible, opaque media, such as soil and rocks."

The scientists used the setup to demonstrate that the amount of water stored by hydrogels is controlled by a balance between the force applied as the hydrogel swells with water and the confining force of the surrounding soil. As a result, softer hydrogels absorb large quantities of water when mixed into surface layers of soil, but don't work as well in deeper layers of soil, where they experience a larger pressure. Instead, hydrogels that have been synthesized to have more internal crosslinks, and as a result are stiffer and can exert a larger force on the soil as they absorb water, would be more effective in deeper layers. Datta said that, guided by these results, engineers will now be able to conduct further experiments to tailor the chemistry of hydrogels for specific crops and soil conditions.
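The study's constitutive model is not given in this release, but the balance it describes can be sketched numerically: assume a swelling pressure that decays as the gel approaches its free-swelling capacity and a confining stress that grows with burial depth, then solve for the swelling ratio at which the two balance. The functional forms and every number below are illustrative assumptions, not the Princeton group's measurements.

# Illustrative sketch of the swelling-vs-confinement balance described above.
# The functional forms and all numbers are assumptions, not the study's model.
from scipy.optimize import brentq

def swelling_pressure(Q, stiffness_kpa, Q_max):
    # Toy swelling pressure (kPa): large when barely swollen, falls to zero as Q nears Q_max.
    return stiffness_kpa * (Q_max / Q - 1.0)

def confining_stress(depth_m, bulk_density=1500.0, g=9.81):
    # Overburden stress of the surrounding soil (kPa) at a given depth.
    return bulk_density * g * depth_m / 1000.0

def equilibrium_swelling(depth_m, stiffness_kpa, Q_max):
    # Swelling ratio at which the gel's swelling pressure balances the soil's confining stress.
    f = lambda Q: swelling_pressure(Q, stiffness_kpa, Q_max) - confining_stress(depth_m)
    return brentq(f, 1.001, Q_max - 0.1)

gels = [("soft gel (few crosslinks)  ", 0.5, 1000.0),   # swells enormously when unconfined
        ("stiff gel (many crosslinks)", 5.0, 150.0)]    # pushes back harder on the soil

for label, stiffness, Q_max in gels:
    for depth in (0.05, 1.0):                           # near-surface vs. deeper soil
        Q = equilibrium_swelling(depth, stiffness, Q_max)
        print(f"{label} at {depth:4.2f} m: swells to about {Q:4.0f}x its dry volume")

In this toy picture the soft gel holds far more water near the surface but collapses at depth, while the stiffer gel keeps more of its capacity under the larger confining stress, the same qualitative pattern the researchers describe.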

"Our results provide guidelines for designing hydrogels that can optimally absorb water depending on the soil they are meant to be used in, potentially helping to address growing demands for food and water," said Datta.

The inspiration for the study came from Datta learning about the immense promise of hydrogels in agriculture but also their failure to meet it in some cases. Seeking to develop a platform to investigate hydrogel behavior in soils, Datta and colleagues started with a faux soil of borosilicate glass beads, commonly used for various bioscience investigations and, in everyday life, costume jewelry. The bead sizes ranged from one to three millimeters in diameter, consistent with the grain sizes of loose, unpacked soil.

In summer 2018, Datta assigned Margaret O'Connell, then a Princeton undergraduate student working in his lab through Princeton's ReMatch+ program, to identify additives that would change water's refractive index to offset the beads' light distortion, yet still allow a hydrogel to effectively absorb water. O'Connell alit upon an aqueous solution with a bit over half of its weight contributed by ammonium thiocyanate.

Nancy Lu, a graduate student at Princeton, and Jeremy Cho, then a postdoc in Datta's lab and now an assistant professor at the University of Nevada, Las Vegas, built a preliminary version of the experimental platform. They placed a colored hydrogel sphere, made from a conventional hydrogel material called polyacrylamide, amidst the beads and gathered some initial observations.

Jean-Francois Louf, a postdoctoral researcher in Datta's lab, then constructed a second, honed version of the platform and performed the experiments whose results were reported in the study. This final platform included a weighted piston to generate pressure on top of the beads, simulating a range of pressures a hydrogel would encounter in soil, depending upon how deep the hydrogel is implanted.

Overall, the results showed the interplay between hydrogels and soils, based on their respective properties. A theoretical framework the team developed to capture this behavior will help in explaining the confounding field results gathered by other researchers, where sometimes crop yields improved, but other times hydrogels showed minimal benefits or even degraded the soil's natural compaction, increasing the risk of erosion.

Ruben Juanes, a professor of civil and environmental engineering at the Massachusetts Institute of Technology who was not involved in the study, offered comments on its significance. "This work opens up tantalizing opportunities for the use of hydrogels as soil capacitors that modulate water availability and control water release to crop roots, in a way that could provide a true technological advance in sustainable agriculture," said Juanes.

Other applications of hydrogels stand to gain from Datta and his colleagues' work. Example areas include oil recovery, filtration, and the development of new kinds of building materials, such as concrete infused with hydrogels to prevent excessive drying out and cracking. One particularly promising area is biomedicine, with applications ranging from drug delivery to wound healing and artificial tissue engineering.

"Hydrogels are a really cool, versatile material that also happen to be fun to work with," said Datta. "But while most lab studies focus on them in unconfined settings, many applications involve their use in tight and confined spaces. We're very excited about this simple experimental platform because it is allowing us to see what other people couldn't see before."

The work was supported in part by the National Science Foundation and the High Meadows Environmental Institute at Princeton.

Credit: 
Princeton University, Engineering School

T cells depressed

T lymphocytes, or T cells, are an important component of our immune system. They can recognize foreign proteins, so-called antigens, as peptide fragments - for instance, those derived from viruses or cancer cells. In principle, they could, but usually do not, attack our own ('self') proteins. "That is why it is important for the organism to tightly control the activities of T cells," says Dr. Reinhard Obst, head of a research group at the Institute for Immunology at LMU's Biomedical Center that studies the activation of T cells. The project contributed to the Collaborative Research Center 1054 that explores the plasticity of cell fate decisions in the immune system.

When viruses gain access to our tissues, T cells are activated to eliminate the pathogens. However, if T cells are exposed to their target antigens for too long, they can lose their functionality and become 'exhausted.' They no longer secrete pro-inflammatory messenger molecules, and therefore cannot contribute much to an immune response. On the one hand, it makes sense to keep these cells under control, so as to avoid collateral damage to the organism. On the other, T cell exhaustion makes it difficult to fight chronic diseases, such as those caused by HIV, hepatitis viruses and cancer cells. Understanding immune responses to chronically persisting threats like these is thus one of modern medicine's great challenges. "This is where T cell exhaustion plays a central role," the LMU researcher says.

A new model to study T cell exhaustion

Several years ago, Obst developed an animal model which has now yielded important insights. He focused on T helper cells, which express the CD4 marker molecule and make up the largest subset of T cells. Each of these cells recognizes a defined protein fragment as an antigen.

To control the timing and the amount of the specific antigen expressed in this model system, the LMU scientists used a trick. Their transgenic mice were exposed to different doses of the antibiotic doxycycline, which controls the synthesis of the antigen, via their drinking water. Different amounts of antigen are thus presented to the T cells in these animals, which avoids the need for experimental infection. "In this way, we are able to regulate the amount of antigen produced," Obst explains. "Our goal was to find out how the corresponding T helper cells respond."

The results showed that the effects were dose dependent: In the presence of high antigen doses, the T cells underwent apoptosis, meaning they died by programmed cell death. At an intermediate dose, however, the T cells survived but quickly lost their functionality. "We demonstrated this state of exhaustion by regulating the amount of antigen that the cells encountered," the LMU researcher explains. At a low dose, it took several weeks before the cells showed signs of exhaustion. When the antigen was subsequently removed in further experiments, the cells were able to partially recover from their exhausted state. Such dynamic adjustments convinced the researchers that T helper cells are capable of a surprising degree of plasticity.

Supporting the T cells' fight against chronic infections and cancer

Obst and his colleagues believe that their findings could have therapeutic implications. The data indicate that a number of transcription factors (proteins that control gene expression) and signaling pathways regulate the different states of exhaustion.

Two years ago, several groups showed that one of these transcription factors, named Tox, contributes significantly to the exhaustion of T killer cells, another T cell subset. When the Tox gene was deleted, the T-killer cells were less readily exhausted in a chronic infection and could more effectively fight a chronically persisting virus. However, they also attacked the host animals' organs and died sooner. The new findings suggest that there are several mechanisms in place to adjust T helper cells dynamically to different antigen loads.

Obst now hopes to identify molecules that inhibit transcription factors or signaling pathways which contribute to T cell exhaustion. This could provide a possible strategy to support T cells' fight against chronic infections and cancer and boost our natural defences against such diseases.

Credit: 
Ludwig-Maximilians-Universität München