
Rise in serious harm to children caused by powerful painkillers, says study

The proportion of high-strength painkiller poisonings among children which result in emergency hospital admissions has increased, according to research published in the peer-reviewed journal Clinical Toxicology.

A study involving more than 200,000 US paediatric cases of pain-relief misuse, abuse or self-harm highlights how the opioid crisis is affecting young people. The results show that although the number of incidents reported overall has dropped since 2005, the threat to life is rising.

The proportion of poisonings resulting in admission to a paediatric intensive care unit (PICU) rose by more than a third over the study period, from 5,203 (6.6%) of 80,141 reported poisonings between 2005 and 2009 to 4,586 (9.6%) of 48,435 between 2015 and 2018.

This trend of children ending up in intensive care is being fuelled by suspected suicide cases among under-19s who have overdosed on legal or prescription opioid drugs.

Methadone, the prescription pain-reliever fentanyl, and heroin were the drugs most often associated with the need for treatment in intensive care, according to the findings.

The researchers are calling for a strategy that combines laws to restrict access to opioids with improved mental health support for children and adolescents. Doctors who treat children and young people should continue lobbying for policy changes, they add.

"This study suggests the opioid epidemic continues to have a serious impact on pediatric patients, and the healthcare resources required to care for them," says Dr Megan Land
from Emory University School of Medicine, in Georgia, USA.

"Paediatricians caring for children with opioid ingestions must continue to strive for effective policy changes to mitigate this crisis."

Drug overdose deaths in the US have tripled in the past two decades. Those derived from the opium poppy plant (opioids) can be highly potent and account for two thirds of fatal drug poisonings.

The focus has largely been on adults, so this study set out to investigate the impact on children, specifically trends in admissions to PICUs.

The researchers consulted the National Poison Data System database for accidental or deliberate incidents of opioid exposure involving babies and children up to age 19. They found 207,543 cases were reported to 55 US poison control centres from 2005 to 2018.

Factors analysed included opioid type, cause of drug poisoning and the rate of cases admitted to psychiatric units. The study also calculated the proportion of patients who ended up in PICUs and the percentage of these requiring medical treatment.

The research suggests that the majority of child drug poisonings did not require an intensive care admission, and resulted in either minor effects, such as drowsiness, or none at all.

But the proportion needing specialist treatment did increase over the study period.

The picture was similar for psychiatric unit admissions: the percentage more than doubled, from 2,806 (3.57%) of 80,141 between 2005 and 2009 to 3,909 (8.18%) of 48,435 between 2015 and 2018. This was also the case for the proportion of intensive care admissions needing cardiopulmonary resuscitation (CPR), which went from 68 (1.31%) of 5,203 to 146 (3.18%) of 4,586 over the same period.
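As a rough, back-of-the-envelope check of the comparisons above (not part of the published study), the relative increases can be recomputed from the rounded percentages quoted in this article; the study's exact denominators may give slightly different values.

```python
# Back-of-the-envelope check using the rounded percentages quoted above;
# the study's exact denominators may give slightly different values.
shares = {
    "PICU admissions":             (6.60, 9.60),
    "Psychiatric unit admissions": (3.57, 8.18),
    "CPR among PICU admissions":   (1.31, 3.18),
}

for outcome, (early, late) in shares.items():
    relative_increase = (late - early) / early
    print(f"{outcome}: {early}% -> {late}% (up {relative_increase:.0%})")
# The PICU share rises by roughly 45% (more than a third); the other two more than double.
```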

Credit: 
Taylor & Francis Group

Finding familiar pathways in kidney cancer

(PHILADELPHIA) -- p53 is the most famous cancer gene, not least because it is mutated in over 50% of all cancers. When a cell loses its p53 gene -- when the gene becomes mutated -- it unleashes many processes that lead to the uncontrolled cell growth and refusal to die that are hallmarks of cancer. But some cancers, like kidney cancer, have few p53 mutations. To understand whether inactivation of the p53 pathway might nevertheless contribute to kidney cancer development, Haifang Yang, PhD, a researcher with the Sidney Kimmel Cancer Center - Jefferson Health, probed kidney cancer's genes for interactions with p53.

Earlier work found that PBRM1 -- the second most mutated gene in kidney cancer -- could interact with p53. However, other researchers were unable to definitively show that it was truly an important mechanism in kidney cancer.

Rather than looking at the p53 protein itself, first author Weijia Cai, a postdoctoral researcher in Dr. Yang's lab, and other collaborators looked at an activated version of p53, one that is studded with an additional chemical marker - an acetyl group - at many specific spots.

In a paper published in Nature Communications on Friday, December 20th, Dr. Yang, an assistant professor of Pathology, Anatomy and Cell Biology at Jefferson, examined whether PBRM1 can be a "reader", or translator, of the activated p53. With the help of a number of biochemical and molecular tests using both human cancer cell lines and mouse and human tumor samples, the team noticed that PBRM1 uses its bromodomain 4 to bind to p53, but only in its activated form, with the acetyl group in one specific spot. Tumor-derived point mutations in bromodomain 4 can disrupt this interaction, and the resulting mutant PBRM1 loses its ability to suppress tumor growth.

The research suggests that the second-most highly mutated gene in kidney cancer is strongly linked to a well-studied and well-understood cancer pathway. Because PBRM1 is present in other cell types and cancers, this finding might be applicable to other cancers as well.

"This shows us that even though p53 isn't directly mutated in many kidney cancers, the cancer is still disrupting p53 pathway to drive cancer initiation and growth. This suggests that there might be a therapeutic window for drugs that activate the p53 pathway, which may preferentially impact PBRM1-defective kidney tumors while sparing normal tissues," says Dr. Yang.

The next steps for the research are to identify the drug or drugs and the therapeutic window. The researchers also plan to determine whether it can be combined with other known therapeutics, and also to investigate which kidney tumor genotypes are most likely to respond to the treatment.

Credit: 
Thomas Jefferson University

Fossil soils reveal emergence of modern forest ecology earlier than previously thought

image: This video shows the rooting system of the ancient tree Archaeopteris at the Cairo fossil forest site.

Image: 
Charles Ver Straeten

While sifting through fossil soils in the Catskill region near Cairo, New York, researchers uncovered the extensive root system of 386-million-year-old primitive trees. The fossils, located about 25 miles from the site previously believed to have the world's oldest forests, are evidence that the transition toward forests as we know them today began earlier in the Devonian Period than typically believed.

"The Devonian Period represents a time in which the first forest appeared on planet Earth," says first author William Stein, an emeritus professor of biological science at Binghamton University, New York. "The effects were of first order magnitude, in terms of changes in ecosystems, what happens on the Earth's surface and oceans, in global atmosphere, CO2 concentration in the atmosphere, and global climate. So many dramatic changes occurred at that time as a result of those original forests that basically, the world has never been the same since."

Stein, along with collaborators including Christopher Berry and Jennifer Morris of Cardiff University and Jonathan Leake of the University of Sheffield, has been working in the Catskill region of New York, where in 2012 the team uncovered "footprint evidence" of a different fossil forest at Gilboa, which for many years has been termed the Earth's oldest forest. The discovery at Cairo, about a 40-minute drive from the original site, now reveals an even older forest with dramatically different composition.

The Cairo site presents three unique root systems, leading Stein and his team to hypothesize that much like today, the forests of the Devonian Period were composed of different trees occupying different places depending on local conditions.

First, Stein and his team identified a rooting system that they believe belonged to a palm tree-like plant called Eospermatopteris. This tree, which was first identified at the Gilboa site, had relatively rudimentary roots. Like a weed, Eospermatopteris likely occupied many environments, explaining its presence at both sites. But its roots had relatively limited range and probably lived only a year or two before dying and being replaced by other roots that would occupy the same space. The researchers also found evidence of a tree called Archaeopteris, which shares a number of characteristics with modern seed plants.

"Archaeopteris seems to reveal the beginning of the future of what forests will ultimately become," says Stein. "Based on what we know from the body fossil evidence of Archaeopteris prior to this, and now from the rooting evidence that we've added at Cairo, these plants are very modern compared to other Devonian plants. Although still dramatically different than modern trees, yet Archaeopteris nevertheless seems to point the way toward the future of forests elements."

Stein and his team were also surprised to find a third root system in the fossilized soil at Cairo belonging to a tree thought to only exist during the Carboniferous Period and beyond: "scale trees" belonging to the class Lycopsida.

"What we have at Cairo is a rooting structure that appears identical to great trees of the Carboniferous coal swamps with fascinating elongate roots. But no one has yet found body fossil evidence of this group this early in the Devonian." Stein says. "Our findings are perhaps suggestive that these plants were already in the forest, but perhaps in a different environment, earlier than generally believed. Yet we only have a footprint, and we await additional fossil evidence for confirmation."

Moving forward, Stein and his team hope to continue investigating the Catskill region and compare their findings with fossil forests around the world.

"It seems to me, worldwide, many of these kinds of environments are preserved in fossil soils. And I'd like to know what happened historically, not just in the Catskills, but everywhere," Says Stein. "Understanding evolutionary and ecological history--that's what I find most satisfying."

Credit: 
Cell Press

Advanced imaging tips T cell target recognition on its head

T cells are a key component of our immune system and play a critical role in protecting us against harmful pathogens, such as viruses and bacteria, and against cancers. The more we understand about how they recognise, interact with and even kill infected or cancerous cells, the closer we move to developing therapies and treatments for a range of conditions.

In a paper published today in the premier international journal Science, an Australian team of scientists led by Monash University, the Australian Research Council Centre of Excellence in Advanced Molecular Imaging and the Doherty Institute at the University of Melbourne, has redefined what we thought we knew about T cell recognition for the past 20 years.

In order to interact with other cells in the body, T cells rely on specialised receptors known as T cell receptors, which recognise fragments of viruses or bacteria bound to specialised molecules called the major histocompatibility complex (MHC) or MHC-like molecules. Over the past 20 years, the prevailing view has been that T cell receptors sit atop the MHC and MHC-like molecules for recognition.

The team of scientists characterised a new population within a poorly understood class of T cells, called gamma delta T cells, that can recognise an MHC-like molecule known as MR1. Using a high-intensity X-ray beam at the Australian Synchrotron, the scientists obtained a detailed 3D image of the interplay between the gamma delta T cell receptor and MR1, revealing an intriguing result: the gamma delta T cell receptor binds underneath the MHC-like molecule for recognition. This highly unusual recognition mechanism reshapes our understanding of how T cell receptors can interact with their target molecules and represents a major development in the field of T cell biology.

"Think of it like a flag attached to a cell. We always thought the T cells were coming along and reading that flag by sitting atop it. We have determined that instead, some T cells can approach and interact with it from underneath," said Dr Jérôme Le Nours from Monash Biomedicine Discovery Institute, co-lead author on the paper.

"These are the types of fine and important details that can change how we approach future research avenues in T cell biology," said Dr Le Nours.

"This is important because T cells are a critical weapon in our immune system, and understanding how they target and interact with cells is crucial to harnessing their power in therapies such as infection and cancer immunotherapy."

"Our study shows that MR1 is a new type of molecular target for gamma delta T cells. These cells play a decisive role in immunity to infection and cancer, yet what they respond to is poorly understood. MR1 may be signalling to gamma detla T cells that there is a virus, or cancer cell and triggering these cells to initiate a protective immune response" said University of Melbourne Dr Nicholas Gherardin from the Doherty Institute, co-lead author on the paper.

"We're very excited to follow up these findings in studies that will aim to harness this new biology in disease settings."

Credit: 
University of Melbourne

Comparing future risks associated with gastric bypass and gastric sleeve surgery

(Boston, MA) - Research from the Harvard Pilgrim Health Care Institute finds that gastric bypass is associated with a higher risk of additional operations or other invasive procedures, compared to a gastric sleeve procedure. The study, "Risk of Operative and Nonoperative Interventions Up to 4 Years After Roux-en-Y Gastric Bypass vs Vertical Sleeve Gastrectomy in a National US Commercial Insurance Claims Database," appears online December 18 in JAMA Network Open.

Bariatric surgery is the most effective weight loss treatment for patients with severe obesity. The surgery provides significant health benefits in terms of improvement in weight-related conditions such as diabetes and high blood pressure. However, after bariatric surgery, some patients require additional surgical procedures to deal with surgical complications or problems that arise as a result of weight loss. Avoiding these return trips to the operating room is a high priority for both patients and surgeons. Roux-en-Y gastric bypass (bypass) and vertical sleeve gastrectomy (sleeve) are the two most common bariatric procedures currently performed, yet few large studies have compared the risk of repeat surgeries or other kinds of invasive interventions after these procedures. Knowledge about risk of these outcomes could help patients and surgeons make a more informed choice between the two procedures.

Researchers used data from a large national health insurance plan to determine whether patients have greater risk of additional operations or procedures after the bypass versus sleeve procedures. The study population consisted of 13,027 U.S. adults age 18-64 years who underwent an initial bypass or sleeve procedure between 2010 and 2017. The researchers identified instances of new abdominal surgeries or other types of procedures after the initial bariatric surgery, and compared the risk of these additional procedures between bypass and sleeve patients. The study results showed that patients undergoing sleeve gastrectomy had a lower overall risk of subsequent operative and nonoperative interventions up to 4 years after their initial bariatric surgery.

"It's important for patients to understand not only the many benefits of bariatric surgery, but also the risks, including the possible need for more surgery down the road," said Frank Wharam, senior author and Associate Professor of Population Medicine at the Harvard Pilgrim Health Care Institute and Harvard Medical School. Dr. Kristina Lewis, first author and Assistant Professor at Wake Forest School of Medicine adds, "Bariatric surgery has definitely become much safer for patients over the years, but our findings underscore that there are still risks involved. Comparing risks between bariatric procedure types should be part of the shared decision making process with patients who are considering one of these surgeries."

Credit: 
Harvard Pilgrim Health Care Institute

Study: yes, even wild tigers struggle with work/life balance

image: Camera trap image of Varvara, the tigress who struggled with work/life balance.

Image: 
WCS

VLADIVOSTOK, Russia (December 19, 2019) - A new study by a team of Russian and American scientists revealed the first-ever detailed analysis of a tigress from the birth of her cubs through their first four months. What did they find? Tiger motherhood involves lots of frantic running around, big meals instead of small ones, and constantly checking on the little ones.

Publishing their findings in the journal Mammal Research, the scientists equipped an Amur tigress they named Varvara with a GPS collar in the Russian Far East and followed her for eight months: four months before she gave birth to her cubs and four months afterwards.

Authors include: Yury Petrunenko of the Far Eastern Branch of Russian Academy of Sciences; Ivan Seryodkin of Far Eastern Federal University; Eugenia Bragina and Dale Miquelle of the Wildlife Conservation Society (WCS); and Svetlana Soutyrina, Anna Mukhacheva, and Nikolai Rybin of the Sikhote-Alin Biosphere Reserve.

The authors found that after having cubs, Varvara immediately shrank her home range. She then spent less time moving, but when she did move, it was at a much faster rate, reducing time away from home and keeping the cubs safe from predators such as leopards, lynx, bears, and wolves. And when it was time to return to the den site, she made a "beeline" directly to it, moving much faster than during other types of movement, such as hunting.

"Female tigers face three major constraints when they rear cubs: they must balance the costs of home range maintenance, they must obtain adequate food to feed themselves (and then the cubs as well as they get older), and they must protect cubs from predation," said Yuri Petrunenko, lead author of the article. "To protect cubs, they must stay near the den; but to eat, they must leave the den to find, kill, and consume prey, during which time they must be away from the cubs, who face high risks of predation while their mother is out hunting."

When Varvara's cubs were young, she killed larger prey than normal, presumably to reduce hunting time, allowing more time at the den with cubs. Once the cubs left the den site (at about two months of age) she was able to spend much more time with them since she could bring them to kills.

Said co-author Dale Miquelle, Director of WCS's Russia Program: "This study shows that even tigers struggle balancing 'work' and 'family time.' It's a constant balancing act to keep their cubs safe while trying to keep themselves fed."

Credit: 
Wildlife Conservation Society

Amazon forest regrowth much slower than previously thought

image: Secondary forests are increasingly fragmented, and isolated from remaining primary forests.

Image: 
Marizilda Cruppe/Rede Amazônia Sustentável

The regrowth of Amazonian forests following deforestation can happen much slower than previously thought, a new study shows.

The findings could have significant implications for climate change predictions, as the ability of secondary forests to soak up carbon from the atmosphere may have been over-estimated.

The study, which monitored forest regrowth over two decades, shows that climate change, and the wider loss of forests, could be hampering regrowth in the Amazon.

By taking large amounts of carbon from the atmosphere, forests regrowing after clear-felling - commonly called secondary forests - have been thought to be an important tool in combatting human-caused climate change.

However, the study by a group of Brazilian and British researchers shows that even after 60 years of regrowth, the studied secondary forests held only 40% of the carbon in forests that had not been disturbed by humans. If current trends continue, it will take well over a century for the forests to fully recover, meaning their ability to help fight climate change may have been vastly overestimated.
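To see where the "well over a century" figure comes from, a simple linear extrapolation from the reported numbers is sketched below. This is an illustration under an assumed constant average accumulation rate, not the paper's own projection method.

```python
# Illustrative linear extrapolation only: assumes carbon accumulates at the
# same average rate observed so far, which real regrowth need not follow.
years_observed = 60        # years of regrowth covered by the study
fraction_recovered = 0.40  # share of undisturbed-forest carbon reached

years_to_full_recovery = years_observed / fraction_recovered
print(f"~{years_to_full_recovery:.0f} years to reach 100%")  # ~150 years, i.e. well over a century
```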

The study, published in the journal Ecology, also shows that secondary forests take less carbon from the atmosphere during droughts. Yet, climate change is increasing the number of drought-years in the Amazon.

First author Fernando Elias from the Federal University of Pará explained: "The region we studied in the Amazon has seen an increase in temperature of 0.1 C per decade, and tree growth was lower during periods of drought. With predictions of more drought in the future, we must be cautious about the ability of secondary forests to mitigate climate change. Our results underline the need for international agreements that minimise the impacts of climate change."

Beyond helping fight climate change, secondary forests can also provide important habitat for threatened species. However, the researchers found that biodiversity levels in the secondary forests were only 56% of those seen in local undisturbed forests, with no increase in species diversity during the 20 years of monitoring.

Many nations have made large reforestation pledges in recent years, and Brazil committed to restoring 12 million hectares of forest under the Paris climate agreement. Taken together, these results suggest that such large forest restoration pledges need to be accompanied by firmer action against deforestation of primary forests, and careful consideration of where and how to reforest.

The research was undertaken in Bragança, Brazil, the oldest deforestation frontier in the Amazon, which has lost almost all of its original forest cover.

Biologist Joice Ferreira, a researcher at the Brazilian Agricultural Research Corporation, said: "Our study shows that in heavily deforested areas, forest recovery needs additional support and investment to overcome the lack of seed sources and seed-dispersing animals. This is different from other areas we have studied where historic deforestation is much lower and secondary forests recover much faster without any human intervention."

Jos Barlow, Professor of Conservation Science at Lancaster University in the United Kingdom, points out the need for more long-term studies. He said: "Secondary forests are increasingly widespread in the Amazon, and their climate change mitigation potential makes them of global importance. More long-term studies like ours are needed to better understand secondary forest resilience and to target restoration to the areas that will do most to combat climate change and preserve biodiversity."

Credit: 
Lancaster University

Addressing committed emissions in both US and China requires carbon capture and storage

Stabilizing global temperatures will require deep reductions in carbon dioxide (CO2) emissions worldwide. Recent integrated assessments of global climate change show that CO2 emissions must approach net-zero by mid-century to avoid exceeding the 1.5°C climate target. However, "committed emissions," those emissions projected from existing fossil fuel infrastructure operating as they have historically, already threaten that 1.5°C goal. With the average lifespan of a coal plant being over 40 years, proposed or under-construction power plants only add to that burden, further increasing the challenge of achieving net-zero emissions by 2050.

The deep decarbonization required for net-zero emissions will require existing and proposed fossil-energy infrastructure to follow one of two pathways: either retiring prematurely or capturing and storing its emissions, thus preventing their release into the atmosphere. Carbon capture and storage (CCS) represents the only major viable path for fossil-fuel plants to reach net-zero, short of being shuttered.

In a Viewpoint Article recently published in Environmental Science & Technology, Haibo Zhai outlines how the U.S. and China, the world's two largest emitters, should address their committed emissions. "In both countries, CCS retrofits to existing infrastructure are essential for reducing emissions to net-zero," said Zhai, an Associate Research Professor of Engineering and Public Policy at Carnegie Mellon University. However, differences in the power-plant fleets and the energy mix in the two countries point to separate routes for achieving deep decarbonization.

In the U.S., the energy landscape has changed dramatically over the past two decades. Coal was the dominant source of electricity (51% of total power generation in 2000) for most of the twentieth century, but has recently been displaced by cheap and abundant natural gas as well as growth in renewables. Coal accounted for just 27% of U.S. power generation in 2019. Coal's decline is expected to continue in favor of cheaper alternatives, due to the U.S.'s relatively old (40 years) and inefficient (32% efficiency) fleet of coal-fired plants.

Zhai does not see CCS retrofits to U.S. coal plants as a fleet-wide approach to decarbonization, though there is potential for partial capture at the most efficient plants. CCS development should instead be focused on retrofits of combined-cycle natural gas plants. "Natural gas has helped reduce the carbon intensity of the U.S. power sector, but this wave of new gas plants still represents a significant amount of committed emissions," said Zhai.

China is the opposite of the U.S. in terms of its energy mix and its fossil energy infrastructure. Coal supplies almost 65% of the nation's electricity. Coal-fired plants in China have a median age of only 12 years and much higher efficiencies (often greater than 40%) compared to the U.S. "Such a young fleet is unlikely to be phased out anytime soon," said Zhai. "Any path for China to achieve deep decarbonization must include CCS retrofits to its recently-built coal plants."

Despite the necessity of CCS, the technology has not been proven on a large scale and remains very costly. Only two commercial-scale CCS projects currently operate in the world: Petra Nova in the U.S. and Boundary Dam in Canada. Current CCS technologies have high energy and capital costs associated with separating CO2 out of process waste streams.

CCS, according to Zhai, currently sits on the steep part of the "learning curve." With any technology, first-of-a-kind deployments are expensive. However, industry-wide learning--through technology developments such as improved separation materials and processes, supply chain expansion, and increases in operational efficiency--makes later deployments cheaper. Moving down the learning curve represents a kind of chicken-and-egg dilemma for CCS: to be widely deployed, it needs to be cheap, and for CCS to be cheap, it needs to have been deployed.
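The "learning curve" idea can be made concrete with the standard experience-curve relationship used in energy-technology cost studies, in which unit cost falls by a fixed learning rate with each doubling of cumulative deployment. The functional form and all the numbers below are illustrative assumptions, not figures from Zhai's article.

```python
import math

# Generic experience-curve sketch: unit cost falls by a fixed "learning rate"
# with every doubling of cumulative deployment. All numbers are placeholders.
def unit_cost(cumulative, first_unit_cost=100.0, learning_rate=0.15):
    """Cost of capacity once cumulative deployment reaches `cumulative` units."""
    b = -math.log2(1.0 - learning_rate)   # experience-curve exponent
    return first_unit_cost * cumulative ** (-b)

for doublings in range(5):                # 1, 2, 4, 8, 16 units deployed
    n = 2 ** doublings
    print(f"cumulative capacity {n:>2}: unit cost {unit_cost(n):.1f}")
# With a 15% learning rate, cost falls from 100 to ~52 after four doublings --
# but only if those early, expensive units actually get built.
```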

Therefore, there is a strong reason that governments should incentivize early adoption of CCS through regulatory, economic, and policy means, argues Zhai. He points to case studies of other low-carbon technologies, like photovoltaic solar panels, that have become cost-competitive after incentives helped lower high initial costs. Because early deployment is required to make future deployment economical, the time to act is now, he says.

"If you accept the premise that committed emissions are a problem, there is no choice other than CCS," he said. "And incentives are required to kick-start deployment of CCS on the scale needed to address the issue."

In the U.S., Zhai points to incentives for retrofitting natural gas-fired plants with CCS, expecting market forces to address the committed emissions from coal-fired plants, as the aging coal fleet continues to phase out. Zhai's article points to a tax credit for carbon sequestration in the U.S. as a major policy lever to incentivize these CCS efforts.

In China, on the other hand, CCS development for coal plant retrofits should be the major focus. There, Zhai notes that the national emissions trading system, where emitters can buy or sell CO2 emissions credits, will be the major policy lever that can spur development of mitigation technologies. In both cases, the current high costs of CCS point to government policies as a key step in overcoming the expensive initial phases of deployment.

Major co-benefits of incentivizing CCS development for existing fossil-fuel infrastructure are the role CCS will likely play in certain negative emissions technologies (NETs) and a decreased dependence on expensive NETs in the future. Bioenergy with CCS (BECCS), for instance, is outlined as the most prominent NET option, and a key subsystem of any BECCS plant is CCS. Developing CCS now, argues Zhai, means that BECCS will be poised to help address global climate change in the future.

Credit: 
College of Engineering, Carnegie Mellon University

Penn researchers predict 10-year breast cancer recurrence with MRI scans

Diverse diseases like breast cancer can present challenges for clinicians, specifically on a cellular level. While one patient's tumor may differ from another's, the cells within the tumor of a single patient can also vary greatly. This can be problematic, considering that an examination of a tumor usually relies on a biopsy, which only captures a small sample of the cells.

According to a new study from researchers at Penn Medicine, Magnetic Resonance Imaging (MRI) and an emerging field of medicine called radiomics -- which uses algorithms to extract a large amount of features from medical images -- could help to characterize the heterogeneity of cancer cells within a tumor and allow for a better understanding of the causes and progression of a person's individual disease. The findings were published in Clinical Cancer Research.

"If we're only taking out a little piece of a tissue from one part of a tumor, that does not give the full picture of a person's disease and of his or her response to specific therapies," said principal investigator Despina Kontos, PhD, an associate professor of Radiology in the Perelman School of Medicine at the University of Pennsylvania. "We know that in a lot of instances, patients are over-treated, getting therapy that may not be beneficial. Or, conversely, patients who need more aggressive therapy may not end up receiving it. The method we currently have for choosing the appropriate treatment for patients with breast cancer is not perfect, so the more steps we can take toward more personalized treatment approaches, the better."

Kontos and her colleagues wanted to determine whether they could use imaging and radiomics for more personalized tumor characterization. Using MRI, the researchers extracted 60 radiomic features, or biomarkers, from 95 women with primary invasive breast cancer. After following up with the patients 10 years later, the group found that a scan that showed high tumor heterogeneity at the time of diagnosis -- meaning a high diversity of cells -- could successfully predict a cancer recurrence.

"Our study shows that imaging has the potential to capture the whole tumor's behavior without doing a procedure that is invasive or limited by sampling error," said the study's lead author Rhea Chitalia, a PhD candidate in the School of Engineering and Applied Science at the University of Pennsylvania. "Women who had more heterogeneous tumors tended to have a greater risk of tumor recurrence."

The researchers retrospectively analyzed patient scans from a 2002-2006 clinical trial conducted at Penn Medicine. For each woman, the group generated a "signal enhancement ratio" (SER) map and, from it, extracted various imaging features in order to understand the relationship between those features, conventional biomarkers (such as gene mutations or hormone receptor status), and patient outcomes.
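To give a concrete sense of what one radiomic "heterogeneity" feature can look like, the sketch below computes the entropy of SER values inside a tumour mask: a wider spread of enhancement values yields higher entropy. The study's actual 60 features are not listed in this article, so the feature choice, the array names and the toy data are assumptions for illustration only.

```python
import numpy as np

def ser_entropy(ser_map, tumor_mask, n_bins=32):
    """Shannon entropy of SER values inside the tumour mask (illustrative feature only).

    Higher entropy = a wider spread of enhancement values, i.e. a more
    heterogeneous-looking tumour on the SER map.
    """
    values = ser_map[tumor_mask]            # voxels inside the tumour
    counts, _ = np.histogram(values, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                            # drop empty bins
    return float(-(p * np.log2(p)).sum())

# Toy data standing in for a real SER map and a segmented tumour region.
rng = np.random.default_rng(0)
ser_map = rng.gamma(shape=2.0, scale=1.0, size=(64, 64))
tumor_mask = np.zeros((64, 64), dtype=bool)
tumor_mask[20:40, 20:40] = True
print(f"SER entropy inside the mask: {ser_entropy(ser_map, tumor_mask):.2f}")
```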

They found that their algorithm was able to successfully predict recurrence-free survival after 10 years. To validate their findings, the group compared their results to an independent sample of 163 patients with breast cancer from the publicly available Cancer Imaging Archive.

While imaging may not completely replace the need for tumor biopsies, radiologic methods could augment what is currently the "gold standard" of care, Kontos said, by giving a more detailed profile of a patient's disease and guiding personalized treatment. Next steps for the research team will include expanding the analysis to a larger patient cohort and also further exploring which specific markers are more predictive of particular outcomes.

"We've just touched the tip of the iceberg," Kontos said. "Our results and the validation study give us confidence that there are many opportunities for these markers to be used in a prognostic and potentially a predictive setting."

Credit: 
University of Pennsylvania School of Medicine

A new role for a triple-negative breast cancer target

image: Led by Penn Vet scientists, a new study reveals that the protein deltaNp63, which fosters the initiation and progression of triple-negative breast cancer, also helps fuel mammary gland development during puberty in mice. Without it (right panel), the mammary duct had altered structure.

Image: 
Ajeya Nandi and Rumela Chakrabarti/University of Pennsylvania

Unlike almost every other organ, the mammary gland does not develop until after birth. And it's unusually dynamic, shape-shifting during menstrual cycles, puberty, pregnancy, and lactation.

These changes require energy. In a study using a new, genetically altered mouse model, researchers led by Rumela Chakrabarti of Penn's School of Veterinary Medicine have uncovered a key protein involved in supplying the mammary gland with fuel during puberty. It's a protein that her group had earlier shown to play a role in triple-negative breast cancer (TNBC), a particularly aggressive form of the disease.

Besides illuminating an important feature of mammalian biological development, the findings also give reassurance that targeting this protein, known as deltaNp63, to treat cancer in adults could be done without interfering with critical developmental stages that occur later in life.

"Creating a new mouse model that allows us to control when p63 is expressed enabled us to study this molecule in different developmental stages," Chakrabarti says of the work, published in the journal FEBS Letters. "The fact that it is not required later on after puberty means that it's a viable drug target for triple-negative breast cancer. And we think it could be applicable to other cancers, like squamous cell carcinomas and esophageal cancer as well."

Chakrabarti has focused on this molecule since her postdoctoral fellowship, revealing different features of its involvement in the mammary gland stem cells that give rise to every other cell type in the mammary gland tissue. In 2014, Chakrabarti and colleagues found it was important in initiating TNBC, and last year they demonstrated that it acts to direct a type of immune cell to breast tumors, serving to aggravate the progression and spread of cancer.

"We've found that this molecule is like a master regulator," says Chakrabarti. "It can regulate the tumor cells' stem cell activity, and it can regulate the immune cells around the tumor cells. But we also wanted to know how it acted in normal cells."

To do that, the researchers fashioned a new strain of mice in which they could deplete the animals of deltaNp63 as desired. With this mouse model in hand, they were able to assess how deleting that gene affected the mammary gland.

While inducing the deletion of deltaNp63 during pregnancy and adulthood had no significant effect on mammary gland development and function, the team found that significant impacts arose when the deletion occurred during puberty.

"It may be that the initial burst of energy that is required during puberty depends on deltaNp63, but once you get through that, it isn't as critical," says Chakrabarti.

Losing the protein during puberty led to a reduction in energy production in the mammary gland cells and caused mammary gland ducts to be malformed. Further analysis suggests that deltaNp63 likely activates other proteins that are involved in both cellular metabolism and in the organization of cell structure during puberty.

"We already knew that p63 was important for mammary gland stem cells, but we didn't know that it was involved in regulating the cells' metabolism," Chakrabarti says. "Mammary stem cells have a high energy need during the extensive tissue remodeling that occurs during puberty. Cancer cells also have a high energy need. So this finding helps tie together a number of roles that p63 seems to be playing in the mammary gland."

In follow-up work, Chakrabarti's lab is investigating the connection between metabolism and TNBC, with an eye toward pursuing deltaNp63 as a possible therapeutic target to slow down the spread of disease.

Credit: 
University of Pennsylvania

A surprising new source of attention in the brain

image: PITd, a newly discovered area for attentional control, is well connected to two previously known attention areas in the brain.

Image: 
Laboratory of Neural Systems at The Rockefeller University

As you read this line, you're bringing each word into clear view for a brief moment while blurring out the rest, perhaps even ignoring the roar of a leaf blower outside. It may seem like a trivial skill, but it's actually fundamental to almost everything we do. If the brain weren't able to pick and choose what portion of the incoming flood of sensory information should get premium processing, the world would look like utter chaos--an incomprehensible soup of attention-hijacking sounds and sights.

Meticulous research over decades has found that the control of this vital ability, called selective attention, belongs to a handful of areas in the brain's parietal and frontal lobes. Now a new study suggests that another area in an unlikely location--the temporal lobe--also steers the spotlight of attention.

The unexpected addition raises new questions in what has long been considered a settled scientific field. "The last time an attention controlling area was discovered was 30 years ago," says Winrich Freiwald, head of Rockefeller's Laboratory of Neural Systems, who published the findings in the Proceedings of the National Academy of Sciences. "This is a fundamental discovery that might require a rethinking of old concepts about attentional control."

A serendipitous discovery

Freiwald and his colleague Heiko Stemmann at the University of Bremen in Germany first encountered this brain area during an experiment a few years ago. They were studying brain activation in monkeys engaged in a task that requires maintaining focus on a subset of rapidly moving dots on a screen. As expected, visual areas specializing in motion detection, as well as areas known for selective attention, lit up on brain scans.

But there was also area PITd, named for its location in the dorsal part of the posterior inferotemporal cortex, whose activation the scientists couldn't quite explain. "All of the areas we found made sense, except for this one," Freiwald says.

Not only was PITd not known to contain any motion-sensitive neurons, it also didn't appear particularly sensitive to other types of visual information, suggesting it wasn't a sensory processing area. So in the new study, the scientists asked whether this mysterious brain area might be controlling attention. It seemed like a long shot, as PITd was far away from the classic attention areas. "But we took the bet," Freiwald says.

Landscape of attention

The brain's attention areas hold an internal map of the outside world, a kind of control panel that ensures we are directing our data-processing resources toward the small part of the world that's relevant to our goal at any given time. A telling sign of an attention control area is that its neurons don't care about what we are looking at--a flying bird, a pitched ball, a single word on a page full of words--only where that thing is. Its neurons code for a specific area in our field of vision, only firing when that part is being attended to.

So the scientists decided to test whether PITd contained any such neurons. They randomly selected about 200 neurons, hoping that at least some of them would turn out to be location specific, responding exclusively to one part of the screen with moving dots that the monkeys in the experiment were looking at.

At the first recording session, Freiwald recalls pacing nervously, staring at the monitors that track the electrical activity of the neurons and play it back as sound. But his anxiety soon turned to disbelief. The results were just too good: the first randomly picked neuron showed a strong preference for a specific location. And so did the second neuron. And then the third. "It was absolutely mind-boggling," Freiwald says. "We figured out that one of us could close his eyes and tell just by listening to the neuron's response whether the subject is paying attention to the left or right part of the screen. That's how strong the signal was."

The signal could even predict when the monkeys would make a mistake because they weren't paying attention to the right spot. And as closely as these PITd neurons tracked the locus of attention, they ignored what was actually happening on the screen--another feature of an attention area. Unlike a typical sensory neuron, their activity remained the same even if the moving dots changed direction or color.
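To illustrate how strongly such location-selective firing can signal the locus of attention, here is a toy decoder built on simulated firing rates. The rates, the threshold rule and the neuron itself are illustrative assumptions, not the analysis reported in the paper.

```python
import random

# Simulated firing rates (spikes/s) for one hypothetical PITd-like neuron
# whose preferred location is the LEFT half of the display.
random.seed(1)
attend_left  = [random.gauss(40, 5) for _ in range(50)]   # attention inside its field
attend_right = [random.gauss(15, 5) for _ in range(50)]   # attention elsewhere

# Midpoint of the two mean rates serves as a simple decision threshold.
threshold = (sum(attend_left) / 50 + sum(attend_right) / 50) / 2

def decode(rate):
    """Guess the attended side of the screen from a single trial's firing rate."""
    return "left" if rate > threshold else "right"

correct = sum(decode(r) == "left" for r in attend_left) \
        + sum(decode(r) == "right" for r in attend_right)
print(f"Toy decoding accuracy: {correct / 100:.0%}")
```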

Lastly, the scientists stimulated PITd to artificially activate it. "We could improve the animal's performance," Freiwald says. "That for me is the linchpin in making a strong claim that this area controls attention."

A new outlook

Area PITd may have been long overlooked because most research efforts have focused on the first areas discovered to control attention. "You wouldn't know if a little over to the right you might find something that's even more interesting," Freiwald says.

But why this area exists at all is still an open question, with peculiar mysteries. Neuroscientists have long held that we pay attention to the world through two distinct networks, one concerned with figuring out "what" we see, while the other finds out "where" we see it. When something jumps out in the environment, like a red traffic light, it grabs our attention through the "what" network. In contrast, when something requires our deliberate attention, the "where" network gets involved. PITd appears to help with the latter kind of attention, but it's situated among the "what" areas. In other words, it doesn't quite fit the description of either of these two networks--rather, it seems to fall somewhere in between.

The eccentricities of PITd may be a clue that the classical account of attention is not the full story, Freiwald says. More than just adding to the list of attention control areas, it might actually challenge scientists to rethink how some aspects of our brains are organized.

Credit: 
Rockefeller University

When it's story time, animated books are better for learning

image: Mother and child reading

Image: 
Carnegie Mellon University

Researchers at Carnegie Mellon University found that digital storybooks that animate upon a child's vocalization offer beneficial learning opportunities, especially for children with less developed attention regulation.

"Digital platforms have exploded in popularity, and a huge proportion of the top-selling apps are educational interfaces for children," said Erik Thiessen, associate professor of Psychology at CMU's Dietrich College of Humanities and Social Sciences and senior author on the paper. "Many digital interfaces are poorly suited to children's learning capacities, but if we can make them better, children can learn better."

The results are available in the December 19 issue of the journal Developmental Psychology.

Shared book reading is a quiet moment that provides a child with the fundamental foundation for developing reading and language skills. The rise of digital platforms, such as electronic books, computers, smartphones and tablets, has raised concerns that children may be missing out on this key learning experience.

"Children learn best when they are more involved in the learning process," Thiessen said. "It is really important for children to shape their environment through their behavior to help them learn."

The researchers constructed the study in three parts that build on previous results. In the first experiment, an adult read to the child from either a traditional hardboard book or a digital book. On the digital platform, the pertinent noun or verb and a relevant image were animated upon the child's first vocalization. The researchers found that recall improved with the digital platform compared to the traditional book (60.20 percent versus 47.35 percent, respectively).

"This kind of contingent responsiveness from our digital book (or from a parent or teacher) is rewarding. And reward has lots of positive effects on learning. As we get reinforcement, the brain releases dopamine that can serve as a signal for learning at the synaptic level," Thiessen said. "At the cognitive level, reward promotes maintenance of attention to help the child focus on what is important, which could be especially important for children who have less well developed attentional control."

The second experiment delved deeper to evaluate whether the positive results from the first experiment were an artifact of the novel experience of using a digital platform. The researchers compared two digital books -- one static and one animated. Again, children's recall improved with the animated digital platform (64.72 percent versus 45.89 percent, respectively).

Finally, the researchers explored the role of animations in recall and attention. They compared two digital storybooks -- one that animates at the start of the page and one that animates upon an appropriate child vocalization. Children's recall was higher for the digital book that animated with the child's vocalization (59.42 percent compared to 45.13 percent).

In every experiment, children experienced better recall for stories when they were able to exert active control on the animations in the storybook. According to Thiessen, positive reinforcement enhances the learning experience as do the animated visuals, which integrate nonverbal information and language into the mix. This approach was also particularly beneficial for children who experience difficulty focusing.

"Contingent positive reinforcement may be especially useful for children with lower attentional control because it facilitates learning by directing children's attention to relevant content," said Cassondra Eng, graduate student in Thiessen's lab and first author on the paper. "The contingent responses may provide children with feelings of accomplishment and may serve as positive reinforcement, which in turn enhance learning."

Each experiment consisted of a unique cohort of approximately 30 young children (3-5 years old). For each iteration, the children were read two books (each 14 pages) following a structured approach by the adult reader. After the story, the child was asked ten questions to evaluate story recall.

While this study found animated digital storybooks to be beneficial for children, especially children with lower attention skills, it did not explore why this approach is advantageous. The study did require children to recall information through identification and description, which has proven to be a valid approach for assessing a child's competency in understanding a story.

Speech recognition software was inadequate to animate the digital platforms automatically. For the study, a researcher initiated the animations manually upon an appropriate vocalization from the child. Current work in the lab is focused on building an interface with fully automatic speech recognition capacities.

Credit: 
Carnegie Mellon University

Is there a link between lifetime lead exposure and dementia?

Toronto, ON -- To the medical community's surprise, several studies from the US, Canada, and Europe suggest a promising downward trend in the incidence and prevalence of dementia. Important risk factors for dementia, such as mid-life obesity and mid-life diabetes, have been increasing rapidly, so the decline in dementia incidence is particularly perplexing.

A new hypothesis by University of Toronto Professor Esme Fuller-Thomson, recently published in the Journal of Alzheimer's Disease, suggests that the declining dementia rates may be a result of generational differences in lifetime exposure to lead. U of T pharmacy student ZhiDi (Judy) Deng co-authored the article.

"While the negative impact of lead exposure on the IQ of children is well-known, less attention has been paid to the cumulative effects of a lifetime of exposure on older adults' cognition and dementia," says Fuller Thomson, director of the Institute of Life Course and Aging and professor at the Factor-Inwentash Faculty of Social Work. "Given previous levels of lead exposure, we believe further exploration of the of this hypothesis is warranted."

Leaded gasoline was a ubiquitous source of air pollution between the 1920s and 1970s. As it was phased out, beginning in 1973, levels of lead in citizens' blood plummeted. Research from the 1990s indicates that Americans born before 1925 had approximately twice the lifetime lead exposure as those born between 1936 and 1945.

"The levels of lead exposure when I was a child in 1976 were 15 times what they are today," says Fuller-Thomson, who is also cross appointed with U of T's Faculty of Medicine. "Back then, 88 per cent of us had blood lead levels above 10 micrograms per deciliter. To put this numbers in perspective, during the Flint Michigan water crisis of 2014, one per cent of the children had blood lead levels above 10 micrograms per deciliter."

Lead is a known neurotoxin that crosses the blood-brain barrier. Animal studies and research on individuals occupationally exposed to lead suggest a link between lead exposure and dementia. Other studies have shown a higher incidence of dementia among older adults living closer to major roads and among those with a greater exposure to traffic related pollution.

Fuller-Thomson and Deng are particularly interested in a potential link between lifetime lead exposure and a recently identified subtype of dementia: Limbic-predominant Age-related TDP-43 Encephalopathy (LATE), whose pathological features have been identified in 20 per cent of dementia patients over the age of 80.

Other plausible explanations for the improving trends in dementia incidence include higher levels of educational attainment, lower prevalence of smoking, and better control of hypertension among older adults today compared to previous generations. However, even when these factors are statistically accounted for, many studies still find the incidence of dementia declining.

The authors suggest that next steps to assess the validity of this hypothesis could include: comparing 1990s assessments of blood lead levels to current Medicare records; assessing lead levels in teeth and tibia bones (which serve as proxies for lifetime exposure) when conducting post-mortems of brains for dementia; and examining the association between particular gene variants associated with higher lead uptake and dementia incidence.

"If lifetime lead exposure is found to be a major contributor to dementia, we can expect continued improvements in the incidence of dementia for many more decades as each succeeding generation had fewer years of exposure to the neurotoxin," says Deng.

Credit: 
University of Toronto

Pollution league tables for UK urban areas reveal the expected and unexpected

The Bedfordshire town of Luton has come bottom of a league table of predicted city-wide air pollution concentrations among UK cities, according to new analysis by the Universities of Birmingham and Lancaster.

Although Luton's air pollution emissions are about as expected for its population, the town's compactness limits dispersal of pollution, meaning it drops to last place among the 146 most populous UK places in terms of predicted air pollution concentrations.

At the other end of the scale, Milton Keynes and Stoke-on-Trent fare much better than expected for their respective sizes, with average-to-poor emissions of air pollution mitigated substantially by better dispersal of pollution into less compact city spaces.

The new study, published in Environmental Research Letters, was carried out by researchers in the Birmingham Institute for Forest Research and colleagues in Lancaster University's Environment Centre. The team used government statistics to build relationships between a city's population, built-up area, air pollution released, and expected city-wide pollution concentrations.

The resulting relationships predict what emissions and concentrations are expected for an urban area of any population in the UK.

The team then compared the 146 most populous urban areas across the UK with their predictions to find which settlements were performing relatively better or worse than expected.

The league table for emissions measures how efficiently a city moves people and heats homes compared to the UK average for its population-size. The league table for city-wide concentrations shows how the area of a city modifies the effect of its emissions to give better- or worse-than-expected pollution concentrations across the urban area.

The study looked at a range of air pollutants but focused on traffic-generated nitrogen oxides, which are a major health concern in cities. The relationship converting government emissions statistics into city-wide pollutant concentrations was shown to be consistent with that derived for other cities from satellite measurements.
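The article does not give the exact form of the fitted relationships, but the general approach (fit the expected value for a given population, then rank places by how far they sit above or below that expectation) can be sketched as follows. The power-law form, the invented towns and the toy numbers are assumptions for illustration only.

```python
import numpy as np

# Invented towns: (population, annual NOx emissions in arbitrary units).
towns = {
    "Town A": (2.1e5, 3.0e3),
    "Town B": (1.5e5, 3.4e3),
    "Town C": (8.0e5, 9.0e3),
    "Town D": (3.0e5, 2.6e3),
}
pop  = np.array([p for p, _ in towns.values()])
emis = np.array([e for _, e in towns.values()])

# Fit log(emissions) = a + b*log(population): a simple scaling relationship.
b, a = np.polyfit(np.log(pop), np.log(emis), 1)
expected = np.exp(a) * pop ** b

# Positive residual = emitting more than expected for the town's size.
residuals = np.log(emis / expected)
for name, r in sorted(zip(towns, residuals), key=lambda item: item[1]):
    verdict = "better" if r < 0 else "worse"
    print(f"{name}: {verdict} than expected ({r:+.2f} log units)")
```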

Key findings included:

Many cities across the Midlands are performing worse than expected, both in terms of emissions and expected concentrations; both Royal Leamington Spa and Coventry start poor in terms of emissions and slide further, towards the bottom of the league, when considering expected concentrations of city-wide pollution

Other cities performing worse than expected include Crawley, Cardiff and Stevenage

London is just outside the top 20 for emissions, i.e. doing much better than expected for its size, but is only mid-table for city-wide pollution concentrations

Weybridge, Aldershot and Macclesfield, in England, along with Livingston in Scotland, benefit greatly from having space to disperse their pollutants, as do Milton Keynes and Stoke-on-Trent

Lead author Professor Rob MacKenzie explains: "What we're interested in is not just how much pollution is produced, but how much is in the air. Our new study shows how effective the particular urban form of a city - its layout and the types of building - is in dispersing the pollution.

"For example, Milton Keynes is at the top of our list, doing much better than we would expect with the biggest gap between the amount of pollution produced and the concentrations in the air we breathe. The town's middling rank for emissions reflects personal transport choices and the town's traffic management; it's much better-than-expected performance for concentrations reflects the way the city is laid out, with its distinctive mix of grids and roundabouts, and the inclusion of parks and green spaces, which all contribute to this overall effect.

"In contrast, we have Luton right down at the bottom. This is a more densely populated urban area doesn't gain much benefit from its compactness in terms of emissions and its compactness works against dispersion of pollution resulting in worse-than-expected city-wide concentrations."

Dr Duncan Whyatt of the Lancaster Environment Centre added: "London appears right in the middle of our pollution concentration table, having done well in terms of lower emissions for its size. The lower-than-expected emissions may be to do with the intense concentration of effort in moving high volumes of people through and around the city. Its well-developed public infrastructure means that, for its size, it produces lower pollution emissions than, say, Birmingham, which is still very heavily car-dependent."

This study offers valuable insights for urban planners who can start to take a closer look at the cities that do particularly well for pollution dispersal and analyse what elements should be prioritised to improve overall air quality in future city design.

"Using this type of analysis will help planners make those important decisions that find the right balance between spreading out urban development and providing sufficient green spaces, but also managing emissions by transporting people efficiently and heating homes efficiently," says Professor MacKenzie.

Credit: 
University of Birmingham

New archaeological discoveries reveal birch bark tar was used in medieval England

image: Skeleton from grave 293, Anglo-Saxon child burial.

Image: 
Oxford Archaeology East

Scientists from the University of Bristol and the British Museum, in collaboration with Oxford Archaeology East and Canterbury Archaeological Trust, have, for the first time, identified the use of birch bark tar in medieval England - the use of which was previously thought to be limited to prehistory.

Birch bark tar is a manufactured product with a history of production and use that reaches back to the Palaeolithic. It is very sticky and water resistant, and its biocidal properties mean that it has a wide range of applications, for example as a multipurpose adhesive, as a sealant, and in medicine.

Archaeological evidence for birch bark tar covers a broad geographic range from the UK to the Baltic and from the Mediterranean to Scandinavia.

In the east and north of this range there is continuity of use to modern times but in western Europe and the British Isles the use of birch bark tar has generally been viewed as limited to prehistory, with gradual displacement by pine tars during the Roman period.

The new identifications, reported today in the Journal of Archaeological Science: Reports, came from two early medieval sites in the east of England.

The first was a small lump of dark material found in a child's grave of the Anglo-Saxon period (440-530 AD) in Cambridge (analysed by the Organic Geochemistry Unit, University of Bristol, for Oxford Archaeology East).

The other tar (analysed by scientists at the British Museum) was discovered coating the interior of a ceramic container associated with a 5th-6th century cemetery site at Ringlemere in Kent.

The child in the Cambridge grave was likely a girl, aged seven to nine years old. She was buried with a variety of grave goods, including brooches and beads on her chest, as well as an iron knife, a copper alloy girdle hanger and an iron ring, which, together with the dark lump of material, were contained within a bag hanging from a belt at her waist.

The different contexts of the finds point to diverse applications of the material.

From pathological indicators on the child's skeleton, the team surmise that the tar may have been used for medicinal purposes, as birch bark tar has a long history in medicine owing to its antiseptic properties.

The tar in the ceramic vessel from Ringlemere might have been used for processing the tar or sealing the container.

Both of the tars were found to contain fatty material, possibly added to soften the tar or, in the case of the container, possibly indicating multiple uses.

Dr Rebecca Stacey from the British Museum's Department of Scientific Research said: "The manufacture and use of birch bark tar is well known from prehistoric times, but these finds indicate either a much longer continuity of use of this material than has been recognised before, or perhaps a reintroduction of the technology in eastern regions at this time."

Dr Julie Dunne, from the University of Bristol's School of Chemistry, added: "These results present the first identification of birch bark tar from early medieval archaeological contexts in the UK.

"Interestingly, they are from two different contexts, one in a ceramic pot, which suggests it may have been used to process birch bark into tar and the other as an 'unknown' lump in a child grave of the Anglo-Saxon period. The pathological indicators on the child skeleton suggests the birch bark tar may have been used for medicinal purposes."

Dr Ian Bull, also from the University of Bristol's School of Chemistry, said: "This is a great example of how state-of-the-art chemical analyses have been able to re-characterise an otherwise mundane object as something of extreme archaeological interest, providing possible insights into medicinal practices in the Middle Ages."

Credit: 
University of Bristol