
URI biologist provides framework for national invasive species policy, implementation

KINGSTON, R.I. - January 23, 2020 - A special issue of the journal Biological Invasions, co-edited by University of Rhode Island ecologist Laura Meyerson and University of Tennessee biologist Daniel Simberloff, provides a pathway to strengthening national policies and implementing strategies for addressing a growing threat to national security - invasive species.

The issue, published Jan. 17, is an output of the 2016-2018 National Invasive Species Council Management Plan. It reflects more than three years of work coordinated by the council's secretariat, with contributions from hundreds of experts.

"This comprehensive special issue has been published at a critical time," said Meyerson, URI professor of natural resources science, who has studied invasive species for 25 years. "The United States is at greater risk for biological invasions and biosecurity breaches because the Trump administration has recently taken steps to weaken U.S. defenses against invasions."

The council's former executive director, Jamie K. Reaser, who oversaw the work and authored most of the papers, said: "Invasive species are a threat to national security, a threat that the U.S. and many other countries are not adequately responding to despite the warnings by experts. This collection of papers is thus a strategic plan for national defense - the defense of our economies, food and water supplies, ecological systems, even our lives."

Meyerson noted that the already meager budget for the National Invasive Species Council was cut by half, and the multi-stakeholder Invasive Species Advisory Committee terminated, making it increasingly difficult for federal agencies to effectively collaborate or garner input on the issue from outside experts.

"It also hamstrings the ability of the federal government to work holistically with non-federal stakeholders to address invasive species," she said. "Invasive species cost the United States hundreds of billions of dollars annually. We need a coordinated federal effort to strengthen our national biosecurity."

The special issue of Biological Invasions features 12 complementary papers intended to inform the development and implementation of a national program for the early detection of and rapid response to invasive species.

"The lack of a comprehensive, coordinated, early-warning rapid-response system is the weakest link in our woefully inadequate response to the economic and ecological devastation wrought by biological invasions," said Simberloff.

"These papers finally show how the United States can fashion a strong, efficient system to defend the nation against this scourge - a long overdue step."

The papers include a new comprehensive framework and blueprint for a national early detection and rapid response program, as well as technical guidance for implementation, including information management, taxonomic identification, technology advancement, risk screening, target analysis, watch lists, legal frameworks and incident command systems.

"The technology paper is especially exciting because of its great ideas for really neat technologies - like acoustic technologies and genetic technologies - that we should be exploiting and investing in to address invasives," Meyerson said.

According to Meyerson and Simberloff, invasive species pose a significant threat to national security by adversely impacting a wide range of critical services, from food and water supplies to infrastructure stability, human health and military readiness. No other national security issue, they said, poses such a broad threat while garnering so little preparedness and response.

The biologists note that the federal government is primarily responsible for keeping invasive species from entering the U.S. by providing sufficient border protection. Border security failures result in the burden of invasive species management being passed on to state, tribal and local governments, as well as to industry, private landowners and individual citizens.

"Every sector of the economy is impacted by invasive species, directly or indirectly," Meyerson said. "The costs and seriousness of the issue are underrecognized in this country."

Meyerson and Simberloff are the editors-in-chief of Biological Invasions, and at different times they served on the Invasive Species Advisory Committee. They have also worked on federal invasive species policy issues since President Bill Clinton issued the first executive order on the subject in 1999.

"My goal is to keep a major focus on this issue because it's one of the most important issues of our day, right up there with climate change," concluded Meyerson. "It's an issue that's really costing the United States in terms of dollars, health, and biodiversity, so we need to pay attention and strengthen rather than weaken our actions."

Credit: 
University of Rhode Island

A heart-healthy protein from bran of cereal crop

Foxtail millet is an annual grass grown widely as a cereal crop in parts of India, China and Southeast Asia. Milling the grain removes the hard outer layer, or bran, from the rest of the seed. Now, researchers have identified a protein in this bran that can help stave off atherosclerosis in mice genetically prone to the disease. They report their results in ACS' Journal of Agricultural and Food Chemistry.

Atherosclerosis, or narrowing of the arteries because of plaque buildup, is the leading cause of heart disease and stroke. Plaques form when immune cells called monocytes take up oxidized low-density lipoprotein cholesterol (ox-LDL) in the artery wall. These cells then secrete pro-inflammatory cytokines, causing aortic smooth muscle cells to migrate to the site. Eventually, a plaque made up of cholesterol, cells and other substances forms. Drugs called statins can treat atherosclerosis by lowering LDL levels, but some people suffer from side effects. Zhuoyu Li and colleagues previously identified a protein in foxtail millet bran that inhibits the migration of colon cancer cells. They wondered if the protein, called foxtail millet bran peroxidase (FMBP), could also help prevent atherosclerosis.

To find out, the researchers treated human aortic smooth muscle cells and monocytes in petri dishes with FMBP. The millet protein reduced the uptake of lipids by both cell types and reduced the migration of smooth muscle cells. In monocytes, FMBP treatment blocked the expression of two key proteins involved in atherosclerosis. Next, the team fed mice that were genetically predisposed to atherosclerosis a high-fat diet. Mice that were then treated with either FMBP or a statin had far fewer plaques than untreated mice. The FMBP-treated mice also had elevated blood levels of high-density lipoprotein cholesterol (HDL), the "good cholesterol." Based on these results, FMBP is a natural product with great potential in the prevention and treatment of atherosclerosis, the researchers say.

Credit: 
American Chemical Society

Policymakers join experts at the European Parliament for radioligand therapy report launch

image: The report launched January 22, 2020.

Image: 
The Health Policy Partnership

Ahead of the European Commission's official launch of 'Europe's Beating Cancer Plan', The Health Policy Partnership and an expert-led steering committee met at the European Parliament in Brussels today to launch a new report, Radioligand therapy: realising the potential of targeted cancer care.

The event, co-hosted by Tanja Fajon MEP (S&D, Slovenia) and Ewa Kopacz MEP (EPP, Poland), featured presentations on the growing importance of radioligand therapy as part of cancer care, led by patient representatives and experts in oncology, nuclear medicine and European health policy. Speakers considered the political and practical actions needed to create an enabling environment for radioligand therapy in the EU and to better integrate it into current oncology approaches.

Radioligand therapy delivers radiation directly to cancer cells, using structural differences to target these specific cells anywhere in the body while leaving healthy cells largely unaffected. It is an increasingly promising element of cancer care and its use has expanded significantly in recent years - but uptake and availability remain highly variable across Europe. Radioligand therapy is currently approved for use in neuroendocrine cancers and metastatic castrate-resistant prostate cancer that has spread to bones, and may have applications for many different types of cancer and even other diseases.

Discussions at the launch event centred around the barriers and recommendations for radioligand therapy identified in the new report. Barriers include low awareness and understanding within the health community and unclear models of care. The proposed recommendations to address these challenges range from increasing use of multidisciplinary care in oncology to boosting investment in real-world data collection.

Suzanne Wait, Managing Director, The Health Policy Partnership says: 'The challenges to integrating radioligand therapy into cancer care are not unique to this form of treatment - and reflecting on them from all perspectives (that of clinicians, patients, regulatory agencies, hospitals and policymakers) may help progress towards more personalised and integrated models of cancer care.'

Radioligand therapy can be personalised to individual patients and relies on strong multidisciplinary teamwork among expert clinicians. Cancer is the second highest cause of death in Europe and its prevalence is set to increase in the coming years. As more people are living with cancer, and living longer with the disease, quality of life is increasingly being prioritised in treatment and care planning. With fewer side effects than conventional cancer treatments, radioligand therapy can help cancer patients live with improved quality of life.

To download a copy of the Radioligand therapy: realising the potential of targeted cancer care report, visit radioligandtherapy.com.

Credit: 
The Health Policy Partnership

Community-based counselors help mitigate grief among children orphaned in East Africa

image: A lay counselor guides Kenyan children in a stress-relief exercise. A University of Washington/Duke University study examined the effects of cognitive behavioral therapy, and the training of lay counselors to provide that therapy, on children who have been orphaned in East Africa.

Image: 
Ace Africa

A first-of-its-kind clinical trial involving more than 600 children in Kenya and Tanzania, in which community members were trained to deliver mental health treatment, showed improvement in participants' trauma-related symptoms up to a year after they received therapy.

Led by the University of Washington and Duke University, researchers trained laypeople as counselors to deliver treatment in both urban and rural communities in Kenya and Tanzania, and evaluated the progress of children and their guardians through sessions of trauma-focused cognitive behavioral therapy.

The study will be published online Jan. 22 in JAMA Psychiatry.

An estimated 140 million children around the world have experienced the death of a parent, which can result in grief, depression, anxiety and other physical and mental health conditions.

In low- and middle-income countries, many of those children end up living with relatives or other caregivers in communities where mental health services are typically unavailable.

"Very few people with mental health needs receive treatment in most places in the world, including many communities in the U.S.," said lead author Shannon Dorsey, a UW professor of psychology. "Training community members, or 'lay counselors' to deliver treatment helps increase the availability of services."

Cognitive behavioral therapy (CBT), a type of talk therapy, generally involves focusing on thoughts and behavior, and how changing either or both can lead to feeling better. When CBT is used for traumatic events, it involves talking about the events and related difficult situations instead of trying to avoid thinking about or remembering them. The approach has been tested before among children, and in areas with little access to mental health services, Dorsey said, but this study is the first clinical trial outside high-income countries to examine sustained improvement in post-traumatic stress symptoms in children over time.

This research built on the work of study co-author Kathryn Whetten, a professor of public policy and global health at Duke. Whetten has been conducting longitudinal research in Tanzania and four other countries on the health outcomes of some 3,000 children who have lost one or both parents. “While concerned with the material needs of the household, caregivers of the children repeatedly asked the study team to find ways to help with the children’s behavioral and emotional ‘problems’ that made it so that the children did not do well in school, at home or with other children. We knew these behaviors were expressions of anxiety that likely stemmed from their experiences of trauma,” Whetten said. “We therefore sought to adapt and test an intervention that could help these children succeed.”

Africa is home to about 50 million children who have been orphaned, most due to HIV/AIDS and other health conditions.

For the Kenya-Tanzania study, two local nongovernmental organizations, Ace Africa and Tanzania Women Research Foundation, recruited counselors and trained them in CBT methods. Based on researchers' prior experience in Africa, the team adapted the CBT model and terminology, structuring it in groups of children in addition to one-on-one sessions, and referring to the sessions as a "class" offered in a familiar building such as a school, rather than as "therapy" in a clinic. The changes were aimed at reducing stigma and boosting participation in and comfort with the program, Dorsey said.

"Having children meet in groups naturally normalized their experiences. They could see, as one child said, 'I'm not the only one who worries about who will love me now that my mama is gone.' They also got to support each other in practicing different skills to cope with feelings and in thinking in different ways to feel better," she said.

Counselors provided 12 group sessions over 12 weeks, along with three to four individual sessions per child. Caregivers participated in their own group sessions, a few individual sessions, and group activities with the children. Activities and discussions were centered on helping participants process the death of a loved one: being able to think back on and talk about the circumstances surrounding the parent's death, for example, and learning how to rely on memories as a source of comfort. In one activity, children drew a picture of something their parent did with them, such as cooking a favorite meal or walking them to school. Even though children could no longer interact with the parent, the counselors explained that children could hold onto these memories and what they learned from their parents, like how to cook the meal their mother made, or the songs their father taught them.

"The children learned that you don't lose the relationship," Dorsey said. "You have to convert that relationship to one of memory, but it is still a relationship that can bring comfort."

The guardians learned similar coping skills, she said. Children who have experienced parental death are usually cared for by a relative, so caregivers, be they the other parent, a grandparent or an aunt or uncle, were also grieving the loss of a family member while taking on the challenge of an additional child in the home.

Participants were interviewed at the conclusion of the 12-week program, and again six and 12 months later. A control group of children, who received typical community services offered to children who are orphaned, such as free uniforms and other, mostly school fee-related help, was evaluated concurrently.

Improvement in children's post-traumatic stress symptoms and grief was most pronounced in both urban and rural Kenya. Researchers attribute the success there partly to the greater adversity children faced, such as higher food scarcity and poorer child and caregiver health, and thus the noticeable gains that providing services could yield. In contrast, in rural Tanzania, children in both the counseling and control groups showed similar levels of improvement, which researchers are now trying to understand. One possible explanation with some support from an ongoing qualitative analysis, Dorsey said, is that children and caregivers in Tanzania, and particularly in rural areas, may have been more likely to share with others in their village what they learned from therapy.

Even with the different outcomes in the two countries, the intervention by lay counselors who were trained by experienced lay counselors shows the effectiveness and scalability of fostering a local solution, Dorsey said. Members of the research team already have been working in other countries in Africa and Asia to help lay counselors train others in their communities to work with children and adults.

"If we grow the potential for lay counselors to train and supervise new counselors and provide implementation support to systems and organizations in which these counselors are embedded, communities can have their own mental health expertise," Dorsey said. "That would have many benefits, from lowering cost to improving the cultural and contextual fit of treatments."

Credit: 
University of Washington

First ancient DNA from West/Central Africa illuminates deep human past

image: The Shum Laka rock shelter in Cameroon, home to an ancient population that bears little genetic resemblance to most people who live in the region today.

Image: 
Pierre de Maret

An international team led by Harvard Medical School scientists has produced the first genome-wide ancient human DNA sequences from west and central Africa.

The data, recovered from four individuals buried at an iconic archaeological site in Cameroon between 3,000 and 8,000 years ago, enhance our understanding of the deep ancestral relationships among populations in sub-Saharan Africa, which remains the region of greatest human diversity today.

The findings, published Jan. 22 in Nature, provide new clues in the search to identify the populations that first spoke and spread Bantu languages. The work also illuminates previously unknown "ghost" populations that contributed small portions of DNA to present-day African groups.

Map of Africa with Cameroon in dark blue and approximate location of Shum Laka marked with star. Image adapted from Alvaro1984 18/Wikimedia Commons

Research highlights:

DNA came from the remains of two pairs of children who lived around 3,000 years ago and 8,000 years ago, respectively, during the transition from the Stone Age to the Iron Age.

The children were buried at Shum Laka, a rock shelter in the Grassfields region of northwestern Cameroon where ancient people lived for tens of thousands of years. The site has yielded prolific artifacts along with 18 human skeletons and lies in the region where researchers suspect Bantu languages and cultures originated. The spread of Bantu languages--and the groups that spoke them--over the past 4,000 years is thought to explain why the majority of people from central, eastern and southern Africa are closely related to one another and to west/central Africans.

Surprisingly, all four individuals are most closely related to present-day central African hunter-gatherers, who have very different ancestry from most Bantu speakers. This suggests that present-day Bantu speakers in western Cameroon and across Africa did not descend from the sequenced children's population.

One individual's genome includes the earliest-diverging Y chromosome type, found almost nowhere outside western Cameroon today. The findings show that this oldest lineage of modern human males has been present in that region for more than 8,000 years, and perhaps much longer.

Genetic analyses indicate that there were at least four major lineages deep in human history, between 200,000 and 300,000 years ago. This radiation hadn't been identified previously from genetic data.

Contrary to common models, the data suggest that central African hunter-gatherers diverged from other African populations around the same time as southern African hunter-gatherers did.

Analyses reveal another set of four branching human lineages between 60,000 and 80,000 years ago, including the lineage known to have given rise to all present-day non-Africans.

The Shum Laka individuals themselves harbor ancestry from multiple deep lineages, including a previously unknown, early-diverging ancestry source in West Africa.

Credit: 
Harvard Medical School

New research finds Earth's oldest asteroid strike linked to 'big thaw'

video: Curtin University scientists have discovered Earth's oldest asteroid strike occurred at Yarrabubba, in outback Western Australia, and coincided with the end of a global deep freeze known as a Snowball Earth.

Image: 
Curtin University

Curtin University scientists have discovered Earth's oldest asteroid strike occurred at Yarrabubba, in outback Western Australia, and coincided with the end of a global deep freeze known as a Snowball Earth.

The research, published in the leading journal Nature Communications, used isotopic analysis of minerals to calculate the precise age of the Yarrabubba crater for the first time, putting it at 2.229 billion years old - making it 200 million years older than the next oldest impact.

Lead author Dr Timmons Erickson, from Curtin's School of Earth and Planetary Sciences and NASA's Johnson Space Center, together with a team including Professor Chris Kirkland, Associate Professor Nicholas Timms and Senior Research Fellow Dr Aaron Cavosie, all from Curtin's School of Earth and Planetary Sciences, analysed zircon and monazite minerals from the base of the eroded crater that were 'shock recrystallized' by the asteroid strike in order to determine the exact age of Yarrabubba.
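The release does not name the dating technique, but shock-recrystallized zircon and monazite are conventionally dated by uranium-lead (U-Pb) geochronology. As an illustration of the underlying relation only (not necessarily the exact treatment in the Nature Communications paper), the age t follows from the accumulation of radiogenic lead:

    \frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}} = e^{\lambda_{238}\,t} - 1,
    \qquad \lambda_{238} \approx 1.55125 \times 10^{-10}\ \mathrm{yr}^{-1}

    t = 2.229\ \mathrm{Gyr} \;\Rightarrow\; \lambda_{238}\,t \approx 0.346,
    \qquad \frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}} \approx e^{0.346} - 1 \approx 0.41

Here Pb* denotes radiogenic lead; agreement ("concordance") with the companion 235U-207Pb decay system is what allows an impact age to be pinned down this precisely.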

The team inferred that the impact may have occurred into an ice-covered landscape, vaporised a large volume of ice into the atmosphere, and produced a 70km diameter crater in the rocks beneath.

Professor Kirkland said the timing raised the possibility that the Earth's oldest asteroid impact may have helped lift the planet out of a deep freeze.

"Yarrabubba, which sits between Sandstone and Meekatharra in central WA, had been recognised as an impact structure for many years, but its age wasn't well determined," Professor Kirkland said.

"Now we know the Yarrabubba crater was made right at the end of what's commonly referred to as the early Snowball Earth - a time when the atmosphere and oceans were evolving and becoming more oxygenated and when rocks deposited on many continents recorded glacial conditions".

Associate Professor Nicholas Timms noted the precise coincidence between the Yarrabubba impact and the disappearance of glacial deposits.

"The age of the Yarrabubba impact matches the demise of a series of ancient glaciations. After the impact, glacial deposits are absent in the rock record for 400 million years. This twist of fate suggests that the large meteorite impact may have influenced global climate," Associate Professor Timms said.

"Numerical modelling further supports the connection between the effects of large impacts into ice and global climate change. Calculations indicated that an impact into an ice-covered continent could have sent half a trillion tons of water vapour - an important greenhouse gas - into the atmosphere. This finding raises the question whether this impact may have tipped the scales enough to end glacial conditions."

Dr Aaron Cavosie said the Yarrabubba study may have potentially significant implications for future impact crater discoveries.

"Our findings highlight that acquiring precise ages of known craters is important - this one sat in plain sight for nearly two decades before its significance was realised. Yarrabubba is about half the age of the Earth and it raises the question of whether all older impact craters have been eroded or if they are still out there waiting to be discovered," Dr Cavosie said.

Credit: 
Curtin University

New groundbreaking method could improve the accuracy of data used to produce lifesaving drugs

A new high-throughput method has revealed metals previously undetected in 3-D protein structures. The study, led by the Universities of Surrey and Oxford, is thought to have major implications for scientists using protein structure data.

Proteins that contain metal, known as metalloproteins, play important roles in biology by regulating various pathways in the body that are often targets for life-saving drugs. While the amount of metal in such proteins is usually tiny, it is crucial to determining the function of these complex molecules; however, there hasn't previously been a reliable analytical method for determining the identity and quantity of metal atoms in metalloproteins.

In a study published in the Journal of the American Chemical Society, an international multi-disciplinary team of researchers report that they have developed a way to unambiguously identify and count metal atoms in proteins in an efficient and routine way. Using this approach, they have revealed new, previously hidden information.

The method was developed by a University of Surrey physicist, Dr Geoffrey W. Grime, and a crystallographer from the University of Oxford, Elspeth F. Garman. In a breakthrough described in the paper, the researchers have now automated the method to allow many samples to be analysed sequentially in a high-throughput 'pipeline'.

The significance of the method was quickly realised by protein chemist Edward H. Snell of the Hauptman-Woodward Medical Research Institute and the University at Buffalo in the US, who set up a study in which the team measured 30 randomly selected metalloproteins that are already in the global repository of protein structures called the Protein Data Bank (PDB).

The extraordinary results proved the immense importance of the new method: they showed that the techniques previously used to determine some of these 30 random protein structures had either misidentified the metal atoms or, in some cases, completely missed them. In about half of the samples studied the metals built into the PDB models were incorrect.

The PDB is a critical resource for researchers worldwide. In 2017, there were on average 1.86 million downloads per day in the US alone. An enormous number of researchers use PDB structures without knowledge of the potential fundamental errors that may be present. Extrapolating from the published results suggests that over 350,000 models downloaded per day may not contain the correct metal. This has profound implications for those using the models, whose understanding of them may therefore be flawed.
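The release does not say what share of downloaded structures are metalloproteins, so the extrapolation can only be reconstructed approximately. Assuming (our assumption, not a figure from the study) that roughly 40 percent of PDB entries contain metals, and applying the error rate observed in the 30-protein sample:

    1{,}860{,}000\ \text{downloads/day} \times 0.4 \times 0.5 \approx 372{,}000\ \text{downloads/day}

which is consistent with the "over 350,000" figure quoted above.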

Professor Elspeth Garman from the University of Oxford said: "I sat with my collaborator from the University of Buffalo and when we crunched the numbers, we both immediately realised we'd made a discovery that would - at best - result in a significant debate within our scientific community. We turned the numbers into a picture and hidden within the data was an explanation of how this molecular machine worked."

Dr Geoffrey Grime from the University of Surrey said: "This is an outstanding example of interdisciplinary scientific collaboration where techniques developed initially in nuclear physics labs have made a big contribution to molecular biology."

Credit: 
University of Surrey

Residues in fingerprints hold clues to their age

image: Levels of an unsaturated triacylglycerol decline in fingerprints from an individual from day 0 (top) to day 1 (middle) and day 3 (bottom). 

Image: 
Adapted from Analytical Chemistry 2020, DOI: 10.1021/acs.analchem.9b04765

Police have long relied on the unique whorls, loops or arches encoded in fingerprints to identify suspects. However, they have no way to tell how long ago those prints were left behind -- information that could be crucial to a case. A preliminary new study in ACS' Analytical Chemistry suggests that could change. Researchers report that they could link compounds contained in fingerprints with their age.   

By determining the age of fingerprints, police could get an idea of who might have been present around the time a crime was committed. This information could, for example, contradict a suspect's claim of having visited the scene only at some earlier time. Scientists have already begun mining fingerprint residues for clues to the identity of the person who made them, but timing has proven more difficult to reliably pin down. Notably, past research has shown that a gas chromatography-mass spectrometry method succeeded in determining whether prints were more or less than eight days old; however, investigators often need more precision. To get a better idea of when prints were deposited, Young Jin Lee and colleagues looked to reactions already suspected to take place in these residues, in which ozone in the air reacts with unsaturated triacylglycerols left by a fingertip.

Using prints collected from three donors, the researchers tracked shifting levels of triacylglycerols using mass spectrometry imaging, a technique that leaves the evidence intact. They found they could reliably determine the triacylglycerol degradation rate for each person over the course of seven days. But the rate differed among individuals, with one person's triacylglycerols declining more gradually than the others. The researchers attribute this difference to higher levels of lipids in that individual's fingerprints. The method also worked on residues that had been dusted with forensic powder. The researchers say that although a large-scale study is needed to better understand how lipid levels affect triacylglycerol degradation, this analysis is a first step toward developing a better fingerprint aging test. 
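The release does not specify the kinetic model used to extract each donor's degradation rate. As a minimal sketch of the general idea only (hypothetical intensity values and assumed first-order kinetics, not the study's data or code), one could fit the decline of a triacylglycerol signal over several days like this:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical relative intensities of an unsaturated triacylglycerol in one
    # donor's fingerprints, measured on days 0, 1, 3 and 7 (illustrative values only).
    days = np.array([0.0, 1.0, 3.0, 7.0])
    intensity = np.array([1.00, 0.78, 0.47, 0.21])

    def first_order_decay(t, i0, k):
        """Assumed first-order loss of the lipid, e.g. via reaction with ozone."""
        return i0 * np.exp(-k * t)

    (i0_fit, k_fit), _ = curve_fit(first_order_decay, days, intensity, p0=(1.0, 0.2))
    print(f"fitted initial intensity: {i0_fit:.2f}")
    print(f"fitted degradation rate: {k_fit:.3f} per day")

    # In principle, a print of unknown age could then be bracketed by inverting the
    # fitted curve, t = ln(i0 / measured_intensity) / k, provided the donor-specific
    # rate k is known or can be bounded.

Because the study found that the rate differs between individuals, any such estimate would need a per-donor (or population-bounded) value of k, which is exactly the limitation the researchers say a larger study should address.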

Credit: 
American Chemical Society

Spikes in blood pressure among young adults spell trouble in mid-age

DURHAM, N.C. -- Wide swings in blood pressure readings among young adults are associated with a higher risk of cardiovascular disease by middle age, a new analysis led by Duke Health researchers shows.

The finding, publishing Jan. 22 in JAMA Cardiology, suggests that the current practice of averaging blood pressure readings to determine whether medications are necessary could be masking a potential early warning sign from the fluctuations themselves.

"If a patient comes in with one reading in December and a significantly lower reading in January, the average might be within the range that would appear normal," said lead author Yuichiro Yano, M.D., Ph.D., assistant professor in Duke's Department of Family Medicine and Community Health.

"But is that difference associated with health outcomes in later life?" Yano said. "That's the question we sought to answer in this study, and it turns out the answer is yes."

Yano and colleagues arrived at their conclusion after analyzing 30 years of data from a large, diverse cohort of young people enrolled in the Coronary Artery Risk Development in Young Adults study between March 1985 and June 1986.

Of the 3,394 people studied, about 46% were African American and 56% were women. The patients had regular blood pressure checks, with patterns evaluated across five visits, including at two, five, seven and 10 years. At the 10-year mark, the average age of the patients was about 35.

The main reading of concern to Yano's research team was the systolic blood pressure level, the upper number in a blood pressure reading, which measures the pressure in the blood vessels when the heart pumps. A systolic blood pressure reading over 130 is considered hypertensive, and hypertension has long been a major risk factor for cardiovascular disease.

Yano and colleagues were able to identify which young people had variations in systolic blood pressure by the age of 35 and then track them over the next 20 years and see whether there appeared to be a correlating increase in cardiovascular disease.

Over those years, the study recorded 181 deaths and 162 cardiovascular events among participants, including fatal and nonfatal coronary heart disease, hospitalization for heart failure, stroke, transient ischemic attack, and stent procedures for blocked arteries.

The researchers found that each 3.6-mm Hg spike in systolic blood pressure during young adulthood was associated with a 15-percent higher risk for heart disease events, independent of the averaged blood pressure levels during young adulthood and any single systolic blood pressure measurement in midlife.
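One rough way to read that number (assuming the 15 percent applies per 3.6-mm Hg increment and that increments compound multiplicatively, as in a typical proportional-hazards analysis; the exact specification is in the JAMA Cardiology paper, not this summary):

    \text{relative risk} \approx 1.15^{\,\Delta / 3.6\ \mathrm{mm\,Hg}},
    \qquad \Delta = 7.2\ \mathrm{mm\,Hg} \;\Rightarrow\; 1.15^{2} \approx 1.32

Under those assumptions, a swing twice as large would correspond to roughly a 32 percent higher risk of midlife cardiovascular events.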

"Current guidelines defining hypertension and assessing the need for anti-hypertensive therapies ignore variability in blood pressure readings," Yano said. "I think there has been a belief that variability is a chance phenomenon, but this research indicates maybe not. Variability matters."

Yano said this study provides strong evidence that doctors and patients should be alert to blood pressure variations in early adulthood, when there is time to instill lifestyle changes that could improve and even extend a person's life.

Credit: 
Duke University Medical Center

Is it time to stop ringing the cancer bell?

audio: Cancer patients often ring a bell to commemorate the end of treatment -- but until now, the many effects of ringing the bell have not been studied. Patrick Williams, M.D., and Richard Jennelle, M.D., authors of "The Cancer Bell: Too Much of a Good Thing?" discuss their recent article and the surprising results reported within. Psychologist Dr. Andrea Bonior talks about ways that emotional arousal and individual patient desires can factor into the memory of treatment distress. Cancer survivor Valerie Powell tells her story of ringing the bell, what it meant to her at the time, and how she looks back on that experience now. Duration 29.17 min/29.2 MB

Image: 
International Journal of Radiation Oncology * Biology * Physics (Elsevier)

It's a scene that some cancer patients dream about: they celebrate the end of a course of radiation or chemotherapy by ringing a bell, surrounded by family and cheering cancer clinic staff. Indeed, many patients say they love the graduation-like ceremony and the sense of closure it gives them.

But a study published in the October 2019 issue of the International Journal of Radiation Oncology * Biology * Physics (and made available online ahead of publication in June 2019) sheds light on some unintended consequences of the widespread practice and raises questions about whether it's time to discontinue its use. The survey of more than 200 patients with cancer - half of whom rang the bell at the end of treatment and half of whom did not - found that those who rang a bell remembered treatment as more distressful than those who finished without ringing a bell.

That outcome surprised the study's lead investigator, Patrick A. Williams, MD, a radiation oncologist who led the study while completing his residency at the Keck School of Medicine of University of Southern California in Los Angeles. "We expected the bell to improve the memory of treatment distress," he said. "But in fact, the opposite occurred. Ringing the bell actually made the memory of treatment worse, and those memories grew even more pronounced as time passed."

"We think this is because ringing the bell creates a 'flashbulb event' in a patient's life - that is, a vivid snapshot of their memories from that time," said Dr. Williams, explaining that events become more deeply embedded in our memories if emotions are aroused, due to connections in the brain between memory and emotion. "Rather than locking in the good feelings that come with completing treatment, however, ringing the bell appears to lock in the stressful feelings associated with being treated for cancer."

There are other reasons to reconsider the usefulness of ringing a cancer bell at the end of treatment, a practice introduced to the U.S. in 1996 that is now commonplace at cancer clinics across the country - including 51 of 62 National Cancer Institute-designated cancer centers, according to a December 2018 JAMA Oncology article. While this highly symbolic ceremony may be joyful for those ringing the bell, not everyone within earshot feels the same. The practice has received criticism from patient advocates who note that there are other patients whose treatment may not end on a positive note. For these patients, hearing the bell ring can arouse feelings of anger, resentment or depression, since they will not likely be given the option to ever ring a bell.

"If I ran a cancer clinic, there would be no bell in the infusion area," Katherine O'Brien, a patient with stage IV cancer and an advocate for the Metastatic Breast Cancer Network, in a 2018 essay. "How would YOU like to be there week after week in perpetuity attached to an IV pole as others celebrate their final appointments?"

What's more, the ceremony - meant to signify the beginning of life cancer-free - could also set up false hope for people whose cancer recurs, noted Dr. Williams. "Many patients I've spoken with mention a lingering fear of recurrence that may impact their memory of treatment," he said.

Anne Katz, PhD, author of the book After You Ring the Bell: 10 Challenges for the Cancer Survivor, notes that patients may experience negative, long-term consequences following cancer treatment including health worries, depression and fatigue. "While the end of active treatment, be it chemotherapy or radiation therapy, is certainly a milestone, it is not the end of treatment or side effects for many," she wrote.

Dr. Williams said he's not ready to call for a ban on ringing the cancer bell but would like to see his study - the first of its kind, he noted - replicated with larger groups of patients.

He also proposed that clinics consider and investigate alternatives, such as ringing the bell to signal the start of treatment or awarding a certificate at the end of treatment in a quieter, less public ceremony. "The important thing is not to stir emotions at the end of treatment," he said. "Some people have small gifts or certificates of completion to mark the end. I think these are okay because they do not arouse emotions in the same way that ringing a bell to a crowd of applauding people does."

"We can consider other avenues that would allow patients to celebrate reaching the end of their treatment, but without negatively reinforcing things that perhaps might best be forgotten," agreed Richard Jennelle, MD, an associate professor of clinical radiation oncology at the Keck School and senior investigator on the study. Dr. Jennelle joined Dr. Williams, as well as Valerie Powell, a cancer survivor, and psychologist Andrea Bonior, PhD, in a recent podcast to discuss the findings, which the researchers conceded are "counterintuitive."

"Many well-intended practices can lead to bad outcomes," concluded Dr. Williams. "We should study interventions before implementing them - even ones that are well-intended."

Credit: 
American Society for Radiation Oncology

Texas A&M engineers develop recipe to dramatically strengthen body armor

image: A close-up view of boron carbide crystals

Image: 
Texas A&M University College of Engineering

According to ancient lore, Genghis Khan instructed his horsemen to wear silk vests underneath their armor to better protect themselves against an onslaught of arrows during battle. Since the time of Khan, body armor has significantly evolved -- silk has given way to ultra-hard materials that act like impenetrable walls against most ammunition. However, even this armor can fail, particularly if it is hit by high-speed ammunition or other fast-moving objects.

Researchers at Texas A&M University have formulated a new recipe that can prevent weaknesses in modern-day armor. By adding a tiny amount of the element silicon to boron carbide, a material commonly used for making body armor, they discovered that bullet-resistant gear could be made substantially more resilient to high-speed impacts.

"For the past 12 years, researchers have been looking for ways to reduce the damage caused by the impact of high-speed bullets on armor made with boron carbide," said Dr. Kelvin Xie, assistant professor in the Department of Materials Science and Engineering. "Our work finally addresses this unmet need and is a step forward in designing superior body armor that will safeguard against even more powerful firearms during combat."

This study was published in the October issue of the journal Science Advances.

Boron carbide, dubbed "black diamond," is a man-made material that ranks second in hardness among synthetic materials, behind only cubic boron nitride. Unlike cubic boron nitride, however, boron carbide is easier to produce on a large scale. Also, boron carbide is harder and lighter than other armor materials like silicon carbide, making it an ideal choice for protective gear, particularly ballistic vests.

Despite boron carbide's many desirable qualities, its main shortfall is that it can be damaged very quickly upon high-velocity impact.

"Boron carbide is really good at stopping bullets traveling below 900 meters per second, and so it can block bullets from most handguns quite effectively," said Xie. "But above this critical speed, boron carbide suddenly loses its ballistic performance and is not as effective."

Scientists know high-speed jolts cause boron carbide to have phase transformations -- a phenomenon where a material changes its internal structure such that it is in two or more physical states, like liquid and solid, at the same time. The bullet's impact thus converts boron carbide from a crystalline state where atoms are systematically ordered to a glass-like state where atoms are haphazardly arranged. This glass-like state weakens the material's integrity at the site of contact between the bullet and boron carbide.

"When boron carbide undergoes phase transformation, the glassy phase creates a highway for cracks to propagate," said Xie. "So, any local damage caused by the impact of a bullet easily travels throughout the material and causes progressively more damage."

Previous work using computer simulations predicted that adding a small quantity of another element, such as silicon, had the potential to make boron carbide less brittle. Xie and his group investigated if adding a tiny quantity of silicon also reduced phase transformation.

To simulate the initial impact of a high-speed bullet, the researchers made well-controlled dents on boron carbide samples with a diamond tip, whose width is smaller than a human hair. Then, under a high-powered electron microscope, they looked at the microscopic damage that was formed from the blows.

Xie and his collaborators found that even with tiny quantities of silicon, the extent of phase transformation went down by 30%, noticeably reducing the damage from the indentation.

Although silicon serves well to enhance boron carbide's properties, Xie explained that more experiments need to be done to know if other elements, like lithium and aluminum, could also improve boron carbide's performance.

In the near future, Xie predicts these stronger cousins of pure boron carbide will find other nonmilitary applications. One such use is in nuclear shields. He said using a touch of silicon in boron carbide changes the spacing between atoms and the empty spaces created might be good sites to absorb harmful radiation from nuclear reactors.

"Just as in cooking where a small sprinkle of spices can greatly boost flavor, by using a small amount of silicon we can dramatically improve the properties of boron carbide and consequently find novel applications for these ultrahard materials," Xie said.

Credit: 
Texas A&M University

Anthropologists confirm existence of specialized sheep-hunting camp in prehistoric Lebanon

TORONTO, ON - Anthropologists at the University of Toronto (U of T) have confirmed the existence more than 10,000 years ago of a hunting camp in what is now northeastern Lebanon - one that straddles the period marking the transition from nomadic hunter-gatherer societies to agricultural settlements at the onset of the last Stone Age.

Analysis of decades-old data collected from Nachcharini Cave high in the Anti-Lebanon mountain range that forms the modern-day border between Lebanon and Syria, shows the site was a short-term hunting camp that served as a temporary outpost to emerging and more substantial villages elsewhere in the region, and that sheep were the primary game.

The finding confirms the hypothesis of retired U of T archaeologist Bruce Schroeder, who excavated the site on several occasions beginning in 1972, but who had to discontinue his work when the Lebanese Civil War began in 1975.

"The site represents the best evidence of a special-purpose camp - not a village or settlement - in the region," said Stephen Rhodes, a PhD candidate in the Department of Anthropology in the Faculty of Arts & Science at U of T and lead author of a study published today in PLOS ONE. "The cave was a contemporary of larger settlements further south in the Jordan Valley, and is the first site of its kind to show the predominance of sheep among the animals hunted by its temporary inhabitants."

Radiocarbon dating of animal bones recovered from the site shows that it dates to an era known as the Pre-Pottery Neolithic A (PPNA), a period from about 10,000-8,000 BCE during which the cultivation of crops, the construction of mud-brick dwellings and other practices of domestication began to emerge. The stone tools found at the site are mostly tiny arrowheads used for hunting.
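For context on how such dates are obtained (standard radiocarbon practice, not detail specific to this study), a conventional radiocarbon age is derived from the fraction of modern carbon measured in the dated material:

    t = -\frac{1}{\lambda}\,\ln\!\left(\frac{A_{\mathrm{sample}}}{A_{\mathrm{modern}}}\right)
      = -8033\ \mathrm{yr} \times \ln\!\left(F^{14}\mathrm{C}\right)

where 8033 years is the Libby mean life of carbon-14. Converting that result into calendar dates such as the 10,000-8,000 BCE range quoted here additionally requires calibration against a curve like IntCal.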

The new dates presented place the main deposits at the cave securely in the PPNA.

"Previous dates established in the 1970s were problematic and far too recent for unknown reasons, possibly due to contamination or incorrect processing," said Rhodes, who coauthored the study with Professors Edward Banning and Michael Chazan, both members of the Department of Anthropology at U of T. "The results highlight the fact that people in the PPNA took advantage of a wide variety of habitats in a complex system of subsistence practices."

It was already known that sheep hunting was practiced in this region throughout periods that preceded the PPNA, and the evidence found at Nachcharini Cave reinforces that understanding. According to Rhodes, it consolidates our knowledge of the natural range of sheep, which bears on the potential beginnings of domestication in later periods.

"We are not saying that hunters at Nachcharini were engaged in early stages of this domestication," he said. "But the evidence of a local tradition makes this area a possible centre of sheep domestication later on."

Credit: 
University of Toronto

Study reveals 2 writers penned landmark inscriptions in 8th-century BCE Samaria

image: Ostraca (ink on clay inscriptions) from Samaria, the capital of biblical Israel. The inscriptions are dated to the early 8th century BCE. Colorized Ostraca images are courtesy of the Semitic Museum, Harvard University.

Image: 
American Friends of Tel Aviv University. Colorized images courtesy of the Semitic Museum, Harvard University.

The ancient Samaria ostraca -- eighth-century BCE ink-on-clay inscriptions unearthed at the beginning of the 20th century in Samaria, the capital of the biblical kingdom of Israel -- are among the earliest collections of ancient Hebrew writings ever discovered. But despite a century of research, major aspects of the ostraca remain in dispute, including their precise geographical origins -- either Samaria or its outlying villages -- and the number of scribes involved in their composition.

A new Tel Aviv University (TAU) study finds that just two writers were involved in composing 31 of the more than 100 inscriptions and that the writers were contemporaneous, indicating that the inscriptions were written in the city of Samaria itself.

Research for the study was conducted by Ph.D. candidate Shira Faigenbaum-Golovin, Dr. Arie Shaus, Dr. Barak Sober and Prof. Eli Turkel, all of TAU's School of Mathematical Sciences; Prof. Eli Piasetzky of TAU's School of Physics; and Prof. Israel Finkelstein, Jacob M. Alkow Professor of the Archaeology of Israel in the Bronze and Iron Ages, of TAU's Sonia and Marco Nadler Institute of Archaeology. The study was published in PLOS ONE on January 22, 2020.

The inscriptions list repetitive shipment details of wine and oil supplies to Samaria and span a period of at least seven years. For archaeologists, they also provide critical insights into the logistical infrastructure of the kingdom of Israel. The inscriptions feature the date of composition (year of a given monarch's reign), commodity type (oil, wine), name of a person, name of a clan and name of a village near the capital. Based on letter-shape considerations, the ostraca have been dated to the first half of the eighth century BCE, possibly during the reign of King Jeroboam II of Israel.

"If only two scribes wrote the examined Samaria texts contemporaneously and both were located in Samaria rather than in the countryside, this would indicate a palace bureaucracy at the peak of the kingdom of Israel's prosperity," Prof. Finkelstein explains.

"Our results, accompanied by other pieces of evidence, seem also to indicate a limited dispersion of literacy in Israel in the early eighth century BCE," Prof. Piasetzky says.

"Our interdisciplinary team harnessed a novel algorithm, consisting of image processing and newly developed machine learning techniques, to conclude that two writers wrote the 31 examined texts, with a confidence interval of 95%," said Dr. Sober, now a member of Duke University's mathematics department.

"The innovative technique can be used in other cases, both in the Land of Israel and beyond. Our innovative tool enables handwriting comparison and can establish the number of authors in a given corpus," adds Faigenbaum-Golovin.

The new research follows up from the findings of the group's 2016 study, which indicated widespread literacy in the kingdom of Judah a century and a half to two centuries later, circa 600 BCE. For that study, the group developed a novel algorithm with which they estimated the minimal number of writers involved in composing ostraca unearthed at the desert fortress of Arad. That investigation concluded that at least six writers composed the 18 inscriptions that were examined.

"It seems that during these two centuries that passed between the composition of the Samaria and the Arad corpora, there was an increase in literacy rates within the population of the Hebrew kingdoms," Dr. Shaus says. "Our previous research paved the way for the current study. We enhanced our previously developed methodology, which sought the minimum number of writers, and introduced new statistical tools to establish a maximum likelihood estimate for the number of hands in a corpus."

Next, the researchers intend to use their methodology to study other corpora of inscriptions from various periods and locations.

Credit: 
American Friends of Tel Aviv University

Deep diving scientists discover bubbling CO2 hotspot

image: A researcher collects gas samples at Soda Springs in the Philippines.

Image: 
University of Texas at Austin Jackson School of Geosciences

Diving 200 feet under the ocean surface to conduct scientific research can lead to some interesting places. For University of Texas at Austin Professor Bayani Cardenas, it placed him in the middle of a champagne-like environment of bubbling carbon dioxide with off-the-chart readings of the greenhouse gas.

Cardenas discovered the region - which he calls "Soda Springs" - while studying how groundwater from a nearby island could affect the ocean environment of the Verde Island Passage in the Philippines. The passage is one of the most diverse marine ecosystems in the world and is home to thriving coral reefs.

The amazing bubbling location, which Cardenas captured on video, is not a climate change nightmare. It is linked to a nearby volcano that vents out the gases through cracks in the ocean floor and has probably been doing so for decades or even millennia. However, Cardenas said that the high CO2 levels could make Soda Springs an ideal spot for studying how coral reefs may cope with climate change. The site also offers a fascinating setting to study corals and marine life that are making a home among high levels of CO2.

"These high CO2 environments that are actually close to thriving reefs, how does it work?" said Cardenas, who is a professor in the Jackson School of Geosciences at UT Austin. "Life is still thriving there, but perhaps not the kind that we are used to. They need to be studied."

Cardenas and his coauthors from institutions in the Philippines, the Netherlands and UT described Soda Springs along with multiple scientific findings about groundwater in a paper published this month in the journal Geophysical Research Letters.

The scientists measured CO2 concentrations as high as 95,000 parts per million (ppm), more than 200 times the concentration of CO2 found in the atmosphere. The readings range from 60,000 to 95,000 ppm and are potentially the highest ever recorded in nature. The CO2 levels fall quickly away from the seeps as the gas is diluted in the ocean, but the gas still creates an elevated CO2 environment along the rest of the coastline of the Calumpan Peninsula, with levels in the 400 to 600 ppm range.

Cardenas is a hydrologist and not an expert on reef systems. He discovered Soda Springs while researching whether groundwater from the nearby land could be discharging through the seafloor into the ocean, a phenomenon that is generally overlooked by scientists studying the water cycle, Cardenas said.

"It's an unseen flux of water from land to the ocean," he said. "And it's hard to quantify. It's not like a river where you have a delta and you can measure it."

The team tracked groundwater by testing for radon 222, a naturally occurring radioactive isotope that is found in local groundwater but not in open ocean water. Along with the CO2 bubbles, the team also found hotspots in the sea floor where groundwater was being discharged into the ocean. This is significant, said Cardenas, because the connection between the groundwater and ocean means that there is a pathway for pollutants from the island to make it to the reef system.
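The release does not spell out how the radon measurements translate into a discharge estimate, but radon-222 budgets of this general form are the standard tool for quantifying submarine groundwater discharge (an illustrative sketch, not necessarily the paper's exact formulation):

    F_{\mathrm{gw}} \approx \frac{\lambda\, I_{\mathrm{Rn}} + F_{\mathrm{atm}} + F_{\mathrm{mix}}}{C_{\mathrm{gw}}},
    \qquad \lambda = \frac{\ln 2}{3.82\ \mathrm{d}} \approx 0.18\ \mathrm{d}^{-1}

where I_Rn is the excess radon-222 inventory in the coastal water column, F_atm and F_mix are losses to the atmosphere and to mixing with offshore water, and C_gw is the radon concentration of the groundwater endmember. Because the isotope is abundant in local groundwater but essentially absent from open ocean water, the excess inventory traces the groundwater flux.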

This is particularly important for a place like the Philippines, he said, where coastal development is booming largely because of ecotourism driven by the nearby reefs, but the communities almost always depend on septic tanks instead of modern sewage systems. This means the development could drive pollution to the same reefs the economy relies on.

Cardenas has been scuba diving since his college days in the Philippines. Training in deep diving has allowed him to open up a portion of the ocean that is rarely studied.

"It's really a big part of the ocean that is left unexplored because it's too shallow for remotely operated vehicles and is too deep for regular divers," he said.

Conducting field work under water has also led Cardenas to develop new technical skills and techniques for collecting samples. Elco Luijendijk, a lecturer at the University of Göttingen in Germany who reviewed the study for the journal, said that these techniques - and the findings they enabled - represent major scientific strides.

"Underwater fieldwork is 10 times harder than above water, as I have also recently found out during a diving campaign in the Caribbean," he said. "Even simple measurements and collecting samples require a lot of care, let alone measurement of radon isotopes, which even onshore is tricky. This [study] really widens our knowledge on what happens in these environments and has shown that these vents can change seawater chemistry over large areas."

Credit: 
University of Texas at Austin

Keeping lead out of drinking water when switching disinfectants

image: Researchers in the lab of Daniel Giammar at the McKelvey School of Engineering at Washington University in St. Louis found that adding orthophosphate to a water supply before switching to chloramine from free chlorine can prevent lead contamination in some situations.

Image: 
Washington University in St. Louis

About 80 percent of water systems across the country use a disinfectant in drinking water that can lead to undesirable byproducts, including chloroform. There is an alternative, but many cities have been afraid to use it.

That's because in 2000, when the water authority in Washington, D.C., switched from free chlorine to chloramine, the nation watched as levels of lead in drinking water immediately shot up. They stayed up for four years while scientists determined the problem and implemented a solution.

In other cities that used free chlorine, Washington's experience had a chilling effect; many have put off switching disinfectants, fearing their own lead crisis.

They may soon be able to safely make the switch, thanks to research from the McKelvey School of Engineering at Washington University in St. Louis. Researchers found that adding orthophosphate to the water supply before switching to chloramine can prevent lead contamination in certain situations.

The results of the study were published in Environmental Science & Technology.

Because of its malleability and longevity, lead was the preferred material for service lines, the pipes that deliver water from a water main to homes, for the first half of the 20th century. As the pipes corrode in the presence of free chlorine, a certain type of lead, PbO2, can build up on their interior surfaces.

That buildup typically isn't a problem. In fact, so long as free chlorine is being used as a disinfectant, the PbO2 is actually a positive, according to Daniel Giammar, the Walter E. Browne Professor of Environmental Engineering at Washington University. This form of lead has a low solubility so it stays in a solid form on the pipes, instead of in the water.

PbO2 is not always so benign, however. "There is a potential risk because the solubility is only low if you keep using this type of chlorine," Giammar said.

Switching to a different disinfectant such as chloramine -- the mixture of chlorine and ammonia that Washington switched to in late 2000 -- causes the lead to become water soluble. The PbO2 then dissolves quickly and releases lead into the water system.

In Washington, researchers determined that adding a particular phosphate, called orthophosphate, to the system would create lead phosphate. This new material was also low solubility, so again, the lead material began to line the walls of the pipes instead of dissolving into drinking water.
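In simplified, textbook-level terms (a sketch of the chemistry at play, not the paper's full geochemical model), the sequence is a redox step followed by a precipitation step: free chlorine keeps the pipe scale oxidized as PbO2; once that oxidant is gone, background reductants in the water reduce the scale to far more soluble Pb(II); orthophosphate then locks the Pb(II) back into a sparingly soluble solid such as hydroxylpyromorphite.

    \mathrm{PbO_2(s) + 4H^+ + 2e^- \longrightarrow Pb^{2+} + 2H_2O}
    \mathrm{5Pb^{2+} + 3PO_4^{3-} + OH^- \longrightarrow Pb_5(PO_4)_3OH(s)}

The first reaction is why lead release spikes after a disinfectant switch, and the second is why dosing orthophosphate ahead of the switch can keep dissolved lead low while the new scale forms.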

"But forming the new, low-solubility coating takes time," Giammar said. In the case of Washington, "the lead concentrations took months to come down."

The solution had been identified and implemented, but residents continued to deal with lead in their water for months. "Our overarching question was, 'Would they have had a problem if they had implemented the solution before they made the chlorine switch? What if they added orthophosphate before, as a preventative measure, and then they switched the disinfectant? Would they have had a problem?'"

Recreating Washington water

To find out, the researchers had to recreate 2000 in their lab. "We had to recreate the crisis, then watch the crisis happen and watch our proposed solution in parallel," Giammar said. They sourced lead pipes, then recreated Washington water.

First author Yeunook Bae, a PhD student in Giammar's lab, looped the water through a six-pipe system with free chlorine for 66 weeks to get the lead scales to form. Once they approximated those found in Washington, the pipes were divided into a study group and a control group.

Researchers then added orthophosphate to the water in three of the pipe systems, the study group, for 14 weeks.

Then, as the Washington water authority had done, researchers switched from free chlorine to chloramine in all six systems, looping the water through the pipes for more than 30 weeks.

The lead on the pipes that did not receive orthophosphate became soluble, as it had in Washington, leading to high lead levels in the water. In the pipes to which orthophosphate was added, "levels went from really low to still quite low," Giammar said.

The experimental setup was designed to let researchers remove small sections of pipe without disturbing the system. That allowed them to see just how quickly the switch to chloramine affected the system.

The action level set by the EPA for lead in drinking water is 15 micrograms of lead per liter of water.

Within five days of the switch, lead levels in the control pipes -- those without orthophosphate -- rose from five to more than 100 micrograms/liter. During the subsequent 30 weeks, levels never fell below 80 micrograms/liter.

In water treated with orthophosphate, levels remained below 10 micrograms/liter for the duration of the experiment.

The Washington University team also learned something else: Because of the high levels of calcium in Washington's water, adding orthophosphate did not result in a pure lead phosphate, but a calcium lead phosphate.

This surprise points to the uniqueness of each situation. Those who oversee water systems and are concerned about switching disinfectants can benefit not only from this study, according to Giammar, but also from their own studies, tailored to their specific water and environmental conditions.

Nevertheless, this finding can help guide decisions in the roughly 80 percent of American water systems that are still using free chlorine, including Chicago and New York City.

"Our next big step," Giammar said, "is making sure places that are thinking about switching disinfectant know that the option is there to do it safely."

Credit: 
Washington University in St. Louis