Culture

Why experiences are better gifts for older children

What should we get for our kids this holiday? As children get older, giving them experiences instead of material things makes them happier, according to new research led by Lan Nguyen Chaplin, associate professor of marketing at the University of Illinois Chicago.

The research, published in the International Journal of Research in Marketing, compared the level of happiness children derive from material goods with the level of happiness they derive from experiences.

Across four studies with children and adolescents, Chaplin and her collaborators demonstrated that children ages 3-12 derive more happiness from material things than from experiences. However, older children derive more happiness from their experiences than from their possessions.

"What this means is, experiences are highly coveted by adolescents, not just expensive material things, like some might think," Chaplin says.

She goes on to explain, "Don't get me wrong. Young children do love experiences. Entire industries (e.g., theme parks such as Disneyland) are built around this premise. In fact, young children are ecstatic throughout the experience. However, for experiences to provide enduring happiness, children must be able to recall details of the event long after it is over."

Long after children have unwrapped their Legos and stuffed animals, a physical reminder remains to give them a "jolt" of happiness. Experiences, by contrast, can't be seen or touched once they are over, which makes them harder for young children to appreciate in hindsight. There's an easy and inexpensive fix, though, according to Chaplin.

"Take pictures or videos of family walks, playing in the snow, and birthday parties," she said. "Children are likely going to appreciate those experiences more if there is something to remind them of the event. Additionally, they'll be able to learn the social value of shared experiences."

Children will remember and appreciate not only the birthday gifts they received, but also the time spent with family and friends as they relive the experience through concrete reminders such as photos and videos.

Since memory is developed over time, it's likely that children, especially young ones, may not derive as much happiness from past experiences as from possessions. But with age, creating new memories and exploring new interests may be far more valuable than acquiring new possessions.

Credit: 
University of Illinois Chicago

Strengthening the climate change scenario framework

Over the past decade, the climate change research community developed a scenario framework that combines alternative futures of climate and society to facilitate integrated research and consistent assessment to inform policy. An international team of researchers assessed how well this framework is working and what challenges it faces.

The scenario framework contains a set of scenarios about how society may evolve in the future - so-called Shared Socioeconomic Pathways (SSPs) - and defines different levels of climate change known as Representative Concentration Pathways (RCPs). Combining both aspects of the framework allows researchers to develop integrated analyses of how future societies can avoid climate change and cope with its impacts.
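The combinatorial structure of the framework can be pictured as a simple matrix of societal pathways crossed with forcing levels. The sketch below is a minimal illustration, not from the paper itself: the SSP narrative names and RCP forcing levels are the standard ones from the literature, while the single plausibility rule is a simplified stand-in for the assessment's caution about unlikely pairings.

```python
# Minimal sketch of the SSP-RCP scenario matrix (illustrative simplification).
SSPS = {
    "SSP1": "Sustainability",
    "SSP2": "Middle of the Road",
    "SSP3": "Regional Rivalry",
    "SSP4": "Inequality",
    "SSP5": "Fossil-fueled Development",
}
RCPS = [2.6, 4.5, 6.0, 8.5]  # end-of-century radiative forcing, W/m^2


def scenario_matrix():
    """Enumerate every SSP-RCP combination as a scenario label."""
    return [f"{ssp}-{rcp}" for ssp in SSPS for rcp in RCPS]


def is_plausible(ssp, rcp):
    """Toy plausibility check echoing the assessment's caution: pairing a
    sustainable-development pathway with the highest forcing level is an
    unlikely combination. Real plausibility judgments are far richer."""
    return not (ssp == "SSP1" and rcp == 8.5)
```

A study can then pick, say, "SSP2-4.5" as its combined socioeconomic and climate assumption, which is what makes results comparable across the nearly 1,400 studies the assessment surveys.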

"The SSPs started with brief global narratives combined with projections of a few key variables like GDP, population, and urbanization. In the past few years, researchers extended the SSPs to individual countries, cities, and sectors. They've also added new indicators, such as governance, income distribution, access to basic services, and air pollution. The framework has been widely and successfully applied, and has shaped climate change research," explains Brian O'Neill, director of the Joint Global Change Research Institute (JGCRI) and main author of the assessment published in Nature Climate Change.

"The scenarios framework allows scientists to use similar scenarios across many different studies. Individual research projects don't need to develop their own scenario storylines and quantifications but can build on the work of others. Once many studies use comparable scenarios, it becomes more straightforward to assess the literature for insights that emerge across these studies. This means that large scientific assessments like the Intergovernmental Panel on Climate Change (IPCC) and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) can use the framework to structure their analyses and reports," adds IIASA researcher Bas van Ruijven, one of the authors of the study and co-chair of the International Committee on New Integrated Climate Change Assessment Scenarios (ICONICS).

In their paper, the authors synthesize the insights from the first ever Scenarios Forum organized by the University of Denver and ICONICS in Denver, CO in March 2019, and present the first in-depth literature analysis of the SSP-RCP scenarios framework. They specifically looked into how useful the framework has been for researchers, which topics the SSPs have been used for, and what can be done to improve the framework and make it more useful for future studies.

The results show that the framework has been used in almost 1,400 studies over the past five years, of which about half are related to climate impacts, one-third to avoiding climate change, and the remainder to extensions or methodological improvements. Encouragingly, the findings indicate that the scenarios framework enables research that had not been possible before, such as estimating the combined impacts of socioeconomic and climate changes on exposure to climate risks. The insights from this new study will help researchers to improve the framework and make it even more useful over the next five years. The study also revealed that some studies pair unlikely combinations of socioeconomic assumptions with the highest climate change outcomes (the so-called RCP8.5 pathway). The authors caution that researchers should be more careful about combining this high climate change scenario with a development pathway aimed at sustainable development, and about how they communicate such findings.

The authors identified seven recommendations for future work:

Improving the integration of societal and climate conditions

Improving applicability to regional and local scales

Improving relevance beyond the climate research community

Producing a broader range of reference scenarios that include impacts and policy

Capturing relevant perspectives and uncertainties

Keeping scenarios up to date

Improving the relevance of climate change scenario applications for users

"While the scenarios framework is mostly used by researchers, it has also been translated into accessible non-technical language for the public. It has had a significant impact on how we study and think about future climate change. By identifying the weaknesses of the existing framework, we improve the utility of the framework for future studies. Also, by combining socioeconomic and climate change scenarios with other societal objectives (e.g., biodiversity), we can paint a more concrete picture of what future societies might look like and systematically explore how to avoid climate change and how to cope with its impacts," notes IIASA Energy Program Director Keywan Riahi, who was also a study author.

Going forward, ICONICS and IIASA will support the research community in further improving the scenarios framework. To facilitate these developments, the two organizations will organize an online seminar and discussion series starting in January 2021. To foster and track progress, and revise goals as experience accumulates, the Scenarios Forum is intended to become a regular biennial event. To this end, IIASA and the ICONICS Steering Committee plan to host the second Scenarios Forum in Laxenburg, Austria in 2022.

Credit: 
International Institute for Applied Systems Analysis

New paper proposes framework for eliminating defects in psychiatric care

CLEVELAND -- A new paper from the Department of Psychiatry and the Population Health program at University Hospitals (UH) Cleveland Medical Center proposes a framework for eliminating defects in behavioral health treatment.

Entitled "Eliminating Defects in Behavioral Health Treatment," the paper was published online on Nov. 19 in the journal Psychiatric Services and was written by Patrick Runnels, M.D., M.B.A., Heather M. Wobbe, D.O., M.B.A., and Peter J. Pronovost, M.D., Ph.D.

The authors note that the large majority of defects result from system failures rather than from individual psychiatrists, and they propose that psychiatrists function as "systems engineers" to help eliminate these defects in healthcare organizations.

"The job of building and transforming behavioral health at the system level will require psychiatrists to adopt a new set of skills and a willingness to think differently about their identity as clinicians," said Dr. Runnels, Chief Medical Officer, Population Health - Behavioral Health at UH and Associate Professor of Psychiatry at Case Western Reserve University School of Medicine.

"Beyond assessment and treatment skills, psychiatrists who act as systems engineers must be experts in quality improvement, implementation science, resonant leadership, and design-based thinking," he said.

Dr. Pronovost, UH Chief Quality and Clinical Transformation Officer, said that they define "defect" as "anything clinically, operationally, or experientially that a provider would not want to happen, including in diagnosing, initiating treatment, adjusting treatment, nurturing therapeutic alliances at the individual provider and system level, and avoiding preventable service utilization."

"The ultimate goal is to provide 'defect-free care,' which we believe will empower clinicians to engage in quality improvement initiatives at whatever level is most accessible to them," said Dr. Pronovost.

"Despite our best intentions and efforts, defects happen every day in every field of medicine," said Dr. Pronovost, who is also Clinical Professor of Anesthesiology and Critical Care Medicine at CWRU School of Medicine. "Despite clear evidence-based screening tools and criteria for diagnosis, patients are rarely screened appropriately for common behavioral health issues, with barely half of identified individuals having received any care during the prior year and less than 15 percent having received appropriate evidence-based care. Even when prescribed a medication, only 23 percent of patients with depression received evidence-based psychopharmacology and appropriate symptom tracking."

The authors trace defects back to incentives that are poorly aligned with goals, within and across health care systems, often leading to inefficient, suboptimal behavioral health care delivery. To a large extent, this deficit occurs because clinicians and the systems in which they practice are incentivized almost entirely by volume and throughput rather than by quality and outcomes.

They write: "Traditional payment models, information systems, and treatment paradigms fail to incentivize keeping people healthy, managing chronic conditions, or coordinating care across the continuum of services. This is not an indictment of clinicians, who are clearly motivated to improve the lives of the patients they serve. Nor is it an indication that high-quality work is not happening. Pockets of excellence are all around. A discussion with almost any psychiatrist will yield multiple stories highlighting the positive impact they have had on those they serve, stories that motivate them to continue their work as healers. Yet, despite clinicians' best efforts, the constraints imposed by misaligned incentives negatively affect system design and lead to widespread defects in care."

"Our observation and experience are that we providers have become so accustomed to working in low-reliability environments that we accept defects in the system as the norm. Indeed, most defects are invisible or are accepted as the cost of caring for patients with complex issues," said Dr. Runnels.

In the paper, the authors provide scenarios to illustrate their points. In one such example, they describe a patient with major depressive disorder prescribed a starting dose of 20 mg of fluoxetine and scheduled for a follow-up appointment in four weeks. The patient picked up the medication but did not take the first pill for two weeks, then took two pills and stopped because of side effects. The patient called the office to report the trouble, but no one answered, and the phone call was never returned because no one checked the voicemail (no one had been assigned to check). When the patient arrived in the office four weeks later, the symptoms had not improved, and the patient ended up paying a second copayment of $50 to receive an alternative treatment. The patient described feeling angry with the system for not responding and for necessitating more time and money to receive the alternative treatment. The patient canceled the next appointment and did not return.

"If we instead made sure someone called patients one or two weeks after starting treatment and simply asked whether they had picked up the medication, tried it, and were still taking it, might we prevent this kind of outcome?" the authors ask.

They conclude with the belief that their framework to visualize and systematically eliminate defects in behavioral health care ultimately offers a hopeful approach to improving care--one that can drive large-scale success more effectively than trying to pick away at pieces of the system independently or berating clinicians about performance on individual quality metrics.

"This new narrative, which builds on much of the wisdom accumulated by our field over decades, can succeed only if clinicians see their core responsibility as focusing on eliminating defects and delivering the care that individuals with mental illness and addictions deserve, that clinicians are hungry for, and that payers increasingly demand," they write.

Credit: 
University Hospitals Cleveland Medical Center

Young Brazilians are increasingly keen on conservation- and biodiversity-related topics

Young Brazilians are increasingly interested in biodiversity, conservation of the Amazon, and science as they begin high school, with students in the North region more interested in learning about these subjects, and about local fauna and flora, than their peers in the Southeast.

These are some of the findings of a study reported in an article in Science Advances. Part of a Thematic Project supported by FAPESP (São Paulo Research Foundation), the study analyzed data from five PhD theses as well as an international survey called The Relevance of Science Education, or ROSE. The authors argue for the need to include more learning about local plants and animals in Brazil's National School Curriculum. The project was conducted under the aegis of the FAPESP Research Program on Biodiversity Characterization, Conservation, Restoration and Sustainable Use (BIOTA-FAPESP) and involved five institutions in the state of São Paulo: the University of São Paulo (USP), the Federal University of São Paulo (UNIFESP), the Federal University of the ABC (UFABC), the University of São Caetano do Sul (USCS), and Butantan Institute.

Fifteen-year-old school students have answered the ROSE questionnaire in more than 40 countries since 2004, detailing their interest in topics relating to conservation, science, technology, and biodiversity. "The Brazilian students were very interested in studying native plants and animals in more depth - far more so than youngsters in the UK, Norway or Sweden, for example. We conducted three runs of the survey in Brazil between 2007 and 2014, identifying several trends, one of which is that contrary to what may be considered common sense, young people's interest in these topics is increasing," said Nélio Bizzo, principal investigator for the project. Bizzo is Professor of Science Education at USP's School of Education and UNIFESP's Institute of Environmental, Chemical and Pharmaceutical Sciences.

The students were invited to express their views freely on topics relating to science, technology and biodiversity. The survey was conducted as a pilot in two cities in 2007, followed by nationwide samples in 2010 and 2014. In this latest run, 788 students (43.7% of the sample) in public and private high schools across the country said they were interested in learning about local wildlife, while 1,015 (56.3%) said they were not.

The comparison between regions showed that 50.4% of the students who lived in the North, which includes the Amazon Rainforest, were interested in learning more about local biodiversity, while the proportion in the Southeast was 33.1%.

The Northeast had the second-largest proportion of respondents who wanted to know more about biodiversity in their region (46.9%). "This is a surprising result. One would expect interest in education to be higher in places with a higher HDI [Human Development Index] and more cultural or educational attractions such as museums, for example, but the survey showed exactly the opposite," Bizzo told Agência FAPESP. "The reasons for this inequality of interest will have to be investigated further. We want to understand why some are very interested and find ways of stimulating the others. The lowest HDIs in Brazil are in the North and Northeast. In the North, we suggest, indigenous culture may be a key reason for this interest in learning about forest conservation and biodiversity."

The researchers note in the article that Amazon biodiversity and the ancestral knowledge of the indigenous inhabitants of the region are all but absent from schools and textbooks, which tend to favor large exotic animals such as polar bears, elephants, giraffes, or lions over local diversity such as pink river dolphins, coatis, sloths, maned wolves, mosquitoes and ocelots.

"The need for Brazilian students to know more about the Amazon is evident, but schools and textbooks tell them about an impenetrable jungle only recently occupied by Indians," Bizzo said. "That's not really the case. As we know, the Amazon was very far from being empty of people when the Portuguese came over."

It is a well-established fact that in pre-Columbian times the Amazon was home to a large, complex population and that these inhabitants changed the forest. "Local communities intervened in tree distribution and domesticated more than 80 plant species," he said. "This has been demonstrated by a very interesting study combining archeology and linguistics. Yet virtually nothing is said about all this traditional knowledge, even though students are eager to learn about it at school."

People know cassava has been domesticated, he added, but those more than 80 species also include the sweet potato, the pineapple, the papaya, and many other kinds of fruit and vegetable widely enjoyed around the world. "Even the active ingredient of chloroquine, the antimalarial drug that's often in the news these days, comes from the bark of a tree discovered by Amerindians," he said.

Culture also explains this strong interest to some extent, according to the researchers. Mythological creatures such as the river dolphin, curupira and mapinguari inhabit the imagination of local and indigenous populations in the Amazon, reflecting their proximity to nature. These fabulous beings and their sagas are part of the rural and urban culture of the Amazon. They contribute to the dissemination of knowledge about biodiversity, and because of them, young people living in the North, even in urban areas, have a different relationship to nature than those in the cities and countryside of the South and Southeast.

"It's very hard to find anything similar in Brazil's South region, even in western Santa Catarina, where there's a large indigenous population, mainly consisting of Guarani and Kaingang," Bizzo said, adding that a different approach to local biodiversity and conservation urgently needs to be offered to school students and the general public, especially in the Amazon.

The 2014 questionnaire included an item illustrating the disconnect between student interest and environmental policy. It asked respondents what they thought about compensation or reparation payments by rich countries to offset environmental problems. "Our objective data shows that most respondents in Brazil didn't support demands that rich countries pay reparations for environmental problems," Bizzo said. "Of course, these youngsters couldn't have foreseen that five years after this question was asked our environment minister would advocate making rich countries liable at the COP-25 UN climate conference in Madrid in December 2019."

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

From the woodworking shop to the operating room: New technique uses mortise and tenon joints to repair unstable shoulders

November 24, 2020 - Orthopaedic surgery techniques for treatment of recurrent shoulder instability are effective, but prone to problems with nonunion of bone grafts held in place by screws alone. A new technique - borrowing a design used for centuries in Chinese architecture and woodworking - provides an effective approach to shoulder stabilization, suggests a study in The Journal of Bone & Joint Surgery. The journal is published in the Lippincott portfolio in partnership with Wolters Kluwer.

Inspired by the ancient woodworking technique of mortise-tenon joints, the "arthroscopic inlay Bristow procedure" provides a high success rate in patients with recurrent shoulder instability, according to the report by Guoqing Cui, MD, of Peking University Third Hospital and colleagues. Their technique "optimized the accuracy of graft positioning and resulted in a high rate of graft healing, excellent functional outcomes, and a high return to sports," the researchers write.

Initial experience shows good outcomes with 'inlay Bristow' technique
Recurrent anterior shoulder instability is a common problem among athletes, particularly those involved in collision or overhead sports. If shoulder dislocation becomes a recurrent problem, a surgical procedure may be necessary to stabilize the joint.

One popular surgical approach is the Bristow-Latarjet procedure, commonly performed arthroscopically. With this technique, a small "hook" of bone (i.e., the coracoid) is taken from the scapula and fixed with a screw to the glenoid bone, providing long-term anterior shoulder stability. This procedure has a good success rate in terms of preventing future shoulder dislocations.

However, problems can occur when the graft doesn't heal properly, resulting in nonunion. This can lead to bone deterioration and displacement or breakage of the screw, potentially resulting in recurrent instability.

Thinking about how to make the shoulder repair more stable, Dr. Cui and colleagues were inspired by an ancient technique from Chinese timber building: the mortise-tenon joint. Known to carpenters and woodworkers everywhere, this technique uses a projecting piece of wood (the tenon) that precisely fits into a gap (the mortise) in another piece of wood to join the two pieces together.

In the researchers' new "inlay Bristow" procedure, the coracoid graft is trimmed to form a tenon, which is fixed into a mortise created in the glenoid bone. Dr. Cui and colleagues report the results of 51 patients who underwent the arthroscopic inlay Bristow procedure for recurrent anterior shoulder instability.

One year after the procedure, the coracoid bone graft had healed in 96 percent of patients, with only two patients experiencing a nonunion of the bone graft. Follow-up CT scans showed good graft positioning with little or no sign of bone degeneration.

Patients also exhibited improved function with the inlay Bristow procedure, with significant improvements in shoulder stability (Rowe score) and overall joint function (American Shoulder and Elbow Surgeons score) at a minimum follow-up of three years. Eighty-seven percent of patients were able to return to sport.

Inspired by traditional Chinese architecture, the arthroscopic inlay Bristow procedure provides a promising reconstructive approach in patients with recurrent shoulder instability. The inherent strength of the mortise-tenon joint increases the contact area and proper positioning of the bone graft, with high rates of bone healing and union and good clinical outcomes. Dr. Cui and colleagues note that further studies will be needed to confirm the reliability of the procedure, and compare the outcomes of the procedure with other treatments for recurrent shoulder instability.

The study, "Cuistow: Chinese Unique Inlay Bristow: A Novel Arthroscopic Surgical Procedure for Treatment of Recurrent Anterior Shoulder Instability with a Minimum 3-Year Follow-Up," is available at DOI: 10.2106/JBJS.20.00382.

Credit: 
Wolters Kluwer Health

Research news tip sheet: Story ideas from Johns Hopkins Medicine


Within the past two decades, Johns Hopkins Medicine researchers and others have shown that ketogenic diet therapies -- high-fat, low-carbohydrate and adequate protein diets that metabolize fat into chemicals called ketones that protect the brain and foster healthy neuron growth -- are safe and effective in treating both children and adults with drug-resistant forms of epilepsy. However, while established guidelines are available for using ketogenic diet therapy to reduce seizures in children, there aren't formal recommendations for adults.

Now, Mackenzie Cervenka, M.D., director of the Adult Epilepsy Diet Center and associate professor of neurology at the Johns Hopkins University School of Medicine, and her international collaborators have published the first set of recommendations based on current clinical practices and scientific evidence for managing ketogenic diet therapies in adult patients.

The new guidelines appear in the Oct. 30, 2020, issue of the journal Neurology: Clinical Practice.

"We wanted to guide medical professionals on how to manage adults who are using a ketogenic diet therapy," says Cervenka. "We focused on epilepsy, but we also touch on the diet's use in patients with other neurological disorders."

Emerging evidence supports using ketogenic diet therapies in other adult neurologic disorders and medical conditions, such as migraines, Parkinson's disease, dementia, brain tumors and multiple sclerosis.

"Ketogenic diets are called therapies for a reason," Cervenka says. "They should only be employed with the support of medical professionals who are familiar with them as a clinical tool. The concern is when people follow them unsupervised. Our new recommendations are designed to make it easier for health care providers to give proper guidance."

Ketogenic diets have been found effective in children with specific seizure types and epilepsy syndromes, which are often life-long conditions and require transition to adult providers for ongoing care.

"This goes back to Hippocrates, who wrote about fasting to suppress seizures," Cervenka says. "There are many studies and articles about the benefits of fasting; however, it's not a sustainable treatment. Ketogenic diet therapies create a metabolic state where you're breaking down fats like fasting, but with adequate nutrition."

To develop their recommendations, the researchers surveyed medical professionals at 20 institutions around the world on their results in treating an estimated 2,189 adults with ketogenic diet therapies for epilepsy and other neurologic diseases. Cervenka and her colleagues found that the therapy should be tailored to fit the needs of the individual patient, taking into consideration his or her: (1) physical and mental characteristics, (2) underlying medical conditions, (3) food preferences, (4) type and amount of support from family and others, (5) level of self-sufficiency, (6) feeding habits and (7) ease of following the diet.

"Most of the differences between the child and adult recommendations have to do with compliance," Cervenka says. "Often, it's more of a challenge for adults than for children."

The researchers advise medical professionals to provide their patients with recipe ideas, individualized training on the ketogenic diet lifestyle from a dietitian or nutritionist, and guidance for meal planning and preparation before initiating the therapy.

Proper preparation and training, says Cervenka, provide the greatest likelihood of success, as patients often report difficulties coping with carbohydrate restriction. She adds that patients on a ketogenic diet therapy should take multivitamin and mineral supplements, along with drinking plenty of fluids.

STUDY LOOKS AT BARRIERS KEEPING CHILDREN FROM ATTENDING DIABETES CAMPS, SUGGESTS REMEDIES

Media Contact: Michael E. Newman, mnewma25@jhmi.edu

According to the Diabetes Education and Camping Association, over 20,000 North American children with type 1 diabetes attend diabetes camps and related programs each year. These activities, says the organization, enable children to improve their management of the disease, provide motivation and support by being with others who share the condition, and foster acceptance and understanding while having fun.

Although the benefits of diabetes camp programs are well established, minority youth are underrepresented in camp attendance. In a recent study -- believed to be the first of its kind -- Johns Hopkins Medicine researchers tried to define why this occurs, identify barriers to making it happen and find potential disparities in those barriers.

The findings were reported online Oct. 8, 2020, in the Journal of Pediatric Endocrinology and Metabolism.

In the study, 39 children ages 5 to 15 with type 1 diabetes and their primary caregivers were surveyed. The majority of the children were between ages 10 and 15 (59%) and male (54%), and they had lived with diabetes for an average of 2.9 years. Ethnically and racially, the participants were 67% white, 28% Black, 2.5% Hispanic or Latino, and 2.5% other groups. Only 18% of the children had previously attended a diabetes camp, although most (79%) had heard about them from their diabetes medical team. Less than half (46%) of the children and their caregivers were aware that financial assistance was available to help pay for attendance.

The most frequently reported barriers to attending diabetes camp -- for all racial/ethnic groups, socioeconomic levels, and for children who had or had not previously attended camp -- were "There are no camps close to me" (59%), "Camp is too much money" (46%) and "I don't want my child to attend sleep-away camp" (44%). Families of youths ages 10 to 15 were more likely to report "My child doesn't want to attend sleep-away camp" (40%) than "I don't want my child to attend sleep-away camp" (33%).

The average number of barriers reported per family was 2.3, with no significant difference in majority compared to minority children.

Most families were able to identify benefits of diabetes camps, including meeting other friends with diabetes (100%), improved independence in diabetes control (97%), improved diabetes control (94%) and learning about diabetes (94%).

Participants stated that diabetes clinics, online and social media groups, and the Juvenile Diabetes Research Foundation were the sources they most often sought for support and information about diabetes. However, minority families reported engaging with fewer sources and networks compared with white families.

"Our findings indicate that if barriers are mitigated, parents and caregivers would be more likely to send their children to diabetes camps," says Risa Wolf, M.D., assistant professor of pediatrics at the Johns Hopkins University School of Medicine and senior author of the study.

The researchers suggest a number of strategies to improve access to diabetes camps, especially for minority and underserved populations. These include:

Operating more day camps in urban settings to provide easier access for young children -- and perhaps, serve as a springboard to overnight camps.

Increasing diversity of camp staff to appeal to more minority youth.

Including developmentally appropriate programs -- such as counselor-in-training opportunities -- to engage more adolescents.

Increasing awareness of financial assistance and scholarships for camps, thereby encouraging families to seek more support.

Increasing availability of camp information on social support networks and in diabetes clinics since most study participants cited them as primary knowledge resources.

"Although the sample size was small, our study findings clearly suggest that further research is needed -- with more Black and Hispanic/Latino families participating -- to gain greater insight into the barriers to attending diabetes camps, and then develop ways to overcome them," says study lead author Gina Ferrari, M.S.N., M.P.H., F.N.P.-C, a recent graduate of the nurse practitioner program at the Johns Hopkins University School of Nursing.

POOR ENERGY METABOLISM MAY DRIVE HEART FAILURE AFTER ATTACK

Media Contact: Vanessa McMains, Ph.D., vmcmain1@jhmi.edu

After a heart attack, a patient can go on to develop cardiac failure, but not always. Previous research looking at the molecular mechanism behind this process in heart muscle cells showed that the heart enzyme calmodulin kinase II (CaMKII) relocates to the power factories of the cell -- the mitochondria -- shortly after a heart attack to drive the organ's shutdown.

Now, Johns Hopkins Medicine researchers have shown in mice that CaMKII drains energy molecules from the mitochondria of heart muscle cells, causing the heart to work harder because it doesn't pump as efficiently. They also revealed that it was possible to direct an enzyme to move energy out of the mitochondria, enabling it to fuel the heart muscle cells. In turn, they say, this prevents the physical signs of failure in mouse hearts.

The findings, published on Sept. 4, 2020, in Nature Communications, suggest that by targeting energy transfer in the heart's muscle cells, it may be possible to prevent the progression to cardiac failure following a heart attack.

"It's really not clear clinically why certain people who have heart disease will progress to heart failure while others won't," says Elizabeth "Betsy" Luczak, Ph.D., assistant professor of medicine at the Johns Hopkins University School of Medicine, and lead author of the study. "Our findings suggest this could be a clue as to the ways to develop a therapy for people with dilated heart failure or even arrhythmias [irregular heartbeats] by enhancing transfer of ATP, the energy molecules in the cell."

Following a heart attack, one of the classic signs of heart failure is when the left ventricle -- the chamber of the heart responsible for pumping oxygenated blood throughout the body -- increases in size from working too hard. This is because the tissue of the ventricle thins and doesn't pump as well. In their study using a mouse model of a heart attack, Luczak and her colleagues turned on the enzyme creatine kinase to high levels in the heart muscle, which prevented CaMKII from draining ATP and resulted in the left ventricle no longer dilating and thinning.

According to the U.S. Centers for Disease Control and Prevention, heart failure is frequently fatal, with more than half of patients dying within five years of developing the condition.

GLOBAL STUDY SHOWS FOUR-MONTH TREATMENT FOR TUBERCULOSIS AS EFFECTIVE AS SIX-MONTH REGIMEN

Media Contact: Michael E. Newman, mnewma25@jhmi.edu

With everyone focused on the ongoing COVID-19 pandemic, it's easy to overlook the fact that there are other health threats still plaguing the world. Number one on the list is tuberculosis (TB), the disease that the World Health Organization calls the leading cause of death worldwide -- some 1.5 million fatalities annually -- from a single infectious agent (the microbe Mycobacterium tuberculosis). Now, the recently released results of an international clinical trial have shown that a four-month daily treatment plan using a high-dose, or "optimized," course of the drugs rifapentine and moxifloxacin is as safe and effective as the existing standard six-month daily therapy.

The new regimen is the first successful short-course treatment option for drug-susceptible TB disease in 40 years. The findings from the clinical trial that led to its development -- with significant involvement from Johns Hopkins Medicine researchers -- were announced Oct. 21, 2020, by the research team funded by the U.S. Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH).

For the past 40 years, physicians have put their patients with TB on a six-month treatment regimen, which includes daily doses of the drugs rifampin, isoniazid, pyrazinamide and ethambutol for eight weeks, followed by 18 weeks of daily therapy with just rifampin and isoniazid. Looking for a way to shorten the regimen without sacrificing safety or efficacy, the CDC's Tuberculosis Trials Consortium and the AIDS Clinical Trial Group (funded by the NIH's National Institute of Allergy and Infectious Diseases) conducted one of the largest TB clinical trials in history. It involved more than 2,500 participants, ages 12 and older with newly diagnosed TB, who were enrolled at 34 clinical sites in 13 countries. The trial included 214 people with HIV infection.

Participants in the trial, known as Study 31/A5349, were given standard treatment or one of two four-month drug regimens for their TB -- the first featuring high-dose rifapentine, instead of rifampin, and the second using high-dose rifapentine and moxifloxacin.

"We found that four months of treatment -- with rifapentine and moxifloxacin -- cured TB as well as the six-month standard treatment, had a comparable level of safety, and was well tolerated by patients," says Richard Chaisson, M.D., director of the Johns Hopkins Center for Tuberculosis Research, professor of medicine at the Johns Hopkins University School of Medicine and one of four co-chairs of Study 31/A5349.

However, the regimen of high-dose rifapentine without moxifloxacin was found to be inadequate.

Along with their leadership role in the trial, the Johns Hopkins Medicine researchers followed 100 participants at a Johns Hopkins Medicine-funded center in South Africa and previously had conducted a number of animal studies that helped lay the groundwork for the latest investigation.

"This will certainly change how TB is treated in the United States [which still documents just under 10,000 cases annually] and likely, the world," Chaisson says.

MOLECULAR 'SWITCH' CONTROLS ABILITY TO REPAIR HEARING LOSS IN MICE

Media Contact: Rachel Butch, rbutch1@jhmi.edu

In a study in mice, Johns Hopkins Medicine researchers have found a molecular "switch" that turns off the animal's ability to repair damaged cells in the inner ear. The findings shed light on regenerative abilities that are present in many species of birds and fish, but get turned off in mammals, including humans.

The study was published Sept. 8, 2020, in The Proceedings of the National Academy of Sciences.

"We might have for the first time identified something that explains why humans lost the ability to repair cells related to hearing loss," says Angelika Doetzlhofer, Ph.D., associate professor of neuroscience at the Johns Hopkins University School of Medicine, and co-author of the study.

More than 37 million adults in the United States report hearing loss. In the majority of cases, it results from damage to sound receptor cells deep within the human ear known as hair cells. These cells line the spiral-shaped walls of the cochlea, a bony structure in the inner ear, and capture sound waves reverberating in the area. Then, they convert the vibrations into electrical impulses that are carried to the brain by nerves.

Hair cells are kept healthy by a layer of cells called supporting cells. In birds and fish, supporting cells can function as progenitors to replace lost hair cells. Recent studies of mammals have shown that supporting cells have some regenerative potential early in life, before the animals start hearing. For example, supporting cells in mouse pups are able to create new hair cells at birth. However, the ability to repair or replace them stops within a week. At that point, any damage done to the hair cells is irreversible.

Based on these data from previous mouse studies, Doetzlhofer and study co-author Xiaojun Li, Ph.D., a postdoctoral fellow in her laboratory, looked to the rodents as a way to better understand what controls the decline in regenerative ability in mammals.

The researchers achieved this by following the levels of a protein called LIN28B and a microRNA called let-7 in mice. LIN28B and let-7 are what scientists call "mutual antagonists," meaning each suppresses the other's function within the cell.

They found that when let-7 levels ramp up, LIN28B levels drop at the same time, turning off the mouse's regenerative ability.

The two researchers found that without LIN28B, hair cell regeneration does not occur. They tested this by using cochlear tissue and cells from genetically engineered mice in which the protein and its antagonistic RNA could be turned on and off as needed.

The researchers say that their findings suggest LIN28B is the deciding factor as to whether or not the hair cells retain their regenerative abilities. LIN28B, they believe, promotes the regenerative process by turning on progenitor-specific genes in supporting cells, which then reprograms supporting cells into hair cell progenitor-like bodies.

"The most exciting part was seeing the dramatic effects of manipulating these factors. We began the experiment hoping to get any type of response, and to see a restoration of regeneration capability was really thrilling," says Doetzlhofer.

The researchers say that a better understanding of the biology behind hair cell regeneration may lead to the development of future treatments for hearing loss.

This research was supported by the National Institute on Deafness and Other Communication Disorders and the David M. Rubenstein Fund for Hearing Research.

Credit: 
Johns Hopkins Medicine

Young people's anxiety levels doubled during first COVID-19 lockdown, says study

The number of young people with anxiety doubled from 13 per cent to 24 per cent during the early stages of the COVID-19 pandemic and the first lockdown, according to new research from the University of Bristol. The study, using Bristol's Children of the 90s questionnaire data, showed that young people (aged 27-29) reported higher levels of anxiety during the early phases of the pandemic and the first national lockdown, and that their anxiety levels were higher than their parents'.

Researchers also found that anxiety levels continued to remain high even when lockdown restrictions were eased in June and thus a similar situation may be expected this winter. The findings also suggest that this could be worse for individuals with a history of mental health problems, women and those who had experienced pre-pandemic financial problems. These findings have been highlighted by Public Health England to help influence policy and the government's understanding of the impact of COVID-19 on mental health.

There was no evidence that depression was higher overall, however, specific groups of individuals were more likely to experience greater levels of depression and anxiety during the pandemic.

Bristol's Children of the 90s health study recruited 14,500 pregnant mothers in 1991-2 and has collected almost three decades of detailed health and lifestyle data about the mothers and their babies, who will soon be turning 30. For this study, researchers compared participants' previous years of data with findings from two 2020 COVID-19 questionnaires to understand the impact of the pandemic on mental health.

Who is at risk of poorer mental health during COVID-19?

Certain groups within the study were at greater risk of increased anxiety and/or depression during COVID-19, even after accounting for their previous history of depression and anxiety. These were women, those with pre-existing mental and physical health conditions, those living alone during the pandemic, those self-isolating as a result of COVID-19 and those who had experienced recent financial problems. Interestingly, some factors, such as living alone, were only linked to greater depression, and others, such as being a parent, were only linked to anxiety. Researchers did not find evidence of an elevated risk of anxiety in key workers or healthcare workers. These findings were observed in both the younger and older generations and replicated in an additional group of over 4,000 Scottish individuals - implying these effects may not just be specific to individuals in the South-West.

Researchers are now looking at why some groups of people may have poorer mental health during the pandemic and the role of worries and health behaviours such as sleep and exercise levels. A further questionnaire examining the impact of England's second lockdown is planned for December. Alongside the data from Children of the 90s, the research also looked at data from another longitudinal study - Generation Scotland.

Co-lead researcher Dr Alex Kwong, Senior Research Associate in Psychiatric Genetic Epidemiology at University of Bristol, commented: "The highly detailed Children of the 90s questionnaire data reveals a worrying rise in young people's anxiety - this looks like it is due to the pandemic itself and potentially the societal and economic fallout caused by the lockdown measures used to control the spread of the virus. Evidence suggests this is not going to be a short-term issue and that mental health support and interventions are urgently required to reduce some of the mental health inequalities that have emerged."

COVID-19 in Children of the 90s

The Children of the 90s' first two COVID-19 questionnaires uncovered details about participants' COVID-19 symptoms, plus their work, finances, lifestyle and diet - helping to understand more about their own, their parents' and their children's physical and mental health.

As might be expected in a study heavily based in the south west of England, only a small percentage had tested positive for COVID-19, but a larger and more representative number of participants have experienced at least one of the primary COVID-19 symptoms - loss of smell/taste (13 per cent), new persistent cough (21 per cent) or fever (23 per cent). A subsequent antibody testing study by Children of the 90s found that 4.3 per cent of those who took part tested positive for antibodies, suggesting they had previously had an infection with COVID-19.

Professor Nic Timpson, principal investigator of Children of the 90s comments: "Longitudinal health studies like Children of the 90s are so important as they repeatedly measure an individual's mental health (as well as lifestyle and general health) over different time points throughout their life. With this study, it enabled us to compare pre- and post-pandemic data to fully understand the impact of COVID-19 on mental health. Such detailed, contextualised health data is unique, providing valuable evidence for policymakers and Public Health England."

Co-lead researcher Dr Rebecca Pearson, Senior Lecturer in Psychiatric Epidemiology at the University of Bristol, said: "The findings suggest that there is a need to protect mental health at this time (especially managing anxiety) and support mental health services. It is especially important to learn lessons from the first lockdown now that we are in a second lockdown. The findings also provide evidence for supporting specific groups at greater mental health risk, such as those living alone. Support bubbles for single adults and single parents (which have been allowed from the outset of this lockdown) could be beneficial to mental health, but we need to understand the role of social isolation better."

Professor David Porteous, Principal Investigator for Generation Scotland, said: "This study shows beyond doubt how COVID-19 is affecting mental health, particularly in younger people. The strength of the study is really three-fold. First, both Children of the 90s and Generation Scotland had mental health measures from before, and repeat measures during, the pandemic. Second, each cohort's findings echo the other. Third, the findings are not a quirk of locality - young adults in both Scotland and around Bristol were similarly affected. The study shows that indirect effects of COVID-19 are profound and widespread and felt most acutely by young adults. They as much as any group will bear the long-term brunt of the COVID experience and post-pandemic recovery."

Credit: 
University of Bristol

Tracking COVID-19 trends in hard-hit states

image: Researchers at Louisiana State University have applied computational models to investigate infection rates in relation to social distancing measures.

Image: 
Nicholas Walker, LSU

Currently, there are over 10 million confirmed cases and more than 240,000 deaths attributed to COVID-19 in the U.S. Researchers at Louisiana State University have applied computational models to investigate infection rates in relation to social distancing measures. Their paper, "Effect of mitigation measures on the spreading of COVID-19 in hard-hit states in the U.S.," was recently published in PLOS ONE.

This research compiles the data for each state in the U.S. and calculates the change in the infection rate before and after social distancing measures were imposed last spring.

"We investigate the change in the infection rate due to mitigation efforts and project death and infection counts through September 2020 for some of the most heavily impacted states: New York, New Jersey, Michigan, Massachusetts, Illinois, and Louisiana," said co-author Juana Moreno, LSU Department of Physics & Astronomy and the Center for Computation & Technology associate professor.

Many of the current predictive models of the development of COVID-19, especially after mitigation efforts, partially rely on extrapolations from data collected in other countries. However, understanding the effects of mitigation efforts based on local data is important, since data extrapolated from other areas may not be reliable. As states and countries have implemented different degrees of social distancing measures, the effects on controlling the pandemic simply cannot be translated between regions.

"The interactions among people are complicated and often difficult to model at the individual level. The challenge is similar to the study of a large ensemble of particles where monitoring the motion of each particle is virtually impossible. Mean field approximation is often used to study large populations by coarse graining the individual, or microscopic, details to the population, or macroscopic, averaged quantity," said lead author Ka-Ming Tam, LSU Department of Physics & Astronomy research assistant professor.

With the current mitigation efforts, five of the six states -- all except Illinois -- have reduced their basic reproduction number to a value less than one, halting the exponential growth of the pandemic.

"The infection rate is an important indicator of the evolution of an epidemic. If it is larger than one, the number of infections is increasing exponentially," said Nicholas Walker, LSU Department of Physics & Astronomy alumnus and current postdoctoral fellow. "We found that the infection rate is substantially suppressed by social distancing measures. Almost all states saw the infection rate drop below one by the end of April."
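The threshold Walker describes can be illustrated with a toy calculation (a hypothetical discrete-generation sketch for illustration only, not the authors' mean-field model): each generation of infections multiplies the previous count by the rate R, so a value above one compounds into exponential growth, while a value below one lets the outbreak die out.

```python
# Toy discrete-generation model: each generation, every active case
# produces R new cases on average. R > 1 grows exponentially; R < 1 decays.

def project_cases(initial_cases: float, R: float, generations: int) -> list:
    """Project case counts over successive infection generations."""
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * R)
    return cases

growing = project_cases(100, R=1.5, generations=10)
shrinking = project_cases(100, R=0.8, generations=10)

print(round(growing[-1]))   # ~5767 cases: exponential growth
print(round(shrinking[-1])) # ~11 cases: outbreak dying out
```

Ten generations at R = 1.5 turn 100 cases into several thousand, while R = 0.8 shrinks the same outbreak to a handful of cases, which is why pushing the rate below one is the key policy target.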

The researchers' analyses clearly show a drop in infection rates following public policy measures such as social distancing and stay-at-home orders.

"We are currently working on the effects of reopening and how the reopening policies in different states have affected the number of fatalities," Moreno said. "Our current analysis suggests that the face mask mandate is the most important policy for lowering the death count."

Credit: 
Louisiana State University

Tackling metabolic complexity

image: CRISPR interference uses a deactivated Cas9 protein (dCas9) and a single guide RNA (sgRNA) and enables specific knockdowns of enzymes in the metabolic network of bacteria. Deep sequencing can track growth of 7177 CRISPRi in parallel and informs about robustness of each CRISPRi strain.

Image: 
Max Planck Institute for terrestrial Microbiology/Link

Metabolic robustness, the ability of a metabolic system to buffer changes in its environment, is not always a welcome feature for microbiologists: it interferes with metabolic engineering and can prevent antibiotics from killing bacteria. It is therefore important to understand the mechanisms that enable metabolic robustness. A massively parallel CRISPRi screen demonstrated that E. coli metabolism is very robust against knockdowns of enzymes, and multi-omics data revealed the mechanisms behind it. In the future, the researchers want to apply this knowledge to build better models of metabolism, enabling the rational design of industrial microbes.

In their natural habitat, bacteria like E. coli are confronted with constant changes in the composition of nutrients. But under laboratory conditions, they can also be real specialists, growing for instance on a single carbon source like glucose. To do so, their metabolic network must synthesize all cellular building blocks from scratch. This task requires that hundreds of enzyme-catalyzed reactions in the metabolic network work at the right pace, and that no reaction accidentally falls below a critical threshold. Otherwise, a single bottleneck in the network may have widespread consequences and eventually stop cellular growth.

To understand how E. coli accomplishes this task, researchers led by Dr. Hannes Link from the Max Planck Institute for terrestrial Microbiology applied CRISPR interference (CRISPRi) technology. By inducing knockdowns of each protein in the metabolic network of E. coli, they created a CRISPRi library with 7,177 strains. Deep sequencing of the library during a pooled competition assay allowed the researchers to track the fitness of each CRISPRi strain for 14 hours. The results of this massively parallel CRISPR screen were somewhat surprising. While knockdowns of only seven genes - key points in the metabolic network, such as the biosynthesis of deoxynucleotides for DNA synthesis - caused immediate and strong fitness defects, hundreds of other knockdowns had little effect.

As Dr. Hannes Link explains: "Our results demonstrated that E. coli cells achieve very high metabolic robustness. In general, robustness enables living organisms to survive despite external and internal disturbances, and there are different mechanisms that mediate it, such as feedback mechanisms or redundancy. In this context, organisms are always in a trade-off situation: either they express high enzyme concentrations, which is costly, or they express low enzyme concentrations, which can limit metabolic capacity. For us researchers, robustness is not always a welcome feature in bacteria, for example in biotechnological applications where we want to engineer metabolism to overproduce chemicals with bacteria. Therefore it is important to understand how E. coli accomplishes this task."

To answer this question, the team measured the proteome and metabolome of 30 CRISPRi strains. In some strains, the proteome responses revealed mechanisms that actively buffered the CRISPRi knockdowns. For example, knockdown of homocysteine transmethylase (MetE) in the methionine pathway caused a compensatory upregulation of all other enzymes in the methionine pathway. In other words, E. coli cells sensed that the knockdown caused a bottleneck in methionine biosynthesis and then mounted a very precise and local response around the methionine pathway. The other CRISPRi strains revealed similar buffering mechanisms that were surprisingly specific, but whether all metabolic pathways are equipped with such precise and localized buffering mechanisms remains an open question. The Link lab is therefore developing new mass spectrometry methods to probe the metabolism of the complete CRISPRi library.

This comprehensive approach creates new possibilities for the development of industrially useful microbes, as Dr. Hannes Link points out: "In the future, we want to use these data to construct metabolic models that are dynamic and predictive. We used a very small dynamic model in the current study, but building larger models remains one of the big challenges. Such models would allow us to engineer E. coli cells that stop growing upon a certain signal and then concentrate all metabolic resources on the synthesis of a desired chemical. This controlled decoupling of growth from overproduction would break new ground in metabolic engineering and open up new applications in industrial biotechnology."

Credit: 
Max-Planck-Gesellschaft

Sestrin makes fruit flies live longer

image: Fruit flies in a vial in front of amino acids.

Image: 
MPI f. Biology of Ageing/ Sebastian Grönke and Yu-Xuan Lu

Reduced food intake, known as dietary restriction, leads to a longer lifespan in many animals and can improve health in humans. However, the molecular mechanisms underlying the positive effects of dietary restriction are still unclear. Researchers from the Max Planck Institute for Biology of Ageing have now found one possible explanation in fruit flies: they identified a protein named Sestrin that mediates the beneficial effects of dietary restriction. By increasing the amount of Sestrin in flies, researchers were able to extend their lifespan and at the same time these flies were protected against the lifespan-shortening effects of a protein-rich diet. The researchers could further show that Sestrin plays a key role in stem cells in the fly gut thereby improving the health of the fly.

The health benefits of dietary restriction have long been known. Recently, it has become clear that restriction of certain food components, especially proteins and their individual building blocks, the amino acids, is more important for the organism's response to dietary restriction than general calorie reduction. On the molecular level, one particularly well-known signalling pathway, the TOR pathway, is important for longevity.

"We wanted to know which factor is responsible for measuring nutrients in the cell, especially amino acids, and how this factor affects the TOR pathway," explains Jiongming Lu, researcher in the department of Linda Partridge at the Max Planck Institute for Biology of Ageing. "We focused on a protein called Sestrin, which was suggested to sense amino acids. However, no one had ever demonstrated an amino acid-sensing function of Sestrin in a living organism." Therefore, Lu and his colleagues focused on the role of Sestrin in the model organism Drosophila melanogaster, commonly known as the fruit fly.

Sestrin as a potential anti-ageing factor

"Our results in flies revealed Sestrin as a novel potential anti-ageing factor", says Linda Partridge, head of the research team. "We could show that the Sestrin protein binds certain amino acids. When we inhibited this binding, the TOR signalling pathway in the flies was less active and the flies lived longer", adds Lu. "Flies with a mutated Sestrin protein unable to bind amino acids showed improved health in the presence of a protein-rich diet."

Particularly interesting: when the researchers increased the amount of Sestrin protein in stem cells located in the fly gut, these flies lived about ten percent longer than control flies. In addition, increasing Sestrin amounts in the gut stem cells alone also protected against the negative effects of a protein-rich diet. Lu continues: "We are curious whether the function of Sestrin in humans is similar to that in flies. Experiments with mice have already shown that Sestrin is required for the beneficial effects of exercise on the health of the animal. A drug that increases the activity of the Sestrin protein might therefore one day be a novel approach to slowing down the ageing process."

Credit: 
Max-Planck-Gesellschaft

To push or to pull? How many-limbed marine organisms swim

video: A video explainer of the research published in Colin, S.P., et al (2020) The role of suction thrust in the metachronal paddles of swimming invertebrates. Scientific Reports, DOI: 10.1038/s41598-020-74745-y.

Image: 
Emily Greenhalgh, MBL

WOODS HOLE, Mass. — When you think of swimming, you probably imagine pushing through the water—creating backwards thrust that pushes you forward. New research at the Marine Biological Laboratory (MBL) suggests instead that many marine animals actually pull themselves through the water, a phenomenon dubbed “suction thrust.”

The study, published in Scientific Reports, found that small marine animals with multiple propulsers—including larval crabs, polychaete worms, and some types of jellyfish—don’t push themselves forward when they move their appendages, but instead create negative pressure behind them that pulls them through the water.

When the front appendage moves, it creates a pocket of low pressure behind it that may reduce the energy required by the next limb to move. “It is similar to how cyclists use draft to reduce wind drag and to help pull the group along,” says lead author Sean Colin of Roger Williams University, a Whitman Center Scientist at the MBL.

This publication builds on the team's previous work, also conducted at the MBL, on suction thrust in lampreys and jellyfish. For the current study, they focused on small marine animals that use metachronal kinematics, also known as "metachronal swimming" -- a locomotion technique commonly used by animals with multiple pairs of legs in which appendages stroke in sequence, rather than synchronously.

“We came into this study looking for the benefits of metachronal swimming, but we realized the flow around the limbs looks very similar to the flow around a jellyfish or a fish fin,” said Colin. “Not only does the flow look the same, but the negative pressure is the same.”

For this study, the researchers worked with two crab species, a polychaete worm, and four species of comb jellies. All are smaller than a few millimeters in length. They found that the fluid flow created while swimming was the same as in the larger animals they had previously studied.

“Even at these really small scales, these animals rely on negative pressure to pull themselves forward through the water,” said Colin, who added that this could be a common phenomenon among animals.

“It’s not unique to the fish or the jellyfish we looked at. It’s probably much more widespread in the animal kingdom,” says Colin, who added that something like suction thrust has been observed in birds and bats moving themselves through the air. These creatures have the same degree of bend in their limbs (25-30 degrees) that the observed marine animals do.

Moving forward, Colin and colleagues want to study a larger variety of marine organisms to determine the range of animal sizes that rely on suction thrust to propel through the water.

“That’s one of our main goals — to get bigger, get smaller, and get a better survey of what animals are really relying on this suction thrust,” Colin says.

Credit: 
Marine Biological Laboratory

Can we harness a plant's ability to synthesize medicinal compounds?

image: Plate showing Senna tora, also called Cassia tora.

Image: 
From Flora De Filipinas by Francisco Manuel Blanco in U.S. public domain

Palo Alto, CA-- Anthraquinones are a class of naturally occurring compounds prized for their medicinal properties, as well as for other applications, including ecologically friendly dyes. Despite wide interest, the mechanism by which plants produce them has remained shrouded in mystery until now.

New work from an international team of scientists including Carnegie's Sue Rhee reveals a gene responsible for anthraquinone synthesis in plants. Their findings could help scientists cultivate a plant-based mechanism for harvesting these useful compounds in bulk quantities.

"Senna tora is a legume with anthraquinone-based medicinal properties that have long been recognized in ancient Chinese and Ayurvedic traditions, including antimicrobial and antiparasitic benefits, as well as diabetes and neurodegenerative disease prevention," Rhee explained.

Despite its extensive practical applications, genomic studies of Senna have been limited. So, led by Sang-Ho Kang of the Korean National Institute of Agricultural Sciences and Ramesh Prasad Pandey of Sun Moon University and MIT, the research team used an array of sophisticated genetic and biochemical approaches to identify the first known anthranoid-forming enzyme in plants.

"Now that we've established the first step of the ladder, we can move quickly to elucidate the full suite of genes involved in the synthesis of anthraquinone," said lead author Kang.

Once the process by which plants make these important compounds is fully known, this knowledge can be used to engineer a plant to produce high concentrations of anthraquinones that can be used medicinally.

"The same techniques that we use to help improve the yields of agricultural or biofuel crops can also be applied to developing sustainable production methods for plant-based medicines," Rhee concluded.

Credit: 
Carnegie Institution for Science

'Crisis decision making at the speed of COVID-19' - Bay Area public health officials share their experience with shelter-in-place order

November 24, 2020 - In mid-March, public health officials across the San Francisco area issued the first U.S. regional shelter-in-place order in response to the emerging COVID-19 pandemic. A "field report" on the crisis decision-making approach followed in that effective early response is featured in a special COVID-19 supplement to the Journal of Public Health Management and Practice. The journal is published in the Lippincott portfolio by Wolters Kluwer.

"With the benefit of hindsight and reflection, we recount our story through the lens of public health legal authority, meta-leadership, and decision intelligence," according to the new article, authored by Tomás J. Aragón, MD, DrPH, of the San Francisco Department of Public Health along with local health officers (LHOs) from six other Bay Area jurisdictions. Their paper appears as part of a JPHMP supplement titled "COVID-19 and Public Health: Looking Back, Moving Forward."

On March 15, 2020, LHOs in Alameda, Contra Costa, Marin, San Francisco, San Mateo, and Santa Clara Counties and the City of Berkeley issued legal orders for 6.7 million residents to shelter-in-place to prevent the spread of SARS-CoV-2. Within days, other regions issued similar orders. Studies have confirmed that early shelter-in-place orders prevented many COVID-19 cases, hospitalizations, and deaths across the United States.

But even in a crisis, outcomes can't determine whether a decision was good or bad. "The quality of a decision depends only on the quality of the decision-making process at the time the decision was made," Dr. Aragón and coauthors write. They review key elements of good crisis decision making during complex public health emergencies:

Crisis Decision Making. Public health decision makers bring epidemiologic (causal and probabilistic) reasoning and the ability to balance tradeoffs from competing objectives to tackle a health crisis. In the face of "volatility, uncertainty, complexity, and ambiguity," team deliberations must be built on trust-based, collaborative relationships and approached with "genuine intellectual humility ... embrac[ing] curiosity over certainty."

Legal Authority. In California, LHOs are authorized to take measures as necessary to control the spread of communicable diseases. This enabled the Bay Area LHOs to act "quickly and decisively" to issue shelter-in-place orders to protect the public health and healthcare system from a COVID-19 surge like that occurring at the time in New York City.

Meta-Leadership. Longstanding state and regional health official networks have played an important role in responding and adapting to public health threats at scale. In the Bay Area, years of experience in preparing for pandemic influenza and responding to other microbial threats enabled the spontaneous activation of "swarm leadership"--a key characteristic of good disaster responses in which a group, without formal hierarchical authority, coalesces to tackle a complex emergency effectively.

Decision Intelligence. Dr. Aragón and colleagues define decision intelligence as "the integration of problem-solving and decision quality within a performance improvement framework, ensuring quality and continuous improvement in crisis decision making." While recognizing that a shelter-in-place order was a "drastic measure," the LHOs agreed that immediate action would maximize the benefits of interrupting transmission: "flattening the curve" of COVID-19 cases, hospitalizations, and deaths while providing time to ensure public health systems readiness and to learn about SARS-CoV-2 biology and transmission.

The Bay Area LHOs hope their experience will draw attention to the elements of crisis decision making - throughout the pandemic and in preparing for future public health emergencies. "Decision intelligence provides practical tools to improve crisis decision making," Dr. Aragón and coauthors conclude. "We hope that with experimentation and practice, public health officials will improve their routine team decisions, and thereby be capable and ready to make better decisions in new crisis situations."

Credit: 
Wolters Kluwer Health

Simple new testing method aims to improve time-release drugs

image: A piece of glass bent like a tuning fork measures dissolution of time-release drugs.

Image: 
William Grover

When you take a time-release drug, you count on it doing what the package says: release the drug slowly into your bloodstream to provide benefits over the specified period of time. When the drug dissolves too slowly or too quickly, the results can range from inconvenient -- a decongestant that lets your sinuses get stuffed up too soon -- to tragic, as many who were prescribed OxyContin discovered.

OxyContin, which contains the opiate oxycodone, was supposed to offer 12-hour pain relief. Instead, in some patients it dissolved much more quickly, causing them to take it more frequently and ultimately become addicted.

But assessing how a drug dissolves in the body is surprisingly tricky. Drug dissolution has to be measured under laboratory conditions that come as close as possible to mimicking what happens in the body.

In a paper published in Scientific Reports, UC Riverside researchers describe a simple, inexpensive way to measure drug dissolution that should help pharmaceutical companies develop better and more consistent time-release drug products.

"We directly measured dissolution profiles of single drug granules, which are the little spheres you see when you open up a capsule," said corresponding author William Grover, an associate professor of bioengineering at the Marlan and Rosemary Bourns College of Engineering. "We accomplished this using a vibrating tube sensor, which is just a piece of glass tubing bent in the shape of a tuning fork."

Many factors affect drug dissolution in the body, including the pH and chemical composition of the gastrointestinal fluid, the hydrodynamics of the fluid caused by gastrointestinal contractions, the patient's sex, and metabolism. For example, the makers of OxyContin note that taking the drug with a high-fat meal can increase the amount of oxycodone in the patient's blood by 25%.

Pharmaceutical companies usually test drugs by placing them in a vessel filled with fluid that mimics the contents of the gastrointestinal, or GI, tract and stirring the fluid to recreate GI tract dynamics. Small samples of the fluid are taken at intervals, and the concentration of the drug, which should increase over time, is measured using ultraviolet-visible spectroscopy or high-performance liquid chromatography. The data from this testing is used to construct a model of how the drug is expected to behave in the body.
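As a toy illustration of how sampled concentrations can be turned into such a model, one common choice is a first-order dissolution curve, C(t) = C_inf * (1 - exp(-k*t)). The function, data points, and parameter values below are hypothetical, not from the paper; this is a minimal sketch, assuming the final concentration C_inf is known.

```python
import math

def fit_first_order(times, concs, c_inf):
    """Estimate the rate constant k of C(t) = c_inf * (1 - exp(-k*t))
    by a log-linear least-squares fit through the origin."""
    ys = [-math.log(1.0 - c / c_inf) for c in concs]  # equals k * t under the model
    # slope of a through-origin fit: k = sum(t*y) / sum(t*t)
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

# Synthetic concentration samples generated with k = 0.3 per hour, c_inf = 10 mg/L
times = [1.0, 2.0, 4.0, 8.0]
concs = [10.0 * (1.0 - math.exp(-0.3 * t)) for t in times]
k = fit_first_order(times, concs, c_inf=10.0)  # recovers the generating k
```

Real dissolution testing fits richer models (and must handle noisy samples), but the workflow is the same: sample concentration over time, then fit a release curve.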

The common ways of testing all have drawbacks. Small differences in the placement of tablets in a vessel can double the measured dissolution rate in one method, for example. Other methods can experience clogged equipment, impeded flow, and air bubbles, all of which affect how the drug dissolves. Moreover, the measurement process is time-consuming, laborious, often irreproducible, and involves expensive equipment. The existing methods also offer only "snapshots" of dissolution, taken at sampling points, providing limited information.

Grover, doctoral student Heran Bhakta, and undergraduate student Jessica Lin took a radically different approach. Rather than measure the increasing concentration of the drug in the fluid, they decided to measure the decreasing mass of a solid pellet as it dissolves.

The group used a glass tube bent like a tuning fork, kept vibrating by a circuit at its resonance frequency, which was determined by the mass of the tube and its contents. When they filled the tube with simulated stomach and intestine contents and passed an over-the-counter time-release drug granule through the tube, they observed a brief change in the frequency.

By plotting resonance frequency against time, they could determine the buoyant mass of the drug granule at each moment from the size of the frequency peaks.

"By passing the granule back-and-forth through the vibrating tube while it dissolves, we can monitor its weight throughout the dissolution process and obtain single-granule dissolution profiles," Grover said.
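The frequency-to-mass step behind this can be sketched roughly as follows. The relation Δf/f0 ≈ -m_b/(2·m_tube) is the standard first-order approximation for a small mass passing a resonator's antinode; the function name, tube mass, and frequency values here are illustrative assumptions, not figures from the paper.

```python
def buoyant_mass(f0_hz, f_peak_hz, tube_mass_g):
    """Estimate buoyant mass (grams) from the resonance-frequency shift
    caused by one granule pass, via delta_f/f0 ~= -m_b/(2*m_tube)."""
    delta_f = f_peak_hz - f0_hz  # negative: added mass lowers the frequency
    return -2.0 * tube_mass_g * delta_f / f0_hz

# A dissolution profile is this estimate repeated as the granule is
# passed back and forth while it dissolves (made-up readings):
passes = [(1000.00, 999.90), (1000.00, 999.95), (1000.00, 999.98)]
profile = [buoyant_mass(f0, fp, tube_mass_g=0.5) for f0, fp in passes]
# successive masses shrink toward zero as the granule dissolves
```

The appeal of the approach is visible even in this sketch: each pass yields a direct mass reading, so no fluid sampling or separate concentration assay is needed.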

The group tested three different controlled-release proton pump inhibitor drugs: omeprazole, lansoprazole, and esomeprazole. Though they all have the same intended function in the body, they have very different granule sizes and dissolution mechanisms.

"We also found different dissolution behaviors between name-brand and generic formulations of the same drug. These differences in single-particle dissolution behavior could lead to different rates of drug absorption in patients," Grover said.

The researchers write that the technique addresses many of the shortcomings of existing testing methods, requires no additional analytical instruments, and is suitable for both fast-dissolving and slow-dissolving formulations. By giving dissolution profiles for individual pellets, the method can capture variations in pellet dissolution behavior that other methods can't.

"Our technique is much cheaper and easier to perform than conventional methods, and that enables pharmaceutical companies to do more tests in a wider variety of conditions," said Grover. "We can also easily see differences in dissolution between individual particles in a drug. That should help pharmaceutical companies improve and monitor the consistency of their manufacturing processes."

The technique measures not just active ingredients, but also the inert ingredients in each drug particle.

"That's helpful for manufacturers who want to study how each layer of a controlled-release granule behaves during dissolution," said Bhakta.

The authors hope this data can augment existing dissolution methods and help pharmaceutical developers and manufacturers create better controlled-release drugs.

Credit: 
University of California - Riverside

COVID's collateral damage: Germicidal lamps may damage corneas

image: Injected conjunctiva (redness) of the right and left eye (top row)
Diffuse staining of the cornea with green dye indicating epithelial damage (bottom row).

Image: 
Bascom Palmer Eye Institute

In a paper published in the journal Ocular Immunology and Inflammation, physicians from the Bascom Palmer Eye Institute at the University of Miami Miller School of Medicine reported that several patients using germicidal lamps in an attempt to sanitize against the coronavirus developed painful inflammation of the cornea, a condition called photokeratitis. These consumer-available ultraviolet (UV) emitting devices were being used to eliminate coronavirus from homes and offices.

"During the height of the pandemic, we noticed an increased number of patients coming in with irritation, pain and sensitivity to light," said first author and Bascom Palmer resident Jesse Sengillo, M.D. "We realized this was after direct exposure to germicidal lamps that emit UV light in the C range to kill bacteria and viruses. This can be quite a painful experience for the patient, but with prompt topical lubrication and antibiotics to prevent infection, patients often do very well."

UV photokeratitis occurs when the cornea is overexposed to ultraviolet radiation. This can happen at high elevation, where fewer UV rays are absorbed by the atmosphere, or near water, snow, or other reflective surfaces in the environment. A few hours after exposure, patients experience burning in their eyes and sometimes intense light sensitivity.

Numerous germicidal lamps are on the market, and while they may be safe for at-home use, customers need to pay close attention to manufacturer recommendations to prevent damage to the eyes and skin.

"The patients we met were not aware of these recommendations, and many were unknowingly exposed at work," said co-author and fellow resident Anne Kunkler, M.D., B.S. "For UV-C emitting devices, it is best to leave the room while the device is on. Our patients were directly exposed to the light for various lengths of time. A few hours later, they felt discomfort and sought medical attention."

Dr. Sengillo and colleagues encourage anyone feeling eye discomfort after exposure to one of these devices to promptly seek medical attention from an ophthalmologist.

While germicidal lamps are being purchased to protect people during the pandemic, this study did not attempt to address whether they are effective in destroying coronaviruses. "There have been many COVID-19-related publications recently. It is important that we disseminate information accurately and responsibly to avoid public confusion," Dr. Sengillo said. He and his colleagues note that some UV-C emitting germicidal devices have been proven effective in killing various microbes and viruses, but to the authors' knowledge, they have not yet been tested against the virus that causes COVID-19 specifically. "Our study was not designed to answer that question. If you choose to use these lamps, just make sure to follow manufacturer recommendations closely to avoid unnecessary injury," said Dr. Sengillo.

Credit: 
University of Miami Miller School of Medicine