
How is STEM children's programming prioritizing diversity?

image: "Children soak up subtleties and are learning and taking cues from everything; by age 5, you can see that they understand implicit biases," says Fashina Aladé, lead author of the study.

Image: 
Royalty-free from PxHere

EAST LANSING, Mich. - Children's television programming not only shapes opinions and preferences, its characters can have positive or negative impacts on childhood aspiration, says a new study from Michigan State University.

The study is the first large-scale analysis of characters featured in science, technology, engineering and math-related educational programming. It was published in the fall 2020 edition of the Journal of Children and Media. Results revealed that among the characters appearing in STEM television programming for kids ages 3 to 6, Latinx and female characters are underrepresented.

"Children soak up subtleties and are learning and taking cues from everything; by age 5, you can see that they understand implicit biases," said Fashina Aladé, lead author of the study and assistant professor in the College of Communication Arts and Sciences. "With the recent proliferation of STEM television over the past five years or so, I wanted to see who was showing kids how to solve problems, who is teaching STEM foundations and who is modeling what it looks like to engage in STEM."

To get a picture of the entire landscape of STEM programming available to children, Aladé and colleagues -- Alexis Lauricella of Erikson Institute, Yannik Kumar from University of Chicago and Ellen Wartella of Northwestern University -- looked to Nielsen, Netflix, Amazon and Hulu for a list of children's shows that mentioned keywords like science, math, technology or problem-solving in their descriptions.

The researchers looked at 30 shows with target audiences between 3- and 6-year-olds, all claiming to teach some aspect of STEM. Coders watched 90 episodes total -- three episodes from each show's most recent season -- and coded over 1,000 characters who appeared on the shows for physical attributes, gender, race and ethnicity.

"Surprisingly, when it came to the centrality of their role and on-screen STEM engagement, characters were portrayed relatively equally regardless of their race or gender," Aladé said. "But, female and minority characters were underrepresented in these programs compared to population statistics."

An interesting finding, Aladé said, was that racially ambiguous characters -- including those with non-human skin tones, like pink or purple -- comprised 13% of the characters, which she suggests illustrates producers' attempts to show racial diversity. "The jury's still out on whether those subtle cues are effective," Aladé said. The study also found that only 14% of the shows depicted occupations related to STEM.

"Animation presents such an opportunity for representation. Ideally, we'd see authentic representation -- not representative stereotypes," Aladé said. "I hope we move in a direction where kids see what scientists really look like in today's world, where doctors, engineers and computer scientists come from all ethnicities and genders."

Credit: 
Michigan State University

Remember that fake news you read? It may help you remember even more

People who receive reminders of past misinformation may form new factual memories with greater fidelity, according to an article published in the journal Psychological Science.

Past research highlights one insidious side of fake news: The more you encounter the same misinformation--for instance, that world governments are covering up the existence of flying saucers--the more familiar and potentially believable that false information becomes.

New research, however, has found that reminders of past misinformation can help protect against remembering misinformation as true while improving recollection of real-world events and information.

"Reminding people of previous encounters with fake news can improve memory and beliefs for facts that correct misinformation," said Christopher Wahlheim, a lead author on the paper and assistant professor of psychology at the University of North Carolina, Greensboro. "This suggests that pointing out conflicting information could improve the comprehension of truth in some situations."

Wahlheim and colleagues conducted two experiments examining whether reminders of misinformation could improve memory for and beliefs in corrections. Study participants were shown corrections of news and information they may have encountered in the past. Reminders of past misinformation appeared before some corrections but not others. Study results showed that misinformation reminders increased the participants' recall of facts and belief accuracy. The researchers interpreted the results to indicate that misinformation reminders raise awareness of discrepancies and promote memory updating. These results may be pertinent to individuals who confront misinformation frequently.

"It suggests that there may be benefits to learning how someone was being misleading. This knowledge may inform strategies that people use to counteract high exposure to misinformation spread for political gain," Wahlheim said.

Credit: 
Association for Psychological Science

USask scientists develop model to identify best lentils for climate change impacts

image: USask plant scientist Kirstin Bett.

Image: 
Debra Marshall Photography

With demand for lentils growing globally and climate change driving temperatures higher, a University of Saskatchewan-led international research team has developed a model for predicting which varieties of the pulse crop are most likely to thrive in new production environments.

An inexpensive plant-based source of protein that can be cooked quickly, lentil is a globally important crop for combatting food and nutritional insecurity.

But increased production to meet this global demand will have to come from either boosting yields in traditional growing areas or shifting production to new locations, said USask plant scientist Kirstin Bett.

"By understanding how different lentil lines will interact with the new environment, we can perhaps get a leg up in developing varieties likely to do well in new growing locations," said Bett.

Working with universities and organizations around the globe, the team planted 324 lentil varieties in nine lentil production hotspots, including two in Saskatchewan and one in the U.S., as well as sites in South Asia (Nepal, Bangladesh, and India) and the Mediterranean (Morocco, Spain, and Italy).

The findings, published in the journal Plants, People, Planet, will help producers and breeders identify existing varieties or develop new lines likely to flourish in new growing environments--valuable intelligence in the quest to feed the world's growing appetite for inexpensive plant-based protein.

The new mathematical model is based on a key predictor of crop yield--days to flowering (DTF)--which is determined by two factors: day length (hours of sunshine, or "photoperiod") and the mean temperature of the growing environment. Using detailed information about each variety's interaction with temperature and photoperiod, the simple model can predict the number of days it takes each variety to flower in a specific environment.
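The article describes the model only in general terms. As a rough illustration (not the authors' published equation, and with purely hypothetical coefficients), a photothermal model of this kind treats the flowering *rate* (1/DTF) as linear in temperature and photoperiod:

```python
def days_to_flowering(a, b, c, mean_temp_c, photoperiod_h):
    """Photothermal rate model sketch:
        1/DTF = a + b*T + c*P
    where T is mean temperature (deg C) and P is photoperiod (hours),
    so warmer, longer days generally hasten flowering.
    Coefficients a, b, c are variety-specific; the values used below
    are hypothetical, not the study's fitted parameters."""
    rate = a + b * mean_temp_c + c * photoperiod_h
    if rate <= 0:
        raise ValueError("environment predicts no flowering")
    return 1.0 / rate

# Hypothetical variety grown at 18 C mean temperature with 14 h days
dtf = days_to_flowering(a=-0.02, b=0.0012, c=0.002,
                        mean_temp_c=18, photoperiod_h=14)
```

Comparing predicted DTF for many varieties at a candidate site is how such a model could rank lines for a new growing environment.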

"With this model, we can predict which lines they (producers) should be looking at that will do well in new regions, how they should work, and whether they'll work," Bett said.

For example, lentil producers in Nepal--which is already experiencing higher mean temperatures as a result of climate change--can use the model to identify which lines will produce high yields if they're grown at higher altitudes.

Closer to home in Western Canada, the model could be used to predict which varieties should do well in what are currently considered to be marginal production areas.

The project also involved USask plant researchers Sandesh Neupane, Derek Wright, Crystal Chan, and Bert Vandenberg.

The next step is putting the new model to work in lentil breeding programs to identify the genes that are controlling lentil lines' interactions with temperature and day length, said Bett.

Once breeders determine the genes involved, they can develop molecular markers that will enable breeders to pre-screen seeds. That way they'll know how crosses between different lentil varieties are likely to perform in different production locations.

Credit: 
University of Saskatchewan

Deep sea coral time machines reveal ancient CO2 burps

image: Analysis of the fossil remains of deep-sea corals (pictured here) was used to examine the history of the oceans and connections to global climate.

Image: 
Dann Blackwood, USGS.

The fossilised remains of ancient deep-sea corals may act as time machines providing new insights into the effect the ocean has on rising CO2 levels, according to new research carried out by the Universities of Bristol, St Andrews and Nanjing and published today [16 October] in Science Advances.

Rising CO2 levels helped end the last ice age, but the cause of this CO2 rise has puzzled scientists for decades. Using geochemical fingerprinting of fossil corals, an international team of scientists has found new evidence that this CO2 rise was linked to extremely rapid changes in ocean circulation around Antarctica.

The team collected fossil remains of deep-sea corals that lived thousands of metres beneath the waves. By studying the radioactive decay of the tiny amounts of uranium found in these skeletons, they identified corals that grew at the end of the ice age around 15,000 years ago.

Further geochemical fingerprinting of these specimens - including measurements of radiocarbon - allowed the team to reconstruct changes in ocean circulation and compare them to changes in global climate at an unprecedented time resolution.
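The coral ages themselves came from uranium-series decay, but the decay-clock arithmetic behind both dating methods is the same. As a minimal sketch, the conventional radiocarbon age equation uses the standard Libby mean life of 8,033 years (the sample value below is illustrative, not a measurement from the study):

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; conventional radiocarbon constant

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age: t = -8033 * ln(F), where F is
    the measured 14C content as a fraction of the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# An illustrative sample retaining ~15.5% of modern 14C
age = radiocarbon_age(0.155)  # roughly 15,000 years
```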

Professor Laura Robinson, Professor of Geochemistry at Bristol's School of Earth Sciences who led the research team, said: "The data show that deep ocean circulation can change surprisingly rapidly, and that this can rapidly release CO2 to the atmosphere."

Dr James Rae of the University of St Andrews' School of Earth and Environmental Sciences added: "The corals act as a time machine, allowing us to see changes in ocean circulation that happened thousands of years ago.

"They show that the ocean round Antarctica can suddenly switch its circulation to deliver burps of CO2 to the atmosphere."

Scientists have suspected that the Southern Ocean played an important role in ending the last ice age and the team's findings add weight to this idea.

Dr Tao Li of Nanjing University, lead author of the new study, said: "There is no doubt that Southern Ocean processes must have played a critical role in these rapid climate shifts and the fossil corals provide the only possible way to examine Southern Ocean processes on these timescales."

In another study published in Nature Geoscience this week the same team ruled out recent speculation that the global increase in CO2 at the end of the ice age may have been related to release of geological carbon from deep sea sediments.

Andrea Burke of the University of St Andrews' School of Earth and Environmental Sciences added: "There have been some suggestions that reservoirs of carbon deep in marine mud might bubble up and add CO2 to the ocean and the atmosphere, but we found no evidence of this in our coral samples."

Dr Tianyu Chen of Nanjing University said: "Our robust reconstructions of radiocarbon at intermediate depths yield powerful constraints on mixing between the deep and upper ocean, which is important for modelling changes in circulation and the carbon cycle during the last ice age termination."

Dr James Rae added: "Although the rise in CO2 at the end of the ice age was dramatic in geological terms, the recent rise in CO2 due to human activity is much bigger and faster. What the climate system will do in response is pretty scary."

Credit: 
University of Bristol

Results from the DEFINE-FLOW study reported at TCT Connect

NEW YORK - October 16, 2020 - A new observational study of deferred lesions following combined fractional flow reserve (FFR) and coronary flow reserve (CFR) assessment found that untreated vessels with abnormal FFR but intact CFR did not have non-inferior outcomes, when treated medically, compared to vessels with an FFR greater than 0.8 and a CFR greater than or equal to 2.0.

Findings were reported today at TCT Connect, the 32nd annual scientific symposium of the Cardiovascular Research Foundation (CRF). TCT is the world's premier educational meeting specializing in interventional cardiovascular medicine.

The role for invasive CFR assessment in the current era remains unclear since FFR has become a reference standard guiding decisions for revascularization. While observational data from invasive and noninvasive tools has indicated that lesions with intact CFR do well, few of these studies simultaneously assessed FFR. To address the limitations of the current literature, researchers designed and carried out the DEFINE-FLOW study.

A total of 455 patients were enrolled from 12 sites in six countries. Of those enrolled, 430 patients (533 lesions) were protocol-treated and followed for two years. Stable coronary lesions underwent simultaneous FFR and CFR measurement, in at least duplicate, with central core lab review of the tracings. Treatment followed the local measurements according to a uniform protocol whereby only lesions with both FFR ≤0.8 and CFR ≥2.0 received initial medical therapy.

The primary endpoint was the composite of all-cause death, myocardial infarction, and revascularization at two years. Two-year MACE rates were 5.8% for FFR-/CFR-, 10.8% for FFR+/CFR-, 12.4% for FFR-/CFR+, and 14.4% for FFR+/CFR+ (after PCI). The difference between FFR+/CFR- and FFR-/CFR- was 5.0% (95% CI -1.5% to +11.5%, p = 0.065 for non-inferiority). Therefore, the study found that vessels with abnormal FFR ≤0.8 but intact CFR ≥2.0 did not have non-inferior outcomes compared to those with FFR >0.8 and CFR ≥2.0 when treated medically.
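For readers unfamiliar with the statistics, the comparison above is a non-inferiority test on a risk difference: deferral is non-inferior only if the upper confidence bound of the event-rate difference falls below a pre-specified margin. A hedged sketch using a simple Wald interval (the group sizes and margin below are hypothetical, not the trial's actual values):

```python
import math

def noninferiority_wald(p_test, n_test, p_ref, n_ref, margin, z=1.96):
    """Sketch of a non-inferiority check on a difference in event
    rates using a Wald 95% CI. Non-inferiority holds only if the
    upper CI bound of (p_test - p_ref) lies below the margin."""
    diff = p_test - p_ref
    se = math.sqrt(p_test * (1 - p_test) / n_test
                   + p_ref * (1 - p_ref) / n_ref)
    ci = (diff - z * se, diff + z * se)
    return diff, ci, ci[1] < margin

# Event rates from the report; group sizes and margin are hypothetical
diff, ci, noninferior = noninferiority_wald(
    p_test=0.108, n_test=100,   # FFR+/CFR- (deferred to medical therapy)
    p_ref=0.058, n_ref=200,     # FFR-/CFR- reference group
    margin=0.10)
```

With these illustrative inputs the upper bound exceeds the margin, so non-inferiority is not shown, mirroring the trial's qualitative conclusion.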

"Because the study was observational, it is not clear what the outcomes among FFR+/CFR- lesions would have been had they undergone PCI instead of medical therapy," said Nils Johnson, MD, MS, Associate Professor of Medicine and Weatherhead Distinguished Chair of Heart Disease, Division of Cardiology, Department of Medicine and the Weatherhead PET Imaging Center at McGovern Medical School at UT Health (Houston) and Memorial Hermann Hospital - Texas Medical Center. "There were a number of limitations to this study, such as few lesions with severe FFR/CFR as well as unblinded subjects and physicians. The limitations, coupled with the results, make this a hypothesis-generating study that can help to further understand the role of invasive CFR and how to treat CFR/FFR discordance."

Credit: 
Cardiovascular Research Foundation

When good governments go bad

image: The ruins of the Roman Forum, once a site of a representational government.

Image: 
(c) Linda Nicholas, Field Museum

All good things must come to an end. Whether societies are ruled by ruthless dictators or more well-meaning representatives, they fall apart in time, with different degrees of severity. In a new paper, anthropologists examined a broad, global sample of 30 pre-modern societies. They found that when "good" governments--ones that provided goods and services for their people and did not starkly concentrate wealth and power--fell apart, they broke down more intensely than collapsing despotic regimes. And the researchers found a common thread in the collapse of good governments: leaders who undermined and broke from upholding core societal principles, morals, and ideals.

"Pre-modern states were not that different from modern ones. Some pre-modern states had good governance and weren't that different from what we see in some democratic countries today," says Gary Feinman, the MacArthur curator of anthropology at Chicago's Field Museum and one of the authors of a new study in Frontiers in Political Science. "The states that had good governance, although they may have been able to sustain themselves slightly longer than autocratic-run ones, tended to collapse more thoroughly, more severely."

"We noted the potential for failure caused by an internal factor that might have been manageable if properly anticipated," says Richard Blanton, a professor emeritus of anthropology at Purdue University and the study's lead author. "We refer to an inexplicable failure of the principal leadership to uphold values and norms that had long guided the actions of previous leaders, followed by a subsequent loss of citizen confidence in the leadership and government and collapse."

In their study, Blanton, Feinman, and their colleagues took an in-depth look at the governments of four societies: the Roman Empire, China's Ming Dynasty, India's Mughal Empire, and the Venetian Republic. These societies flourished hundreds (or in ancient Rome's case, thousands) of years ago, and they had comparatively more equitable distributions of power and wealth than many of the other cases examined, although they looked different from what we consider "good governments" today as they did not have popular elections.

"There were basically no electoral democracies before modern times, so if you want to compare good governance in the present with good governance in the past, you can't really measure it by the role of elections, so important in contemporary democracies. You have to come up with some other yardsticks, and the core features of the good governance concept serve as a suitable measure of that," says Feinman. "They didn't have elections, but they had other checks and balances on the concentration of personal power and wealth by a few individuals. They all had means to enhance social well-being, provision goods and services beyond just a narrow few, and means for commoners to express their voices."

In societies that meet the academic definition of "good governance," the government meets the needs of the people, in large part because the government depends on those people for the taxes and resources that keep the state afloat. "These systems depended heavily on the local population for a good chunk of their resources. Even if you don't have elections, the government has to be at least somewhat responsive to the local population, because that's what funds the government," explains Feinman. "There are often checks on both the power and the economic selfishness of leaders, so they can't hoard all the wealth."

Societies with good governance tend to last a bit longer than autocratic governments that keep power concentrated to one person or small group. But the flip side of that coin is that when a "good" government collapses, things tend to be harder for the citizens, because they'd come to rely on the infrastructure of that government in their day-to-day life. "With good governance, you have infrastructures for communication and bureaucracies to collect taxes, sustain services, and distribute public goods. You have an economy that jointly sustains the people and funds the government," says Feinman. "And so social networks and institutions become highly connected, economically, socially, and politically. Whereas if an autocratic regime collapses, you might see a different leader or you might see a different capital, but it doesn't permeate all the way down into people's lives, as such rulers generally monopolize resources and fund their regimes in ways less dependent on local production or broad-based taxation."

The researchers also examined a common factor in the collapse of societies with good governance: leaders who abandoned the society's founding principles and ignored their roles as moral guides for their people. "In a good governance society, a moral leader is one who upholds the core principles and ethos and creeds and values of the overall society," says Feinman. "Most societies have some kind of social contract, whether that's written out or not, and if you have a leader who breaks those principles, then people lose trust, diminish their willingness to pay taxes, move away, or take other steps that undercut the fiscal health of the polity."

This pattern of amoral leaders destabilizing their societies goes way back--the paper uses the Roman Empire as an example. The Roman emperor Commodus inherited a state with economic and military instability, and he didn't rise to the occasion; instead, he was more interested in performing as a gladiator and identifying himself with Hercules. He was eventually assassinated, and the empire descended into a period of crisis and corruption. These patterns can be seen today, as corrupt or inept leaders threaten the core principles and, hence, the stability of the places they govern. Mounting inequality, concentration of political power, evasion of taxation, hollowing out of bureaucratic institutions, diminishment of infrastructure, and declining public services are all evidenced in democratic nations today.

"What I see around me feels like what I've observed in studying the deep histories of other world regions, and now I'm living it in my own life," says Feinman. "It's sort of like Groundhog Day for archaeologists and historians."

"Our findings provide insights that should be of value in the present, most notably that societies, even ones that are well governed, prosperous, and highly regarded by most citizens, are fragile human constructs that can fail," says Blanton. "In the cases we address, calamity could very likely have been avoided, yet citizens and state-builders too willingly assumed that their leadership would feel an obligation to do as expected for the benefit of society. Given the failure to anticipate, the kinds of institutional guardrails required to minimize the consequences of moral failure were inadequate."

But, notes Feinman, learning about what led to societies collapsing in the past can help us make better choices now: "History has a chance to tell us something. That doesn't mean it's going to repeat exactly, but it tends to rhyme. And so that means there are lessons in these situations."

Credit: 
Field Museum

Immunotherapy combo halts rare, stage 4 sarcoma in teen

image: John Theurer Cancer Center at Hackensack University Medical Center.

Image: 
(Hackensack Meridian Health)

October 15, 2020 - Nutley, NJ - A patient with end-stage, rapidly progressing soft-tissue cancer whose tumor did not respond to standard treatment had a "rapid and complete response" to a novel combination of immunotherapies, according to new research published by a team of scientists from John Theurer Cancer Center at Hackensack University Medical Center and Georgetown Lombardi Comprehensive Cancer Center, both of which are part of the Georgetown Lombardi Comprehensive Cancer Center Consortium.

The immunotherapies targeting the immune checkpoints cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) and programmed cell death protein 1 (PD-1) were administered to a 19-year-old patient with stage 4 epithelioid sarcoma. The patient, whose tumor responded within two weeks of receiving the combination, resumed normal activity and was in complete remission at the time of the report. The single case was reported online August 18, 2020, in the Journal of Immunotherapy (with the patient's consent).

"Epithelioid sarcoma is a rare cancer, and the outcome was not expected to be so positive," said Andrew Pecora, M.D., F.A.C.P., C.P.E., the division chief of skin cancer and sarcoma services at John Theurer Cancer Center. "The breakthrough in this patient's care was the result of the close collaboration between clinician scientists of the Consortium to elucidate the underlying mechanisms that suggested potential sensitivity to the checkpoint inhibitors."

"The teamwork in this case, which involved experts from our Consortium, turned things around for this patient," said Louis M. Weiner, M.D., director, Georgetown Lombardi Comprehensive Cancer Center and the MedStar Georgetown Cancer Institute. "The team will follow the patient closely in hopes that we can continue to keep the cancer at bay."

The patient was first diagnosed with the soft-tissue sarcoma along the spine in 2017, as a 17-year-old. Chemotherapy, radiation and standard-of-care drugs were administered, and surgery was performed, achieving a partial response. The patient was admitted to the hospital in April 2019 in severe pain; his cancer had progressed to stage 4.

Most cases of epithelioid sarcoma show an inactivation of a protein-coding gene known as SMARCB1. That inactivation leads to the suppression of INI1, a gene that codes for a tumor-suppressing protein, effectively causing a biological chain reaction that promotes tumor growth.

The doctors at John Theurer Cancer Center, working as a team with Georgetown's experts, obtained a compassionate use authorization to try two checkpoint inhibitors, ipilimumab (anti-CTLA4) and nivolumab (anti-PD1) in May 2019.

By October, the patient was in complete remission. As of his last visit in June 2020, he had resumed normal activities, had a normal physical examination, and was essentially asymptomatic.

According to the report, the patient's tumor was reliant on the lack of INI1. But the checkpoint inhibitors apparently unmasked the immune system, making the cancer cells susceptible to the body's natural defenses again.

The researchers write that the dramatic reversal of the patient's cancer needs further investigation, particularly the effect the chemotherapy given before and after the checkpoint inhibitors may have had.

"Immune checkpoint inhibitors are finding a way to effect previously impossible outcomes and we are trying to learn about the precise indicators that suggest clinical utility of the checkpoint inhibitors," said senior author Jeffrey Toretsky, M.D., a professor of oncology and pediatrics at Georgetown Lombardi and chief of MedStar Georgetown University Hospital's Division of Pediatric Hematology/Oncology. "This patient opens a window for this exploration. We're proud to be able to successfully treat patients in this way."

"We are eager to see what our science can do for our sickest patients," said Andre Goy, M.D., M.S., physician-in-chief of Oncology, Hackensack Meridian Health. "It seems like almost anything is becoming possible."

Credit: 
Hackensack Meridian Health

Poor diet is top contributor to heart disease deaths globally

Sophia Antipolis, 16 Oct 2020: More than two-thirds of deaths from heart disease worldwide could be prevented with healthier diets. That's the finding of a study published today in European Heart Journal - Quality of Care and Clinical Outcomes, a journal of the European Society of Cardiology (ESC).

The findings come on World Food Day, which highlights the importance of affordable and sustainable healthy diets for all.

"Our analysis shows that unhealthy diets, high blood pressure, and high serum cholesterol are the top three contributors to deaths from heart attacks and angina - collectively called ischaemic heart disease," said study author Dr. Xinyao Liu of Central South University, Changsha, China. "This was consistent in both developed and developing countries."

"More than six million deaths could be avoided by reducing intake of processed foods, sugary beverages, trans and saturated fats, and added salt and sugar, while increasing intake of fish, fruits, vegetables, nuts and whole grains. Ideally, we should eat 200 to 300 mg of omega 3 fatty acids from seafood each day. On top of that, every day we should aim for 200 to 300 grams of fruit, 290 to 430 grams of vegetables, 16 to 25 grams of nuts, and 100 to 150 grams of whole grains," she added.

The study analysed data provided by the Global Burden of Disease Study 2017, which was conducted in 195 countries between 1990 and 2017. In 2017, there were 126.5 million individuals living with ischaemic heart disease, and 10.6 million new diagnoses of the condition. Ischaemic heart disease caused 8.9 million deaths in 2017, which equates to 16% of all deaths, compared with 12.6% of all deaths in 1990.

Between 1990 and 2017, age-standardised prevalence, incidence, and death rates per 100,000 people decreased by 11.8%, 27.4%, and 30%, respectively. But absolute numbers almost doubled. Dr. Liu said: "While progress has been made in preventing heart disease and improving survival, particularly in developed countries, the numbers of people affected continues to rise because of population growth and ageing."

The investigators calculated the impact of 11 risk factors on death from ischaemic heart disease. These were diet, high blood pressure, high serum low-density lipoprotein (LDL) cholesterol, high plasma glucose, tobacco use, high body mass index (BMI), air pollution, low physical activity, impaired kidney function, lead exposure, and alcohol use. Specifically, they estimated the proportion of deaths that could be stopped by eliminating that risk factor.

Assuming all other risk factors remained unchanged, 69.2% of ischaemic heart disease deaths worldwide could be prevented if healthier diets were adopted. Meanwhile, 54.4% of these deaths could be avoided if systolic blood pressure was kept at 110-115 mmHg, while 41.9% of deaths could be stopped if serum LDL was kept at 0.7-1.3 mmol/L. About a quarter of deaths (25.5%) could be prevented if serum fasting plasma glucose was kept at 4.8-5.4 mmol/L, while eradicating smoking and second-hand smoke could stop one-fifth (20.6%) of deaths from ischaemic heart disease.
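The "proportion of deaths that could be stopped by eliminating that risk factor" is a population attributable fraction. Levin's classic formula conveys the idea, though the GBD study uses a more elaborate exposure-distribution version; the prevalence and relative risk below are illustrative, not the study's inputs:

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction:
        PAF = p*(RR - 1) / (p*(RR - 1) + 1)
    i.e., the share of deaths avoidable if the exposure were removed.
    Inputs here are hypothetical, chosen only to illustrate how a
    common exposure with a large relative risk yields a ~69% PAF."""
    excess = prevalence * (relative_risk - 1)
    return excess / (excess + 1)

# Illustrative only: 80% of a population exposed to a suboptimal
# diet carrying a relative risk of 3.8 for ischaemic heart disease
paf = attributable_fraction(0.80, 3.8)  # ~0.69
```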

Notably, tobacco use ranked as fourth highest contributor to ischaemic heart disease deaths in men but only seventh in women. Between 1990 and 2017, the global prevalence of smoking decreased by 28.4% in men and 34.4% in women. High BMI was the fifth highest contributor to ischaemic heart disease deaths in women and sixth in men. For women, 18.3% of deaths from ischaemic heart disease could be prevented if BMI was kept at 20-25 kg/m2. In both sexes, the percentage contributions of air pollution and lead exposure to age-standardised ischaemic heart disease deaths increased as the country of residence became less developed.

Dr. Liu said: "Ischaemic heart disease is largely preventable with healthy behaviours and individuals should take the initiative to improve their habits. In addition, geographically tailored strategies are needed - for example, programmes to reduce salt intake may have the greatest benefit in regions where consumption is high (e.g. China or central Asia)."

Credit: 
European Society of Cardiology

Glutathione precursor GlyNAC reverses premature aging in people with HIV

Premature aging in people with HIV is now recognized as a new, significant public health challenge. Accumulating evidence shows that people with HIV who are between 45 and 60 years old develop characteristics typically observed in people without HIV who are more than 70 years of age. For instance, declines in gait speed, physical function and cognition, along with mitochondrial aging, elevated inflammation, immune dysfunction, frailty and other health conditions, are significantly more common in people with HIV than in age- and sex-matched uninfected people.

At Baylor College of Medicine, endocrinologist Dr. Rajagopal Sekhar, associate professor of medicine-endocrinology, and his team have found themselves in the right place at the right time to study premature aging in people with HIV. For the last 20 years, they have been studying natural aging in older humans and aged mice in the Section of Endocrinology, Diabetes and Metabolism of the Department of Medicine. Also, for the last 17 years, Sekhar has been active in HIV research, and has been providing clinical care for patients at the HIV clinic at Thomas Street Health Center, a part of Houston's Harris Health System, where he runs the sole endocrinology and metabolism clinic.

Sekhar's years-long expertise, knowledge and interest in metabolic disorders affecting HIV patients and a parallel track investigating non-HIV people have resulted in the publication of significant discoveries regarding the metabolic complications in aging, HIV and diabetes, and has guided numerous clinical trials that together provide a better understanding of why we age.

"The work presented here, published in the journal Biomedicines, builds a bridge between laboratory bench and bedside by showing proof-of-concept that supplementing people with HIV specifically with a combination of glycine and N-acetylcysteine, which we call GlyNAC, as precursors of glutathione, a major antioxidant produced by the body, improves multiple deficits associated with premature aging," said Sekhar.

Why do we age?

For several decades, experimental evidence has supported two theories of aging. The free radical theory and the mitochondrial theory propose that elevated free radicals (oxidative stress) and mitochondrial dysfunction, respectively, are at the core of geriatric aging. Both elevated oxidative stress and mitochondrial dysfunction are present in people with HIV.

Free radicals, such as reactive oxygen species, and the mitochondria are physiologically connected. The mitochondria are like the batteries of the cell: they produce the energy needed for cellular functions. The body transforms the food we eat into sugar and fat, which the mitochondria burn as fuel to produce energy.

However, one of the waste products of cellular energy generation is free radicals, which are highly reactive molecules that can damage cells, membranes, lipids, proteins and DNA. Cells depend on antioxidants, such as glutathione, to neutralize these toxic free radicals. When cells fail to neutralize free radicals, there is an imbalance between the radicals and the antioxidant responses, leading to harmful and damaging oxidative stress.

"The free radicals produced during fuel burning in the mitochondria can be compared to some of the waste products produced by a car's combustion engine, some of which are removed by the oil filter," Sekhar said. "If we don't change the oil filter periodically, the car's engine will diminish its performance and give less mileage."

Similarly, if the balance between free radical production and antioxidant response in cells consistently favors the former, cellular function can eventually be disrupted. Glutathione helps cells keep oxidative stress in balance; in effect, it keeps the oil filter clean. GlyNAC helps the cell make glutathione.

Sekhar and his colleagues have been studying mitochondrial function and glutathione for more than 20 years. Their findings, and those of other researchers, have shown that glutathione is the ultimate natural antioxidant.

Interestingly, compared to those in younger people, glutathione levels in older people are much lower, and levels of oxidative stress are much higher. Glutathione levels are also lower and oxidative stress higher in conditions associated with mitochondrial dysfunction, including aging, HIV infection, diabetes, neurodegenerative disorders, cardiovascular disorders, neurometabolic diseases, cancer, obesity and other conditions.

"When the mitochondrial batteries are running low on power, as a medical and scientific community, we do not know how to recharge these batteries," Sekhar said. "Which raised the question, if the levels of glutathione were restored in cells, would the mitochondria be recharged and able to provide power to the cell? Would restoring mitochondrial functioning improve conditions associated with mitochondrial dysfunction?"

Restoring glutathione

Restoring glutathione in cells was not straightforward, because glutathione taken orally cannot work for the same reason that diabetic patients cannot take insulin by mouth: it would be digested before reaching the cells. Providing glutathione in the blood also cannot correct the deficiency, because every cell makes its own.

"Glutathione is a small protein made of three building blocks: the amino acids cysteine, glycine and glutamic acid. We found that people with glutathione deficiency also were deficient in cysteine and glycine, but not glutamic acid," Sekhar said. "We then tested whether restoring deficient glutathione precursors would help cells replenish their glutathione. But there's another catch: because cysteine cannot be given as such, we had to supplement it in another form called N-acetylcysteine."

In past studies, Sekhar and his colleagues determined that supplementing GlyNAC, a combination of glycine and N-acetylcysteine, corrected glutathione deficiency inside the cells of naturally aged mice to the levels found in younger mice. Interestingly, the levels of glutathione and mitochondrial function, which were lower in older mice before taking GlyNAC, and oxidative stress, which was higher before GlyNAC, also were comparable to those found in younger mice after taking GlyNAC for six weeks.

The same results were observed in a small study in older humans who had high oxidative stress and glutathione deficiency inside cells. In this case, taking GlyNAC by mouth for two weeks corrected the glutathione deficiency and lowered both oxidative stress and insulin resistance (a prediabetic risk factor).

In past clinical trials, Sekhar provided GlyNAC to small groups of people to correct a nutritional deficiency, and produced encouraging evidence supporting further studies of the value of this approach to restoring mitochondrial function in clinical trials.

Improving premature aging in people with HIV

In the current study, Sekhar and his colleagues conducted an open-label clinical trial that included six men and two women with HIV, and eight age-, gender- and body mass index-matched uninfected controls, all between 45 and 60 years old. The people with HIV were on stable antiretroviral therapy and had not been hospitalized for six months prior to the study.

Before taking GlyNAC, the group with HIV, compared with the controls, was deficient in glutathione and had multiple conditions associated with premature aging, including higher oxidative stress; mitochondrial dysfunction; higher inflammation; endothelial dysfunction; insulin resistance; more damage to genes; lower muscle strength; increased belly fat; and impaired cognition and memory.

The results are encouraging. GlyNAC supplementation for 12 weeks improved all the deficiencies indicated above. Some of the improvements declined eight weeks after stopping GlyNAC.

"It was exciting to see so many new beneficial effects of GlyNAC that have never been described before. Some of the most encouraging findings included reversal of some measures of cognitive decline, a significant condition in people with HIV, and also improved physical strength and other hallmark defects," Sekhar said.

"It was encouraging to see that GlyNAC can reverse many of these hallmark defects in people with HIV as there is no current treatment known to reverse these abnormalities. Our findings could have implications beyond HIV and need further investigation," Sekhar said.

Overall, these findings in HIV patients provide proof-of-concept that dietary supplementation of GlyNAC improves multiple hallmarks of aging and that glutathione deficiency and oxidative stress could contribute to them.

Encouraged by these results, Sekhar has continued his investigations by testing the value of GlyNAC supplementation for improving the health of the growing older population, and has completed an open-label trial, and another NIH-funded, double-blind, placebo-controlled trial in older adults.

"The results from these recently completed trials support the findings of the HIV study," said Sekhar, who is currently the Principal Investigator of two NIH-funded randomized clinical trials studying the effect of GlyNAC in older humans with mild cognitive impairment, and with Alzheimer's disease.

Credit: 
Baylor College of Medicine

Marriage or not? Rituals help dating couples decide relationship future

URBANA, Ill. - Rituals such as those centered around holidays and other celebrations play an important part in human relationships. When dating couples engage in rituals together, they learn more about each other. And those experiences can serve as diagnostic tools of where the relationship is going, a University of Illinois study shows.

"Rituals have the power to bond individuals and give us a preview into family life and couple life. We found they help magnify normative relationship experiences," says Chris Maniotes, graduate student in the Department of Human Development and Family Studies (HDFS) at U of I and lead author of the paper, published in Journal of Social and Personal Relationships.

Rituals are experiences that are shared with others, and they impact communication between individuals. While rituals are typically celebrations such as holidays, they can also be idiosyncratic events a couple creates, such as Friday movie night. Most rituals are recurring events, though some (such as rites of passage) occur just once in a person's life. Rituals have elements of routine, but they have symbolic meaning that goes beyond routine interaction.

"Rituals provide a unique time to review one's partner and relationship; you get to see a host of behaviors and interactions that might normally be obscured," Maniotes notes. "Some of the ways rituals affected commitment to wed with these couples was by altering their view of their partner, giving them a new perspective."

Maniotes and co-authors Brian Ogolsky and Jennifer Hardesty, researchers in HDFS, analyzed in-depth interviews with 48 individuals (24 couples) in the U.S. Southwest region. Respondents were on average 23 years old and had been in their relationship for 2.5 years. They were randomly selected from a larger study examining commitment to wed in heterosexual dating couples over a period of nine months.

For this study, the researchers looked at the impact of rituals. They found commitment to wed could increase or decrease, depending on the nature of the interaction. Rituals can reinforce bonds and strengthen commitment, but they can also showcase conflict areas and make people less likely to see the relationship heading towards marriage.

For example, holiday celebrations involving rituals could highlight interactions with extended family and provide a window into how people navigate through conflict.

"Rituals seem to really play a role in pausing and slowing down individuals, helping them take a better look at their relationship. They help them see, 'this is who we are as a couple; this is who we are as a family,'" Maniotes explains.

Rituals may not be the defining driver of where a relationship is going, but along with a constellation of other experiences and behaviors, they bring up important nuances that affect couples' decisions about whether to wed.

Couples who are dating can benefit from understanding how rituals affect their relationship. That's even more important under current COVID-19 restrictions, when rituals we used to take for granted are less predictable, Maniotes says.

"Just recognizing the importance of rituals in our lives, and the magnitude of the role they play, can help us integrate them in an intentional way," he concludes.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Study reveals most effective drugs for common type of neuropathic pain

image: Richard Barohn, MD, is the lead researcher and executive vice chancellor for health affairs at the University of Missouri

Image: 
Justin Kelley

More than 20 million people in the U.S. suffer from neuropathic pain. At least 25% of those cases are classified as unexplained and considered cryptogenic sensory polyneuropathy (CSPN). Little evidence exists to guide a physician's drug choices for treating CSPN, so a researcher from the University of Missouri School of Medicine and MU Health Care led a first-of-its-kind prospective comparative effectiveness study.

The study compared four drugs with different mechanisms of action in a large group of patients with CSPN to determine which drugs are most useful for this condition. The study involved 40 sites and enrolled 402 patients with diagnosed CSPN who were 30 years or older and reported a pain score of four or greater on a 10-point scale.

Participants were prescribed one of four medications commonly used to treat CSPN: nortriptyline, a tricyclic antidepressant; duloxetine, a serotonin-norepinephrine reuptake inhibitor; pregabalin, an anti-seizure drug; or mexiletine, an anti-arrhythmic medication. Patients took the prescribed treatment for 12 weeks and were evaluated at four, eight and 12 weeks. Any participant who reported at least a 50% reduction in pain was deemed to have had an efficacious result. Patients who discontinued the treatment drug because of adverse effects were also tracked.

"This study went beyond whether the drug reduced pain to also focus on adverse effects," said Richard Barohn, MD, lead researcher and executive vice chancellor for health affairs at the University of Missouri. "As the first study of its kind, we compared these four drugs in a real-life setting to provide physicians with a body of evidence to support the effective management of peripheral neuropathy and to support the need for newer and more effective drugs for neuropathic pain."

Nortriptyline had the highest efficacy rate (25%) and the second-lowest quit rate (38%), giving it the highest level of overall utility. Duloxetine had the second-highest efficacy rate (23%) and the lowest quit rate (37%). Pregabalin had the lowest efficacy rate (15%), and mexiletine had the highest quit rate (58%).
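As a rough illustration of how efficacy and tolerability trade off, the sketch below organizes the reported percentages and ranks the drugs by a naive utility score (efficacy minus quit rate). This scoring rule is purely our assumption for illustration, not the study's actual analysis, and figures the release does not report are left as None.

```python
# Outcomes reported in the press release (percentages).
# None marks values the release does not give.
results = {
    "nortriptyline": {"efficacy": 25, "quit": 38},
    "duloxetine":    {"efficacy": 23, "quit": 37},
    "pregabalin":    {"efficacy": 15, "quit": None},
    "mexiletine":    {"efficacy": None, "quit": 58},
}

def utility(d):
    """Naive illustrative score: efficacy minus quit rate.
    Returns None when either number is unreported."""
    if d["efficacy"] is None or d["quit"] is None:
        return None
    return d["efficacy"] - d["quit"]

scored = {drug: utility(v) for drug, v in results.items()}
# Rank only the drugs with both numbers reported.
ranked = sorted((d for d, s in scored.items() if s is not None),
                key=lambda d: scored[d], reverse=True)
print(ranked)  # ['nortriptyline', 'duloxetine']
```

Under this toy metric nortriptyline edges out duloxetine by one percentage point, which is consistent with the authors' conclusion that the two performed comparably and both should be considered first.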

"There was no clearly superior performing drug in the study," Barohn said. "However, of the four medications, nortriptyline and duloxetine performed better when efficacy and dropouts were both considered. Therefore, we recommend that either nortriptyline or duloxetine be considered before the other medications we tested."

There are other nonnarcotic drugs used to treat painful peripheral neuropathy, including gabapentin, venlafaxine and other sodium channel inhibitors. Barohn said additional comparative effectiveness research studies can be performed on those drugs, so doctors can further build a library of data for the treatment of CSPN. His goal is to build effectiveness data on nearly a dozen drugs for CSPN.

Credit: 
University of Missouri-Columbia

Low-metallicity globular star cluster challenges formation models

On the outskirts of the nearby Andromeda Galaxy, researchers have unexpectedly discovered a globular cluster (GC) - a massive congregation of relic stars - with a very low abundance of chemical elements heavier than hydrogen and helium (known as its metallicity), according to a new study. The GC, designated RBC EXT8, has 800 times lower abundance of these elements than the Sun, below a previously-observed limit, challenging the notion that massive GCs could not have formed at such low metallicities. GCs are dense, gravitationally bound collections of thousands to millions of ancient stars that orbit in the fringes of large galaxies; many GCs formed early in the history of the Universe. Because they contain some of the oldest stars in a galaxy, GCs provide astronomers with a record of early galaxy formation and evolution. The most metal-poor GCs have abundances about 300 times lower than the Sun and no GCs with metallicities below that value were previously known. This was thought to indicate a limit to metal content - a metallicity floor - that was required for GC formation; several mechanisms have been proposed to explain this limit. Søren Larsen and colleagues report the discovery of an extremely metal-deficient GC in the Andromeda Galaxy. Spectral analysis of RBC EXT8 shows that its metallicity is nearly three times lower than the most metal-poor clusters previously known, challenging the need for a metallicity floor. "Our finding shows that massive globular clusters could form in the early Universe out of gas that had only received a small 'sprinkling' of elements other than hydrogen and helium. This is surprising because this kind of pristine gas was thought to be associated with proto-galactic building blocks too small to form such massive star clusters," said Larsen.
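For readers used to astronomical notation, these abundance ratios translate into logarithmic metallicities. A minimal sketch, assuming the conventional definition [Fe/H] = log10(Z/Z_sun); the precise values quoted in the paper may differ slightly:

```python
import math

def fe_h(abundance_ratio_to_sun):
    # [Fe/H] = log10(Z / Z_sun): 0 is solar, -1 is one tenth solar, etc.
    return math.log10(abundance_ratio_to_sun)

ext8  = fe_h(1 / 800)   # RBC EXT8: 800x below solar -> about -2.9
floor = fe_h(1 / 300)   # previous most metal-poor clusters -> about -2.5

# EXT8's heavy-element abundance relative to the old "floor":
ratio = (1 / 300) / (1 / 800)  # ~2.7, i.e. nearly three times lower
```

So "800 times lower than the Sun" corresponds to [Fe/H] near -2.9, roughly 0.4 dex below the previously observed floor around -2.5.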

Credit: 
American Association for the Advancement of Science (AAAS)

A new approach boosts lithium-ion battery efficiency and puts out fires, too

image: Scientists at Stanford and SLAC redesigned current collectors - thin metal foils that distribute current to and from electrodes - to make lithium-ion batteries lighter, safer and more efficient. They replaced the all-copper collector, middle, with a layer of lightweight polymer coated in ultrathin copper (top right), and embedded fire retardant in the polymer layer to quench flames (bottom right). (Yusheng Ye/Stanford University)

Image: 
Yusheng Ye/Stanford University

In an entirely new approach to making lithium-ion batteries lighter, safer and more efficient, scientists at Stanford University and the Department of Energy's SLAC National Accelerator Laboratory have reengineered one of the heaviest battery components - sheets of copper or aluminum foil known as current collectors - so they weigh 80% less and immediately quench any fires that flare up.

If adopted, the researchers said, this technology could address two major goals of battery research: extending the driving range of electric vehicles and reducing the danger that laptops, cell phones and other devices will burst into flames. This is especially important when batteries are charged super-fast, creating more of the types of battery damage that can lead to fires.

The research team described their work Oct. 15 in Nature Energy.

"The current collector has always been considered dead weight, and until now it hasn't been successfully exploited to increase battery performance," said Yi Cui, a professor at SLAC and Stanford and investigator with the Stanford Institute for Materials and Energy Sciences (SIMES) who led the research.

"But in our study, making the collector 80% lighter increased the energy density of lithium-ion batteries - how much energy they can store in a given weight - by 16-26%. That's a big jump compared to the average 3% increase achieved in recent years."

Desperately seeking weight loss

Whether they come in the form of cylinders or pouches, lithium-ion batteries have two current collectors, one for each electrode. They distribute current flowing in or out of the electrode, and account for 15% to as much as 50% of the weight of some high-power or ultrathin batteries. Shaving a battery's weight is desirable in itself, enabling lighter devices and reducing the amount of weight electric vehicles have to lug around; storing more energy per given weight allows both devices and EVs to go longer between charges.
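The link between collector weight and energy density can be sketched with back-of-envelope arithmetic. Assuming the cell stores the same energy and only the collector mass changes (a simplification; the reported 16-26% gains also depend on cell chemistry and design), an 80% lighter collector raises Wh/kg as follows:

```python
def energy_density_gain(collector_mass_fraction, mass_cut=0.8):
    """Fractional increase in gravimetric energy density (Wh/kg) when
    collector mass drops by mass_cut, assuming stored energy is unchanged.
    This fixed-energy model is a simplifying assumption, not the paper's."""
    remaining_mass = 1 - mass_cut * collector_mass_fraction
    return 1 / remaining_mass - 1

# Collectors are said to be 15% to 50% of battery weight:
for f in (0.15, 0.30, 0.50):
    print(f"collector fraction {f:.0%}: ~{energy_density_gain(f):.0%} more Wh/kg")
```

For a collector that is 15% of cell mass this toy model gives roughly a 14% gain, rising steeply for thin, high-power designs where the collector fraction is larger, in line with the 16-26% range the team reports across battery types.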

Reducing battery weight and flammability could also have a big impact on recycling by making the transportation of recycled batteries less expensive, Cui said.

Researchers in the battery industry have been trying to reduce the weight of current collectors by making them thinner or more porous, but these attempts have had unwanted side effects, such as making batteries more fragile or chemically unstable or requiring more electrolyte, which raises the cost, said Yusheng Ye, a postdoctoral researcher in Cui's lab who carried out the experiments with visiting scholar Lien-Yang Chou.

As for the safety issue, he said, "People have also tried adding fire retardant to the battery electrolyte, which is the flammable part, but you can only add so much before it becomes viscous and no longer conducts ions well."

Designing a polymer-foil sandwich

After brainstorming the problem, Cui, Ye and graduate student Yayuan Liu designed experiments for making and testing current collectors based on a lightweight polymer called polyimide, which resists fire and stands up to the high temperatures created by fast battery charging. A fire retardant - triphenyl phosphate, or TPP - was embedded in the polymer, which was then coated on both surfaces with an ultrathin layer of copper. The copper would not only do its usual job of distributing current, but also protect the polymer and its fire retardant.

Those changes reduced the weight of the current collector by 80% compared to today's versions, Ye said, which translates to an energy density increase of 16-26% in various types of batteries, and it conducts current just as well as regular collectors with no degradation.

When exposed to an open flame from a lighter, pouch batteries made with today's commercial current collectors caught fire and burned vigorously until all the electrolyte burned away, Ye said. But in batteries with the new flame-retardant collectors, the fire never really got going, producing very weak flames that went out within a few seconds and did not flare up again even when the scientists tried to relight them.

One of the big advantages of this approach, Cui said, is that the new collector should be easy to manufacture and also cheaper, because it replaces some of the copper with an inexpensive polymer. So scaling it up for commercial production, he said, "should be very doable." The researchers have applied for a patent through Stanford, and Cui said they will be contacting battery manufacturers to explore the possibilities.

Credit: 
DOE/SLAC National Accelerator Laboratory

Researchers question the existence of the social brain as a separate system

image: A sample Raven-like matrix problem. Participants have to fill in the lower right corner of the visual matrix with one of the four alternatives by figuring out the principle of the matrix organization. Item created by Maria Falikman.

Image: 
Shpurov IY, Vlasova RM, Rumshiskaya AD, Rozovskaya RI, Mershina EA, Sinitsyn VE and Pechenkova EV (2020) Neural Correlates of Group Versus Individual Problem Solving Revealed by fMRI. Front. Hum. Neurosci.

A team of Russian researchers with the participation of a leading researcher at HSE University, Ekaterina Pechenkova, found that during group problem solving the components of the social brain are co-activated, but they do not increase their coupling during cooperation as would be suggested for a holistic network. The study was published in Frontiers in Human Neuroscience.
https://www.frontiersin.org/articles/10.3389/fnhum.2020.00290/full
Social neuroscience studies examine the 'social brain', a hypothetical network of different areas of the brain responsible for interacting with other people.

In this field, the brain is most often studied while a person observes interactions between others without taking part in them. This is due to the complexity of conducting experiments involving active communication: modern brain-imaging equipment is not adapted to situations in which a person can move freely and talk during the scanning process.

Meanwhile, in experiments with passive observation of others, it is possible to detect only individual components of the social brain. It is not possible to view the network as a whole as it functions.

In order to better understand the network, researchers must be able to observe subjects as they engage in tasks that are more complex and closer to those they encounter in real life--such as group problem solving.

One such study was carried out by a team of Russian scientists from institutions including the Research Institute of Neuropsychology of Speech and Writing (Moscow) and the Federal Center of Treatment and Rehabilitation of the Russian Ministry of Health. The study involved 24 teams of 3 people, who were recruited from among Moscow players of the 'What? Where? When?' quiz game. In the study, participants had to solve Raven-like matrix problems both in small groups and individually.

The study lasted four hours and consisted of two parts that each contained 60 problems. When solving the problems as a group, each team completed both a practice session in a classroom and a scanning session, in which one subject was placed in an MRI scanner while their two teammates sat next to them on each side. Group and individual tasks alternated. The individual task was solved only by one team member, who was in the MRI machine in the second part of the experiment. The participants had about 40 seconds to solve each matrix. On average, teams solved 44 out of 60 problems correctly, while those working individually solved only 34 problems.

The study showed that when solving problems as a group, as opposed to individually, the social brain components described in the literature are additionally co-activated but not synchronized. In other words, they do not form a holistic, jointly working system. However, a readjustment occurs in the interaction between the brain's basic networks: the researchers observed decreasing connectivity between the language and the salience networks in the group vs. individual activity conditions. The scientists have yet to figure out why these particular networks are reconfigured during social interaction.

The components of the social brain include areas such as the medial prefrontal cortex, the pole of the temporal lobe, the temporoparietal junction, the precuneus, and other areas. Studies confirm that these zones are co-activated when various social stimuli are presented, such as photographs or descriptions of how people interact with others.

In addition, when people work together, structures of the so-called 'default mode' network become more involved. This observation may help explain why it is easier to solve problems in a group setting: less individual cognitive effort is required.

'Another explanation is that the default network is by no means passive and provides the processes necessary for cooperative communication,' says HSE researcher Ekaterina Pechenkova, one of the study authors. 'These supposed functions of the zones involved in the default network include, first of all, the ability to mentalize, or to understand that other people have thoughts and experiences, as well as to reflect on their own thoughts and feelings.'

The study involving players of the quiz game 'What? Where? When?' allowed the researchers to examine the work of the brain during social interaction from a different perspective, and it also demonstrates the possibility of using more complex problems that are closer to those encountered in real life activities for neuroscientific experiments.

Credit: 
National Research University Higher School of Economics

Artificial cyanobacterial biofilm can sustain green ethylene production for over a month

The great global challenges of our time, including climate change, energy security and scarcity of natural resources, promote a transition from the linear fossil-based economy to the sustainable bio-based circular economy. Taking this step requires further development of emerging technologies for production of renewable fuels and chemicals.

Photosynthetic microorganisms, such as cyanobacteria and algae, show a great potential for satisfying our demand for renewable chemicals and reducing the global dependence on fossil fuels. These microorganisms have the ability to utilise solar energy in converting CO2 into biomass and a variety of different energy-rich organic compounds. Cyanobacteria are also capable of holding novel synthetic production pathways that allow them to function as living cell factories for the production of targeted chemicals and fuels.

Ethylene is one of the most important organic commodity chemicals with an annual global demand of more than 150 million tons. It is the main building block in the production of plastics, fibres and other organic materials.

"In our research, we employed the genetically engineered cyanobacterium Synechocystis sp. PCC 6803 that expresses the ethylene-forming enzyme (EFE) acquired from the plant pathogen, Pseudomonas syringae. The presence of EFE in cyanobacterial cells enables them to produce ethylene using solar energy and CO2 from air," says Associate Professor Allahverdiyeva-Rinne.

Ethylene has a high energy density that makes it an attractive fuel source. Currently, ethylene is produced via steam cracking of fossil hydrocarbon feedstocks leading to a huge emission of CO2 into the environment. Therefore, it is important to develop green approaches for synthesising ethylene.

"Although very promising results have been reported on ethylene-producing recombinant cyanobacteria, the overall efficiency of the available photoproduction systems is still very low for industrial applications. The ethylene productivity of engineered cyanobacteria is the most critical variable for reducing the costs and improving efficiency," says Postdoctoral Researcher Sindhujaa Vajravel.

However, cyanobacteria have several limitations for efficient production, as they primarily accumulate biomass, not the desired products.

"They possess a giant photosynthetic light-harvesting antenna that leads to self-shading and limited light distribution in suspension cultures, which decreases productivity. The greatest limitation is that the production period of the cells is short, only a few days," explains Associate Professor Allahverdiyeva-Rinne.

To solve these two problems, the researchers entrapped ethylene-producing cyanobacterial cells within a thin-layer alginate polymer matrix. This approach strongly limits cell growth, thus directing an efficient flux of photosynthetic metabolites toward ethylene biosynthesis. It also improves light utilisation under low-light conditions and strongly promotes cell fitness. As a result, the artificial biofilms sustained photoproduction of ethylene for up to 40 days with a light-to-ethylene conversion efficiency 3.5-fold higher than in conventional suspension cultures.

These findings open up new possibilities for the further development of efficient solid-state photosynthetic cell factories for ethylene production and scaling up the process to the industrial level.

Credit: 
University of Turku