Anabolic androgenic steroids accelerate brain aging

Philadelphia, March 25, 2021 - Anabolic androgenic steroids (AAS), synthetic versions of the male sex hormone testosterone, are sometimes used as a medical treatment for hormone imbalance. But the vast majority of AAS use is aimed at enhancing athletic performance or building muscle, because AAS paired with strength training increase muscle mass and strength. AAS use is also known to have many side effects, ranging from acne to heart problems to increased aggression. A new study now suggests that AAS can also have deleterious effects on the brain, causing it to age prematurely.

The report appears in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, published by Elsevier.

"Anabolic steroid use has been associated with a range of medical and psychological side effects," said lead author, Astrid Bjørnebekk, PhD, Division of Mental Health and Addiction, Oslo University Hospital, Oslo, Norway. "However, since anabolic steroids have only been in the public domain for about 35 years, we are still in the early phase of appreciating the full scope of effects after prolonged use. The least studied effects are those that relate to the brain."

Steroid hormones readily enter the brain, and receptors for sex hormones are found throughout the brain. Because AAS are administered at much higher doses than those naturally found in the body, they could have a harmful impact on the brain, particularly over a long period of use. Previous studies have shown that AAS users performed worse on cognitive tests than non-users.

Dr. Bjørnebekk and colleagues performed magnetic resonance imaging (MRI) of the brains of 130 male weightlifters with a history of prolonged AAS use and of 99 weightlifters who had never used AAS. Using a dataset compiled from nearly 2,000 healthy males aged 18 to 92 years, the researchers applied machine learning to determine the predicted brain age of each participant and then computed the brain age gap: the difference between each participant's chronological age and their predicted brain age. Advanced brain age is associated with impaired cognitive performance and increased risk for neurodegenerative diseases.
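
In outline, the brain-age approach trains a regression model to predict chronological age from brain imaging features in healthy reference subjects, then applies it to study participants. Below is a minimal sketch in Python with stand-in random data and a generic regressor; the study's actual MRI features and learning algorithm are not specified here.

```python
# Minimal sketch of the brain-age-gap idea with stand-in data; the study's
# actual MRI features and learning algorithm may differ.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical MRI-derived features for ~2,000 healthy reference males
# aged 18-92 (e.g., regional volumes and thicknesses).
X_ref = rng.normal(size=(2000, 100))
age_ref = rng.uniform(18, 92, size=2000)

# Train a regressor to predict chronological age from brain features.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_ref, age_ref)

# Apply it to the study participants (130 AAS users, 99 non-users).
X_part = rng.normal(size=(229, 100))
age_part = rng.uniform(18, 60, size=229)

# Brain age gap: predicted brain age minus chronological age;
# a positive gap indicates an "older-looking" brain.
brain_age_gap = model.predict(X_part) - age_part
```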

Not surprisingly, AAS users had a bigger brain age gap compared to non-users. Those with dependence on AAS, or with a longer history of use, showed accelerated brain aging. The researchers accounted for use of other substances and for depression in the men, which did not explain the difference between the groups.

"This important study shows in a large sample that use is associated with deviant brain aging, with a potential impact on quality of life in older age. The findings could be directly useful for health care professionals, and may potentially have preventive implications, where brain effects are also included into the risk assessment for young men wondering whether to use anabolic steroids," added Dr. Bjørnebekk.

Cameron Carter, MD, editor of Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, said of the study: "The results of this brain imaging study should be of concern for athletes using anabolic steroids for performance enhancement and suggest that the adverse effects on behavior and cognition previously shown to be associated with long-term use are the result of effects on the brain in the form of accelerated brain aging."

Credit: 
Elsevier

Deforestation, forest conversion and palm oil plantations linked to disease outbreaks

Deforestation, certain types of reforestation and commercial palm plantations correlate with increasing outbreaks of infectious disease, shows a new study in Frontiers in Veterinary Science. This study offers a first global look at how changes in forest cover potentially contribute to vector-borne diseases--such as those carried by mosquitos and ticks--as well as zoonotic diseases, like Covid-19, which jumped from an animal species into humans. The expansion of palm oil plantations in particular corresponded to significant rises in vector-borne disease infections.

"We don't yet know the precise ecological mechanisms at play, but we hypothesize that plantations, such as oil palm, develop at the expense of natural wooded areas, and reforestation is mainly monospecific forest made at the expense of grasslands," says lead author Dr Serge Morand, who holds joint positions at the Centre National de la Recherche Scientifique (CNRS) and the French Agricultural Research Centre for International Development (CIRAD), and at Kasetsart University in Thailand". "Both land use changes are characterized by loss of biodiversity and these simplified habitats favor animal reservoirs and vectors of diseases."

Land use and disease outbreaks

Deforestation is widely recognized to negatively impact biodiversity, the climate and human health generally. Deforestation in Brazil has already been linked to malaria epidemics, but the global consequences of deforestation and forest cover changes on human health and epidemics have not been studied in detail.

To better understand these effects, Morand and his colleague looked at changes in forest cover around the world between 1990 and 2016. They then compared these results to the local population densities and outbreaks of vector-borne and zoonotic diseases. They also specifically looked at reforestation and afforestation--which included conversion of natural grasslands and abandonment of agricultural land. Several prior studies had claimed that both afforestation and palm oil plantations likely play a role in further spreading disease vectors.
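
As a schematic of that comparison, the snippet below correlates country-level forest-cover change with outbreak counts using made-up numbers and a simple rank correlation; the published analysis works with real global time series and considerably more elaborate statistics.

```python
# Schematic country-level comparison with made-up numbers; the published
# study uses real forest-cover and outbreak time series (1990-2016)
# and more elaborate statistics.
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "country": ["A", "B", "C", "D", "E"],
    "forest_cover_change_pct": [-12.0, -8.5, -1.2, 3.4, 6.1],
    "outbreaks_per_million": [9.1, 7.4, 3.2, 4.0, 5.5],
})

rho, p = spearmanr(df["forest_cover_change_pct"],
                   df["outbreaks_per_million"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```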

Confirming past hypotheses, they found that both deforestation and afforestation correlated significantly with disease outbreaks. They found a strong association between deforestation and epidemics (such as malaria and Ebola) in tropical countries like Brazil, Peru, Bolivia, the Democratic Republic of Congo, Cameroon, Indonesia, Myanmar and Malaysia. In contrast, temperate regions like the USA, China and Europe showed clear links between afforestation activities and vector-borne diseases like Lyme disease.

Their approach did not distinguish between different types of reforestation activities, but they did find a significant increase in disease outbreaks in countries with expanding palm oil plantations. This was especially striking in regions of China and Thailand, where there was relatively little deforestation. These areas appeared particularly susceptible to mosquito-borne diseases like dengue, Zika and yellow fever.

Healthy forests for a healthy planet

These results suggest that careful forest management is a critical component in preventing future epidemics. Commercial plantations, land abandonment and grassland conversion to forests are all potentially detrimental, and they are no substitute for preserving the world's existing forests.

"We hope that these results will help policymakers recognize that forests contribute to a healthy planet and people, and that governing bodies need to avoid afforestation and agricultural conversion of grasslands," says Morand. "We'd also like to encourage research into how healthy forests regulate diseases, which may help better manage forested and planted areas by considering their multidimensional values for local communities, conservation and mitigation of climate change."

Credit: 
Frontiers

Meta-analysis shows children prefer people who speak like them

Research shows that children prefer to befriend, listen to, and imitate people who speak similarly to them. While most of this research has been conducted on monolingual (speaking only one language) children from Western societies, a growing subset of research has begun examining whether this pattern holds for children from more diverse linguistic and cultural backgrounds. A new meta-analysis including studies with monolingual as well as bilingual children helps to shed light on the range of factors that contribute to the development of linguistic-based biases in early childhood. Understanding these patterns can eventually guide efforts to diminish biases based on how one speaks.

The findings were published in a Child Development article written by researchers at the University of Queensland in Australia.

"Consistent with prior research, we found that infants and children overall prefer those who speak in the same accent, dialect or language as themselves," said Jessica Spence, doctoral candidate, The University of Queensland. "Interestingly, we also found that bilingual children and children exposed to other accents, dialects and languages displayed just as much preference for speakers of their own linguistic variety--if not more--than monolingual children and those who were not exposed to other ways of speaking."

The meta-analysis began with a literature search to find all the studies that have examined children's linguistic-based social preferences and found 38 studies published between 1980 and 2020. The studies involved 2,680 infants and children ranging from 2 days old to 11 years of age with the following sample characteristics:

13 different countries (Australia, Brazil, Britain, Canada, France, Germany, Israel, Japan, Netherlands, Singapore, South Africa, Spain, and United States),

Diverse range of socio-economic backgrounds (e.g., working-, middle-, and upper-class),

Spoke 15 different primary languages (Basque, Cantonese, Dutch, English, French, German, Hawaiian Creole English, Hebrew, Japanese, Korean, Mandarin, Portuguese, Spanish, Tagalog, and Xhosa), and

Participants were identified by the studies as White, Mixed race, Asian descent, German, Japanese, White and Korean mix and Xhosa.

The meta-analysis examined the overall effect of linguistic cues on children's social preferences and also investigated how the mean age of the sample, bilingualism, exposure to non-native speech, cultural background, type of measure used to assess preferences (implicit, explicit behavioral, or explicit forced-choice) and type of linguistic cue (accent, dialect or language) influence children's linguistic-based social preferences. The findings suggest that cultural background did not impact children's preference for native over non-native speakers. Contrary to past assumptions, however, children raised in a diverse environment may have a greater awareness of linguistic-based group differences.
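
Meta-analyses like this one pool per-study effect sizes weighted by their precision. The sketch below shows the standard DerSimonian-Laird random-effects procedure on made-up numbers; the authors' actual models pool the 38 real studies and add the moderators listed above.

```python
# Random-effects meta-analysis (DerSimonian-Laird) on made-up effect
# sizes; the published meta-analysis pools 38 real studies and adds
# moderator analyses.
import numpy as np

y = np.array([0.45, 0.30, 0.62, 0.18, 0.51])  # per-study effects (hypothetical)
v = np.array([0.02, 0.05, 0.04, 0.03, 0.06])  # per-study sampling variances

w = 1.0 / v                                   # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)               # heterogeneity statistic
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)       # between-study variance

w_re = 1.0 / (v + tau2)                       # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {pooled:.2f} +/- {1.96 * se:.2f} (95% CI half-width)")
```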

The authors recognize that while the findings show that infants and children generally favor native speakers over non-native speakers, the question of why they display this preference remains. "As the world becomes more globalized, it is more important than ever to consider how exposure to diversity can promote acceptance rather than amplify intergroup biases," says Kana Imuta, Assistant Professor, The University of Queensland.

Credit: 
Society for Research in Child Development

1°C of global warming causes a ~50% increase in population displacement risk

A new study shows that if the population were fixed at current levels, the risk of population displacement due to river floods would rise by ~50% for each degree of global warming. However, if population increases are taken into account, the relative global flood displacement risk is significantly higher.
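
One way to read these numbers: if the ~50% per-degree increase is taken to compound multiplicatively (an interpretive assumption, not a calculation from the paper), the fixed-population risk scales roughly as

```latex
% Assumption: the ~50% per-degree increase compounds multiplicatively.
R(\Delta T) \approx R_0 \cdot 1.5^{\Delta T},
\qquad R(2\,^{\circ}\mathrm{C}) \approx 2.25\, R_0 ,
```

i.e. about a 125% increase at 2°C of warming before population growth is added, of the same order as the end-of-century projections reported below.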

The research, by an international team from Switzerland, Germany, and the Netherlands, used a global climate-, hydrology- and inundation-modelling chain, including multiple alternative climate and hydrological models, to quantify the effect of global warming on displacement risk for both current and projected future population distributions. Their results are published today in the IOP Publishing journal, Environmental Research Letters.

Since 2008, disasters caused by natural hazards have caused 288 million people to be displaced, three times the number displaced as a result of wars, conflicts and violence. Floods account for about half of all disaster displacements; floods alone have caused 63% more displacement than conflict and violence. As the authors of this research point out: "Displacement poses many hardships, which often fall most heavily on socio-economically vulnerable groups, who tend to live in more hazard-prone areas... displaced people face heightened risks to their physical and mental health, livelihoods, land tenancy, personal security, and many other aspects of their well-being." In addition to risk at the personal level, the authors explain: "From a long-term perspective, displaced people are among those most at risk of being "left behind" by economic development. According to the UN's secretary-general, displacement and climate change represent the key challenges we face as a global community."

The authors describe the rationale for their research as follows: "Because floods are a major driver of displacement and due to the fact that they are influenced by climate change, it is imperative that we have a better understanding of the future flood displacement risk and how climate change and demographic and socio-economic factors will influence it." Their study used scenario-based projections to estimate the changing trend of river flood displacements, both globally and at the regional level, capturing the long-term dynamics of climate change and socio-economic development. The authors also examined whether flood displacement risk is driven by climate change, by social economic development, or by both.

The results show that for most regions, both increased flooding and population growth contribute to an increased risk of displacement by river floods. If climate change is aligned to the Paris Agreement and scenarios of population change are used, "the globally averaged risk of people being displaced by river floods is projected to double (+110%) by the end of this century", but under 'business as usual' climate-change conditions, "this displacement risk is projected to increase by 350%." However, the risk of flood displacement can still be addressed and managed, e.g., by urban planning and protective infrastructure.

While the resolution of the global models is limited, the effect of global warming is robust across greenhouse gas concentration scenarios, climate models and hydrological models. These findings highlight the need for rapid action on both climate mitigation and adaptation agendas in order to reduce future risks to vulnerable populations.

Credit: 
IOP Publishing

Astronomers image magnetic fields at the edge of M87's black hole

image: The Event Horizon Telescope (EHT) collaboration, who produced the first ever image of a black hole, released in 2019, has today revealed a new view of the massive object at the centre of the Messier 87 (M87) galaxy: how it looks in polarised light. This is the first time astronomers have been able to measure polarisation, a signature of magnetic fields, this close to the edge of a black hole.

This image shows the polarised view of the black hole in M87. The lines mark the orientation of polarisation, which is related to the magnetic field around the shadow of the black hole.

Image: 
EHT Collaboration

The Event Horizon Telescope (EHT) collaboration, who produced the first ever image of a black hole, has today revealed a new view of the massive object at the centre of the Messier 87 (M87) galaxy: how it looks in polarised light. This is the first time astronomers have been able to measure polarisation, a signature of magnetic fields, this close to the edge of a black hole. The observations are key to explaining how the M87 galaxy, located 55 million light-years away, is able to launch energetic jets from its core.

"We are now seeing the next crucial piece of evidence to understand how magnetic fields behave around black holes, and how activity in this very compact region of space can drive powerful jets that extend far beyond the galaxy," says Monika Mo?cibrodzka, Coordinator of the EHT Polarimetry Working Group and Assistant Professor at Radboud University in the Netherlands.

On 10 April 2019, scientists released the first ever image of a black hole (https://www.eso.org/public/news/eso1907/), revealing a bright ring-like structure with a dark central region -- the black hole's shadow (https://www.eso.org/public/images/eso1907a/). Since then, the EHT collaboration has delved deeper into the data on the supermassive object at the heart of the M87 galaxy collected in 2017. They have discovered that a significant fraction of the light around the M87 black hole is polarised.

"This work is a major milestone: the polarisation of light carries information that allows us to better understand the physics behind the image we saw in April 2019, which was not possible before," explains Iván Martí-Vidal, also Coordinator of the EHT Polarimetry Working Group and GenT Distinguished Researcher at the University of Valencia, Spain. He adds that "unveiling this new polarised-light image required years of work due to the complex techniques involved in obtaining and analysing the data."

Light becomes polarised when it goes through certain filters, like the lenses of polarised sunglasses, or when it is emitted in hot regions of space where magnetic fields are present. In the same way that polarised sunglasses help us see better by reducing reflections and glare from bright surfaces, astronomers can sharpen their view of the region around the black hole by looking at how the light originating from it is polarised. Specifically, polarisation allows astronomers to map the magnetic field lines present at the inner edge of the black hole.

"The newly published polarised images are key to understanding how the magnetic field allows the black hole to 'eat' matter and launch powerful jets," says EHT collaboration member Andrew Chael, a NASA Hubble Fellow at the Princeton Center for Theoretical Science and the Princeton Gravity Initiative in the US.

The bright jets of energy and matter that emerge from M87's core (https://www.eso.org/public/images/eso1907c/) and extend at least 5000 light-years from its centre are one of the galaxy's most mysterious and energetic features. Most matter lying close to the edge of a black hole falls in. However, some of the surrounding particles escape moments before capture and are blown far out into space in the form of jets.

Astronomers have relied on different models of how matter behaves near the black hole to better understand this process. But they still don't know exactly how jets larger than the galaxy are launched from its central region, which is comparable in size to the Solar System, nor how exactly matter falls into the black hole. With the new EHT image of the black hole and its shadow in polarised light, astronomers managed for the first time to look into the region just outside the black hole where this interplay between matter flowing in and being ejected out is happening.

The observations provide new information about the structure of the magnetic fields just outside the black hole. The team found that only theoretical models featuring strongly magnetised gas can explain what they are seeing at the event horizon.

"The observations suggest that the magnetic fields at the black hole's edge are strong enough to push back on the hot gas and help it resist gravity's pull. Only the gas that slips through the field can spiral inwards to the event horizon," explains Jason Dexter, Assistant Professor at the University of Colorado Boulder, US, and Coordinator of the EHT Theory Working Group.

To observe the heart of the M87 galaxy, the collaboration linked eight telescopes around the world -- including the northern Chile-based Atacama Large Millimeter/submillimeter Array (ALMA, https://www.eso.org/public/teles-instr/alma/) and the Atacama Pathfinder EXperiment (APEX, https://www.eso.org/public/teles-instr/apex/), in which the European Southern Observatory (ESO) is a partner -- to create a virtual Earth-sized telescope, the EHT. The impressive resolution obtained with the EHT is equivalent to that needed to measure the length of a credit card on the surface of the Moon.
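
As a rough check on that comparison (assuming a standard 8.6 cm card and a mean Earth-Moon distance of about 3.84 × 10^8 m, figures not taken from the release), the card subtends

```latex
\theta \approx \frac{0.086\ \mathrm{m}}{3.84\times 10^{8}\ \mathrm{m}}
\approx 2.2\times 10^{-10}\ \mathrm{rad}
\approx 46\ \mu\mathrm{as},
```

within a small factor of the roughly 20-microarcsecond beam the EHT achieves at its 1.3 mm observing wavelength, so the comparison holds to order of magnitude.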

"With ALMA and APEX, which through their southern location enhance the image quality by adding geographical spread to the EHT network, European scientists were able to play a central role in the research," says Ciska Kemper, European ALMA Programme Scientist at ESO. "With its 66 antennas, ALMA dominates the overall signal collection in polarised light, while APEX has been essential for the calibration of the image."

"ALMA data were also crucial to calibrate, image and interpret the EHT observations, providing tight constraints on the theoretical models that explain how matter behaves near the black hole event horizon," adds Ciriaco Goddi, a scientist at Radboud University and Leiden Observatory, the Netherlands, who led an accompanying study (https://www.eso.org/public/archives/releases/sciencepapers/eso2105/eso2105c.pdf) that relied only on ALMA observations.

The EHT setup allowed the team to directly observe the black hole shadow and the ring of light around it, with the new polarised-light image clearly showing that the ring is magnetised. The results are published today in two separate papers in The Astrophysical Journal Letters by the EHT collaboration. The research involved over 300 researchers from multiple organisations and universities worldwide.

"The EHT is making rapid advancements, with technological upgrades being done to the network and new observatories being added. We expect future EHT observations to reveal more accurately the magnetic field structure around the black hole and to tell us more about the physics of the hot gas in this region," concludes EHT collaboration member Jongho Park, an East Asian Core Observatories Association Fellow at the Academia Sinica Institute of Astronomy and Astrophysics in Taipei.

Credit: 
ESO

Alzheimer's patients' cognition improves with Sargramostim (GM-CSF), new study shows

A new study suggests that Sargramostim, a medication often used to boost white blood cells after cancer treatments, is also effective in treating and improving memory in people with mild-to-moderate Alzheimer's disease. The medication is a natural human protein produced by recombinant DNA technology (yeast-derived rhu GM-CSF/Leukine®).

The study, from the University of Colorado Alzheimer's and Cognition Center at the University of Colorado Anschutz Medical Campus (CU Anschutz), presents evidence from their clinical trial that shows that Sargramostim may have both disease-modifying and cognition-enhancing activities in Alzheimer's disease patients. It was published online today by Alzheimer's & Dementia: Translational Research and Clinical Interventions, an open access journal of the Alzheimer's Association.

"The goal of the clinical trial was to examine the impact of a natural human protein called granulocyte-macrophage colony stimulating factor (GM-CSF) on people living with Alzheimer's disease. We tested GM-CSF because people with rheumatoid arthritis tend not to get Alzheimer's disease and we had previously found this protein, which is increased in the blood of people with rheumatoid arthritis, reduced amyloid deposition in Alzheimer's mice and returned their poor memory to normal after a few weeks of treatment. Thus, naturally increased levels of GM-CSF in people with rheumatoid arthritis may be one reason that they are protected from Alzheimer's disease," said Huntington Potter, PhD, director of the CU Alzheimer's and Cognition Center, who together with Jonathan Woodcock, Timothy Boyd and collaborators carried out the new trial.

"Human GM-CSF is the active compound in the known human drug Sargramostim, and we are the first to study its effect on people with Alzheimer's disease."

GM-CSF/Sargramostim is used to stimulate the bone marrow to make more white blood cells of a particular kind called macrophages and granulocytes, as well as progenitor cells that repair blood vessels. These white blood cells circulate throughout the body and remove cells, bacteria and amyloid deposits that aren't supposed to be there, as well as promoting repair to damaged blood vessels and to the brain.

The researchers carried out a randomized, double-blind, placebo-controlled phase II trial to test the safety and efficacy of Sargramostim treatment in participants with mild-to-moderate Alzheimer's disease. Study participants who met eligibility criteria were randomized to receive injections of either Sargramostim (20 participants took a standard FDA dosage of 250 mcg/m2/day by subcutaneous injection, five days a week for three weeks) or placebo (20 participants took saline on the same schedule). Most of the participants were recruited and treated at CU Anschutz, with a few from the University of South Florida.

The CU Anschutz researchers then conducted multiple neurological, neuropsychological, cell, cytokine, Alzheimer's pathology biomarker and neuroimaging assessments.

They found that short-term Sargramostim treatment increased innate and other immune cells, modulated cytokine measures, and was safe and well-tolerated by participants. They also found that cognition and memory improved by almost two points on the 30-point Mini-Mental State Exam. Measures of blood biomarkers of Alzheimer's disease--brain amyloid, tangles, and neurodegeneration--all improved toward normal.

"These results suggest that short-term Sargramostim treatment leads to innate immune system activation, cognition and memory improvement, and partial normalization of blood measures of amyloid and tau pathology and neuronal damage in participants with mild-to-moderate Alzheimer's disease," said Potter.

"This surprising finding that stimulating the innate immune system and modulating inflammation may be a new treatment approach and induced us to start a larger trial of Sargramostim in Alzheimer's disease with more participants treated over a longer time."

Credit: 
University of Colorado Anschutz Medical Campus

News media keeps pressing the mute button on women's sports

The talented athletes are there. The cheering fans are there. But the media? It's nowhere to be found.

This is the reality of women's sports, which continue to be almost entirely excluded from television news and sports highlights shows, according to a USC/Purdue University study published on March 24th in Communication & Sport.

The survey of men's and women's sports news coverage has been conducted every five years since 1989. In the latest study, researchers found that 95% of total television coverage, as well as the ESPN sports highlights show SportsCenter, focused on men's sports in 2019. They saw a similar lopsidedness in social media posts and online sports newsletter coverage, which were included in the report for the first time since researchers began gathering data three decades ago.

"One and done": why media coverage matters

The study documented a few bright spots for women's sports, including increasing live televised coverage and prominent news outlets like the Los Angeles Times devoting more resources to women's sports.

But the coverage of women's sports hasn't increased in terms of television news and highlights shows, the more critical components of the "larger media apparatus" that helps create audiences for sports, said report author Michael Messner.

"News media focus on the 'big three' men's sports --football, basketball and baseball--creating audience knowledge about and excitement for the same sporting events over and over," explained Messner, a professor of sociology and gender studies at the USC Dornsife College of Letters, Arts and Sciences. "Meanwhile, women's sports continue to get short shrift, which is significant when you consider the larger picture of girls' and women's efforts to achieve equal opportunities, resources, pay and respect in sports."

Messner and study co-author Cheryl Cooky of Purdue University say this "missing piece" of media coverage is stunting the growth of audience interest in and excitement for women's sports.

"Eighty percent of the news and highlights programs in our study devoted zero time for women's sports," said Cooky, a professor of American studies and women's, gender and sexuality studies. "On the rare broadcast when a women's sports story does appear, it is usually a case of 'one and done' -- a single women's sports story partially eclipsed by a cluster of men's stories that precede it, follow it and are longer in length."

To ensure their data sample included various sports seasons, the authors analyzed three two-week blocks of televised news on three Los Angeles network affiliates in March, July and November 2019, along with three weeks of the one-hour SportsCenter program. For the first time in the study's 30-year history, they also included online daily sports newsletters from networks including NBC, CBS and ESPN, and their associated Twitter accounts.

The inclusion of online coverage was partly in response to assurances by some in the media industry that more coverage of women's sports could be found in online and social media coverage. But the more expansive analysis revealed only slightly more coverage of women's sports, which made up 9% of online newsletter content and 10% of Twitter posts.

"Considering the ease of posting content and the relative lack of production and budgetary constraints when compared to TV news, we anticipated more coverage of women's sports in online and social media spaces," Cooky said. "We were surprised by how little coverage we found. There just isn't a compelling excuse for that absence."

TV news and highlights shows give perfunctory attention to women athletes

Messner and Cooky found that even the tiny percentage of TV news coverage devoted to women's sports in 2019 was inflated due to a burst of coverage of the U.S. soccer team's victory in the Women's World Cup and, to a lesser extent, the U.S. women's tennis competitors at Wimbledon.

When they subtracted just the Women's World Cup coverage from the total, the local television affiliates' coverage of women's sports dropped from 5.1% to 4.0% and the airtime on ESPN's SportsCenter devoted to women's sports dropped from 5.4% to 3.5%, a proportion similar to prior iterations of the study.

It isn't just the quantity but also the quality of coverage that needs improvement, said the researchers, who have documented several dramatic shifts in the ways that women's sports are covered.

In the 1990s, they saw coverage that routinely trivialized, insulted and sexualized women athletes. By 2014, that frame had receded and in its place was an attempt at a more "respectful" framing of women's sports that was delivered in a "boring, inflection-free manner" the researchers call "gender-bland sexism."

In this recent study, Messner and Cooky looked more closely at the gender-bland pattern, finding that nationalism temporarily elevated successful women's sports into the news -- as in the Women's World Cup example -- but that there was little subsequent spillover into increased or higher-quality coverage of other women's sports.

Additionally, the charitable contributions of men's teams and athletes were frequently elevated in news and highlights shows, while women athletes' community contributions, including their social justice activism, almost never made the news. These omissions help paint women athletes with a more one-dimensional, gender-bland brush.

"Our analysis shows men's sports are the appetizer, the main course and the dessert, and if there's any mention of women's sports it comes across as begrudging 'eat your vegetables' without the kind of bells and whistles and excitement with which they describe men's sports and athletes," said Messner.

The authors acknowledge their analysis took place before the COVID-19 pandemic temporarily forced many athletes off the courts and out of stadiums in 2020. When they returned to the airwaves, ESPN increased its investment in the Women's National Basketball Association (WNBA) and tripled the number of games it aired across its networks, and the league made headlines for supporting the Black Lives Matter movement. The WNBA recently announced plans for its 25th anniversary this year, including celebrations highlighting the league's milestones.

Could this represent a turning point for women's sports? After 30 years of data demonstrating the intransigence of sports media, Messner and Cooky are skeptical.

"Before the pandemic, many fans took sports for granted; now it's clear how much we rely on sports for entertainment and as a form of escape," Cooky said. "While the WNBA successfully capitalized on the fans' and the media's hunger for sports over the summer, the media's effort to get back to what's considered 'normal' may once again eclipse women athletes. It remains to be seen whether this moment in history will force the media to reimagine its coverage of women's sports."

Credit: 
University of Southern California

Midlife loneliness is a risk factor for Dementia and Alzheimer's disease

(Boston)--Being persistently lonely during midlife (ages 45-64) appears to make people more likely to develop dementia and Alzheimer's disease (AD) later in life. However, people who recover from loneliness appear to be less likely to suffer from dementia than people who have never felt lonely.

Loneliness is a subjective feeling resulting from a perceived discrepancy between desired and actual social relationships. Although loneliness does not itself have the status of a clinical disease, it is associated with a range of negative health outcomes, including sleep disturbances, depressive symptoms, cognitive impairment, and stroke. Still, feeling lonely may happen to anyone at some point in life, especially under extreme circumstances that are not quickly resolved, such as the Covid-19 lockdowns. Yet people differ in how long--or how "persistently"--they feel lonely. Thus, it may be that people who recover from loneliness will experience different long-term consequences for their health than people who are lonely for many years.

In an effort to shed light on the relationship between these different forms of loneliness (transient and persistent loneliness) and the incidence of AD, researchers from Boston University School of Medicine (BUSM) examined data on cognitively normal adults from the Framingham Heart Study. Specifically, they investigated whether persistent loneliness more strongly predicted the future development of dementia and AD than transient loneliness. They also wanted to see whether this relationship was independent of depression and established genetic risk factors for AD, such as the Apolipoprotein ε4 (APOE ε4) allele.

After taking the effects of age, sex, education, social network, living alone, physical health and genetic risk into account, persistent loneliness was associated with a higher risk of dementia and AD onset 18 years later, whereas transient loneliness was linked to a lower risk, compared with no loneliness.

"Whereas persistent loneliness is a threat to brain health, psychological resilience following adverse life experiences may explain why transient loneliness is protective in the context of dementia onset," explained corresponding author Wendy Qiu, MD, PhD, professor of psychiatry and pharmacology & experimental therapeutics at BUSM. In light of the current pandemic, these findings raise hope for people who may suffer from loneliness now, but could overcome this feeling after some time, such as by using successful coping techniques or following a policy change in the physical distancing regulations.

According to the researchers, these results motivate further investigation of the factors that make individuals resilient against adverse life events, and they urge tailoring interventions to the right person at the right time to avert persistent loneliness, promote brain health and prevent AD.

Credit: 
Boston University School of Medicine

How blockchain and machine learning can deliver the promise of omnichannel marketing

Researchers from University of Minnesota, New York University, University of Pennsylvania, BI Norwegian Business School, University of Michigan, National Bureau of Economic Research, and University of North Carolina published a new paper in the Journal of Marketing that examines how advances in machine learning (ML) and blockchain can address inherent frictions in omnichannel marketing and raises many questions for practice and research.

The study, forthcoming in the Journal of Marketing, is titled "Informational Challenges in Omnichannel Marketing: Remedies and Future Research" and is authored by Koen Pauwels, Haitao (Tony) Cui, Catherine Tucker, Raghu Iyengar, S. Sriram, Anindya Ghose, Sriraman Venkataraman, and Hanna Halaburda.

In this new study in the Journal of Marketing, researchers define omnichannel marketing as the "synergistic management of all customer touch points and channels both internal and external to the firm that ensures that the customer experience across channels and firm-side marketing activity, including marketing-mix and marketing communication (owned, paid, and earned), is optimized."

Often viewed as the panacea for one-to-one marketing, omnichannel marketing faces frictions around data, marketing attribution, and consumer privacy. The research team demonstrates that advances in machine learning (ML) and blockchain can address these frictions. However, these technologies may in turn also present new challenges for firms and opportunities for academic research.

First, to fully realize the potential of omnichannel marketing, firms need information on all their interactions with each customer as they traverse the different stages of the customer journey. The study considers the whole gamut of interactions, such as communications between the firm and its customers, and activities where the customers interact with the firm (or its partners) across information gathering, purchases, product fulfillment, returns, and post-purchase service. Such data might not be readily available or usable.

Questions for future research include: How to decide which machine-learning methods are best and can impute missing pieces of information using data already available to the firm? What is the optimal design of matchmakers/platforms that collect information from different parties spanning different customer touch points? What is the impact of data sharing within and across firms on consumers (prices they pay), firms (supply-chain efficiency, profit margins), and policy makers (market structure, efficiency, and overall surplus)? How to incentivize internal and external partners to participate in the blockchains? And might blockchain-enabled omnichannel marketing efforts increase or soften competition?

Second, advances in attribution modeling have significantly improved firms' ability to assign credit to a specific marketing touchpoint. However, extant attribution models are limited either by an inability to attribute a transition to a single intervention, or by the presumption that the impact of the previous intervention stops with the next step within the purchase funnel and does not carry over to subsequent steps. Future research should also develop attribution models that combine micro and macro data, leveraging tried-and-tested methods in economics and marketing. Pauwels says that "We need more research that compares aggregate-level approaches using traditional attribution modeling with individual-level approaches and multi-touch attribution. It is useful to compare how extant attribution methods may be adapted to study forward-looking metrics such as customer lifetime value (CLV), which quantifies revenue streams a firm expects to earn after acquiring a customer." Finally, as omnichannel marketers adopt technologies like blockchain, firms will realize greater transparency and more reliable integration of consumer data across touchpoints within and outside of the firm. This naturally warrants a better understanding of how attribution effects change with and without blockchain-enabled marketing platforms.
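
To make the attribution problem concrete, here is a toy contrast of two simple rules over one hypothetical customer journey; the multi-touch models the authors discuss are far richer.

```python
# Toy contrast of two simple attribution rules for one hypothetical
# customer journey; the paper's multi-touch models are far richer.
journey = ["display_ad", "email", "search_ad", "purchase"]
touches = journey[:-1]

# Last-touch: all credit to the final pre-purchase touchpoint.
last_touch = {t: 1.0 if i == len(touches) - 1 else 0.0
              for i, t in enumerate(touches)}

# Linear: equal credit to every touchpoint on the path.
linear = {t: 1.0 / len(touches) for t in touches}

print("last-touch:", last_touch)  # all credit goes to search_ad
print("linear:    ", linear)      # each touch gets 1/3
```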

Third, consumer privacy is advanced by regulation, customer empowerment, and blockchain guarantees. Still, there are several questions about how to improve consumer privacy. Is it possible to use predictive analytics in a manner that is conscious of consumers' likely privacy preferences? Moreover, is there a way of emulating existing blockchain-based ecosystems in an omnichannel context? For example, can a firm use blockchain to create a token that establishes a currency that allows consumers to be rewarded for sharing their data as a part of an omnichannel marketing effort? And more ambitiously, is there a way that multiple firms can coordinate around a single-token-based scheme to help kickstart a larger ecosystem?

How successful are ad-tech initiatives that have helped omnichannel marketers become privacy-regulation compliant? Are they inherently just a cost that interrupts the accurate processing of information or are there benefits in terms of enhanced consumer trust? Academic-firm partnerships can assess the usefulness of such tools for firms, consumers, and regulatory compliance - as well as make recommendations for improvements.

Recent developments in federated learning aim to provide privacy controls; however, there remains room for indirect leakage of consumer information. These leakages can stem from loopholes in collaborative machine-learning systems, whereby an adversarial participant can infer membership as well as properties associated with a subset of the training data. In a blockchained federated-learning architecture, the local-learning model updates are exchanged and verified by leveraging a blockchain. Might such developments temper privacy concerns and lead to more efficient omnichannel marketing programs?

Finally, public policy has thus far focused on the deleterious effects of machine-learning-induced algorithmic biases, such as racial or gender discrimination. Cui explains that "There is scant research or policy looking at the use of personal information in algorithms. For example, does greater transparency into the customer's path-to-purchase journey, even with explicit customer consent, result in the unintended consequence of giving omnichannel firms room to price-discriminate efficiently, and in doing so, erode consumer welfare?"

Credit: 
American Marketing Association

Scientists improve a photosynthetic enzyme by adding fluorophores

image: Broadening the enzyme's band of harvestable light wavelengths is an important improvement given the extremely low energy density of sunlight.

Image: 
Takehisa Dewa from Nagoya Institute of Technology

Given the finite nature of fossil fuel reserves and the devastating environmental impacts of relying on fossil fuels, the development of clean energy sources is among the most pressing challenges facing modern industrial civilization. Solar energy is an attractive clean energy option, but the widescale implementation of solar energy technologies will depend on the development of efficient ways of converting light energy into chemical energy.

Like many other research groups, the members of Professor Takehisa Dewa's research team at Nagoya Institute of Technology in Japan have turned to biological photosynthetic apparatuses, which are, in Prof. Dewa's words, both "a source of inspiration and a target to test ways of improving the efficiency of artificial systems." Specifically, they chose to focus on the purple photosynthetic bacterium Rhodopseudomonas palustris, which uses a light-harvesting 1-reaction center core complex (LH1-RC) to both capture light energy and convert it into chemical energy.

In their initial studies of R. palustris, Prof. Dewa's group quickly noted that the LH1-RC system has certain limitations, such as only being able to harvest light energy efficiently within a relatively narrow wavelength band due to its reliance on (bacterio)chlorophylls, a single light-harvesting organic pigment assembly (B875, named for its absorption maximum). To overcome this limitation, the researchers, in partnership with collaborators at Osaka University and Ritsumeikan University, experimented with covalently linking the LH1-RC system to a set of fluorophores (Alexa647, Alexa680, Alexa750, and ATTO647N). The results of their experiments appear in a paper published in a recent issue of the Journal of Photochemistry & Photobiology A: Chemistry.

Having synthesized their modified LH1-RC system, Prof. Dewa's team used a method called "femtosecond transient absorption spectroscopy" to confirm the presence of ultrafast "excitation energy" transfer from the fluorophores to the bacteriochlorophyll a pigments in the B875 assembly. They also confirmed the subsequent occurrence of "charge separation" reactions, a key step in energy harvesting. Unsurprisingly, the rate of excitation energy transfer increased with greater spectral overlap between the emission bands of the fluorophores and the absorption band of B875. Attaching the external light-harvesting fluorophores boosted the enzyme's maximum yield of charge separation and photocurrent generation activity on an electrode within an artificial lipid bilayer system.
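
The overlap dependence noted above is the hallmark of Förster-type resonance energy transfer. As a standard point of reference (not necessarily the kinetic model used in the paper), the Förster rate is

```latex
k_{\mathrm{ET}} = \frac{1}{\tau_D}\left(\frac{R_0}{r}\right)^{6},
\qquad R_0^{6} \propto \kappa^{2}\, \Phi_D\, J,
```

where J is the spectral overlap integral between donor emission and acceptor absorption: a larger overlap gives a larger Förster radius R_0 and hence a faster transfer rate, consistent with the trend the team observed.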

By introducing covalently linked fluorophores into a bacterial photosynthetic enzyme (as shown in Figure 1), Prof. Dewa's team succeeded in broadening the enzyme's band of harvestable light wavelengths. This is an important improvement given the extremely low energy density of sunlight. "This finding could pave the way to developing an efficient artificial photosynthesis system for solar energy conversion," notes Prof. Dewa. "Research on biohybrids should provide insights into the development of implementable energy conversion systems, thereby giving advanced modern civilization a practical option for accessing an inexhaustible supply of clean solar energy," he adds.

The energy conversion systems in question may take many forms, including various nanomaterials, such as quantum dots and nanocarbon materials, but a unifying feature will be the need for some way to harness a broad-spectrum light-harvesting apparatus to a photocurrent-generating apparatus, and the biohybrid-type system developed by Prof. Dewa's team provides a feasible means of addressing this need.

Credit: 
Nagoya Institute of Technology

Glycans are crucial in COVID-19 infection

A research group at the RIKEN Center for Computational Science (R-CCS) has found that glycans--sugar molecules--play an important role in the structural changes that take place when the virus which causes COVID-19 invades human cells. Their discovery, which was based on supercomputer-based simulations, could contribute to the molecular design of drugs for the prevention and treatment of COVID-19. The research was published in the Biophysical Journal.

When SARS-CoV-2--the coronavirus that causes COVID-19--invades a human cell, a spike protein on its surface binds to an enzyme called ACE2 on the surface of the cell. The spike protein consists of three polypeptide chains, and glycans--sugar molecules--are attached to the surface of the protein. Though these glycans are believed to be used to allow the proteins to recognize each other, it is also thought that viruses use them to evade attack by antibodies.

Structural analyses have shown that the spike proteins of SARS-CoV-2 have Down- and Up-form structures. These analyses have advanced our understanding of the three-dimensional structure of the spike proteins, but the detailed molecular structure of the highly fluctuating glycans is still not understood, and in fact the role of glycans in the process of cell invasion remains unclear.

To get a better understanding of their role, the research team led by Yuji Sugita of R-CCS conducted molecular dynamics simulations for the Down- and Up-form structures of the proteins, using two supercomputers--Fugaku at the R-CCS and Oakforest-PACS at the University of Tokyo. Using these powerful machines, they performed molecular dynamics simulations of the spike proteins at a timescale of 1 microsecond (one-millionth of a second).

From the calculations, they were able to identify specific glycan-attached amino acids in the spike protein that play an important role in stabilizing the structure of the receptor binding domain. Their results suggested that the conformational change to the Up-form structure is driven by electrostatic repulsion between the domains, and that glycans which stabilize the Down-form structure are dislodged and replaced by other glycans after the domains are displaced. The study thus provided new insights into how glycans help stabilize the dynamic structure of proteins.

According to Sugita, "We need to develop better preventives and therapeutics to bring the pandemic to an end. It would be very useful to be able to design drugs taking the structural changes of spike proteins into account, by stabilizing the Down-form or inhibiting the change to the Up-form, for example."

"Research projects like this," he adds, "show us how the new generation of powerful supercomputers will allow us to gain new insights into many phenomena by performing simulations at a level of detail that would have been impossible previously."

Credit: 
RIKEN

E. coli calculus: Bacteria find the derivative optimally

image: The University of Tokyo researchers use information theory to show that the accepted biochemical model of bacterial chemical sensing is mathematically equivalent to the optimal solution, with implications for microbiology and robotics

Image: 
Institute of Industrial Science, the University of Tokyo

Tokyo, Japan - Scientists from the Graduate School of Information Science and Technology at The University of Tokyo calculated the efficiency of the sensory network that bacteria use to move towards food and found it to be optimal from an information theory standpoint. This work may lead to a better understanding of bacterial behavior and sensory networks.

Despite being single-celled organisms, bacteria such as E. coli can perform some impressive feats of sensing and adaptation in constantly changing environmental conditions. For example, these bacteria can sense the presence of a chemical gradient indicating the direction of food and move towards it. This process is called chemotaxis, and it has been shown to be remarkably efficient, both for its high sensitivity to tiny changes in concentration and for its ability to adapt to background levels. However, the question of whether this is the best possible sensing system that can exist in noisy environments, or a suboptimal evolutionary compromise, has not been determined.

Now, researchers from The University of Tokyo have shown that the standard model biologists use to describe bacterial chemotaxis is, in fact, mathematically equivalent to the optimal dynamics. In this framework, receptors on the surface of the bacterium can be modulated by the presence of the target molecules, but this signaling network can be affected by random noise. Once the bacteria determine if they are swimming towards or away from the food, they can adjust their swimming behavior accordingly.

"E. coli can either move in a straight line or randomly reorient via tumbling. By reducing the frequency of tumbling when it senses a positive attractant concentration gradient, the bacterium can preferentially move toward the food," first author Kento Nakamura says.

Using nonlinear filtering theory, a branch of information theory that deals with continuously updating estimates from a stream of noisy real-time observations, the scientists showed that the system used by bacteria is indeed optimal.

"We find that the best possible noise-filtering system coincides with the biochemical model of the E. coli sensory system," explains senior author Tetsuya J. Kobayashi.

The findings of this research may also be applied to sensory systems in other organisms, such as G protein-coupled receptors used for vision. Since all living systems need to be able to sense and react to their environments, this project can help evaluate the efficiency of information filtering more generally.

Credit: 
Institute of Industrial Science, The University of Tokyo

New automated process makes nanofiber fabrication assessment 30% more accurate

Imbued with special electric, mechanical and other physical properties due to their tiny size, nanofibers are considered leading-edge technology in biomedical engineering, clean energy and water quality control, among other fields. Now, researchers in Italy and the UK have developed an automatic process to assess nanofiber fabrication quality, producing 30% more accurate results than currently used techniques.

Details were published in January 2021 in IEEE/CAA Journal of Automatica Sinica, a joint publication of the IEEE and the Chinese Association of Automation.

"In recent years, nanostructured materials have gained continuously growing interest both in scientific and industrial contexts, because of their research appeal and versatile applications," said paper author Cosimo Ieracitano, research fellow in the Neurolab Group, Department of Civil Engineering, Energy, Environment and Materials, University Mediterranea of Reggio Calabria. "Nanofiber applications success requires special care be paid to the quality of nanomaterial and the generation process."

Nanofibers are produced by applying a high voltage to a syringe containing a polymer solution and a spinning collector. The solution, driven by the electric charge, jets out onto the collector and forms nanofibers. For a product that requires uniformity - for example, a nanofiber scaffold intended for growing cells will produce uneven growth if it contains a lump or a hole, and may not grow any cells at all if it is covered by a film - the current production process is quite messy.

To prevent anomalies, technicians monitor fiber production using a scanning electron microscope that can precisely determine the topography of the fibers, as well as their composition, and then visually inspect the images. According to Ieracitano, this is a time-consuming process that depends on humans, who can become fatigued and make mistakes.

"In the production chain of nanomaterials, a crucial step is to practically implement automation in the defect-identification process to reduce the number of laboratory experiments and the burden of the experimentation phase," Ieracitano said.

The research team designed a two-part automatic process to assess whether nanofibers are homogeneous. An autoencoder, a type of machine-learning software, chops the scanning electron microscope images into smaller patches and translates them into a compact code. That code is rendered back into simpler versions of the original images, reducing the computing power required while still highlighting any anomalies. A second machine-learning stage then assesses each image, looking for structural flaws. If it finds one, it flags the nanofiber as defective.
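
Below is a minimal sketch of that two-stage idea with random stand-in data: an autoencoder learns to reconstruct patches of defect-free fibers, so anomalous patches reconstruct poorly. A simple reconstruction-error threshold stands in for the paper's second classifier stage, and all shapes and parameters are illustrative.

```python
# Sketch of the two-stage screening idea with random stand-in data; shapes,
# sizes and the final threshold rule are illustrative, and the paper pairs
# the autoencoder with a separate machine-learning classifier.
import numpy as np
from tensorflow.keras import layers, models

PATCH = 32 * 32  # flattened grey-scale SEM patches (hypothetical size)

# Autoencoder: compress each patch to a small code, then reconstruct it.
ae = models.Sequential([
    layers.Input(shape=(PATCH,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),      # compact code
    layers.Dense(128, activation="relu"),
    layers.Dense(PATCH, activation="sigmoid"),
])
ae.compile(optimizer="adam", loss="mse")

# Train on patches from defect-free fibers only.
clean = np.random.rand(512, PATCH).astype("float32")
ae.fit(clean, clean, epochs=5, batch_size=64, verbose=0)

# Anomalous patches reconstruct poorly, so their error stands out.
test = np.random.rand(8, PATCH).astype("float32")
err = np.mean((ae.predict(test, verbose=0) - test) ** 2, axis=1)
flagged = err > err.mean() + 3 * err.std()    # crude stand-in classifier
print("defective patch flags:", flagged)
```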

"Notably, the proposed system outperforms other standard machine-learning techniques, as well as other recent state-of-the art methods, reporting an accuracy of up to 92.5%," Ieracitano said. Currently used techniques are typically 64 to 66% accurate.

Credit: 
Chinese Association of Automation