Evidence that Earth's first cells could have made specialized compartments

image: "Protocells" containing bubble-like compartments formed spontaneously on a mineral-like and encapsulated fluorescent dye. This could have been what happened 3.8 billion years ago when cells first began to form.

Image: 
Image courtesy of Karolina Spustova.

New research from the University of Oslo provides evidence that the "protocells" that formed around 3.8 billion years ago, before bacteria and other single-celled organisms, could have had specialized bubble-like compartments that formed spontaneously, encapsulated small molecules, and formed "daughter" protocells.

ROCKVILLE, MD - Scientists have long speculated about the features that our long-ago single-celled ancestors might have had, and the order in which those features came about. Bubble-like compartments are a hallmark of the superkingdom to which we, and many other species including yeast, belong. But the cells in today's superkingdom have a host of specialized molecules that help make and shape these bubbles inside our cells. Scientists wondered what came first: the bubbles or the shaping molecules? New research by Karolina Spustova, a graduate student, and colleagues in the lab of Irep Gözen at the University of Oslo, shows that with just a few key pieces these little bubbles can form on their own, encapsulate molecules, and divide without help. Spustova will present her research, which was published in January, on Wednesday, February 24 at the 65th Annual Meeting of the Biophysical Society.

3.8 billion years ago is about when our long-ago single-celled ancestor came to be. It would have preceded not only the complex organisms of our superkingdom, but also the more basic bacteria. Whether this "protocell" had bubble-like compartments is a mystery. For a long time, scientists thought that these lipid bubbles were something that set our superkingdom apart from other organisms, like bacteria. Because of this, scientists thought that these compartments might have formed after bacteria came to exist. But recent research has shown that bacteria have specialized compartments too, which led Gözen's research team to wonder--could the protocell that came before bacteria and our ancestors have had them? And if so, how could they have formed?

The research team mixed the lipids that form modern cell compartments, called phospholipids, with water and put the mix on a mineral-like surface. They found that large bubbles spontaneously formed, and inside those bubbles were smaller ones. To test whether those compartments could encapsulate small molecules, as they would need to do to have specialized functions, the team added fluorescent dyes. They observed that these bubbles were able to take up and hold onto the dyes. They also saw instances where the bubbles split, leaving smaller "daughter" bubbles, which is "something like simple division of the first cells," Spustova says. All of this occurred without any molecular machines, like those we have in our cells, and without added energy.

The idea that this could have happened on Earth 3.8 billion years ago is not inconceivable. Gözen explained that water would have been plentiful, plus "silica and aluminum, which we used in our study, are present in natural rocks." Research shows that the phospholipid molecules could have been synthesized under early Earth conditions or reached Earth with meteorites. Gözen says, "these molecules are believed to have reached sufficient concentrations to form phospholipid compartments." So, it is possible that the ancient "protocell" that came before all the organisms currently on Earth had everything it needed for bubble-like compartments to form spontaneously.

Credit: 
Biophysical Society

Older women who ate more plant protein had lower risk of premature, dementia-related death

DALLAS, Feb. 24, 2021 -- Postmenopausal women who ate high levels of plant protein had lower risks of premature death, cardiovascular disease and dementia-related death compared with women who ate less plant protein, according to new research published today in the Journal of the American Heart Association, an open access journal of the American Heart Association.

Previous research has shown an association between diets high in red meat and cardiovascular disease risk, yet the data is sparse and inconclusive about specific types of proteins, the study authors say.

In this study, researchers analyzed data from more than 100,000 postmenopausal women (ages 50 to 79) who participated in the national Women's Health Initiative study between 1993 and 1998; they were followed through February 2017. At the time they enrolled in the study, participants completed questionnaires about their diet detailing how often they ate eggs, dairy, poultry, red meat, fish/shellfish and plant proteins such as tofu, nuts, beans and peas. During the study period, a total of 25,976 deaths occurred (6,993 deaths from cardiovascular disease; 7,516 deaths from cancer; and 2,734 deaths from dementia).

Researchers noted the levels and types of protein the women reported consuming and divided them into quintiles to compare those who ate the least and the most of each protein. The median percent intake of total energy from animal protein in this population was 7.5% in the lowest quintile and 16.0% in the highest quintile. The median percent intake of total energy from plant protein was 3.5% in the lowest quintile and 6.8% in the highest quintile.
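As a concrete illustration of this grouping step, here is a minimal sketch (not the study's code; the dataframe and its values are invented, chosen so the lowest and highest quintile medians match the plant-protein figures above):

```python
# Minimal sketch (invented data, not the study's code): assigning women to
# quintiles of percent energy from plant protein and summarizing each group.
import pandas as pd

df = pd.DataFrame({
    "participant_id": range(1, 11),
    "pct_energy_plant_protein": [3.2, 3.8, 4.2, 4.6, 5.0, 5.4, 5.8, 6.2, 6.5, 7.1],
})

# qcut splits the cohort into five equal-sized intake groups (Q1 lowest .. Q5 highest).
df["plant_protein_quintile"] = pd.qcut(
    df["pct_energy_plant_protein"], q=5, labels=["Q1", "Q2", "Q3", "Q4", "Q5"]
)

# Median intake per quintile; here Q1 -> 3.5% and Q5 -> 6.8% by construction,
# mirroring the medians reported in the release.
print(df.groupby("plant_protein_quintile", observed=True)["pct_energy_plant_protein"].median())
```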

Among the key findings:

Compared to postmenopausal women who had the least amount of plant protein intake, those with the highest amount of plant protein intake had a 9% lower risk of death from all causes, a 12% lower risk of death from cardiovascular disease and a 21% lower risk of dementia-related death.

Higher consumption of processed red meat was associated with a 20% higher risk of dying from dementia.

Higher consumption of unprocessed meat, eggs and dairy products was associated with a 12%, 24% and 11% higher risk of dying from cardiovascular disease, respectively.

Higher consumption of eggs was associated with a 10% higher risk of death due to cancer.

However, higher consumption of eggs was associated with a 14% lower risk of dying from dementia, while higher poultry consumption was associated with a 15% lower risk.

"It is unclear in our study why eggs were associated with a higher risk of cardiovascular and cancer death," said lead study author Wei Bao, M.D., Ph.D., an assistant professor of epidemiology at the University of Iowa in Iowa City. "It might be related to the way people cook and eat eggs. Eggs can be boiled, scrambled, poached, baked, basted, fried, shirred, coddled or pickled or in combinations with other foods. In the United States, people usually eat eggs in the form of fried eggs and often with other foods such as bacon. Although we have carefully accounted for many potential confounding factors in the analysis, it is still difficult to completely tease out whether eggs, other foods usually consumed with eggs, or even non-dietary factors related to egg consumption, may lead to the increased risk of cardiovascular and cancer death."

Researchers noted that substitution of total red meat, eggs or dairy products with nuts was associated with a 12% to 47% lower risk of death from all causes depending on the type of protein replaced with nuts.

"It is important to note that dietary proteins are not consumed in isolation, so the interpretation of these findings could be challenging and should be based on consideration of the overall diet including different cooking methods," said Yangbo Sun, M.D., Ph.D., co-author of the study, a postdoctoral research scholar at the University of Iowa in Iowa City and currently an assistant professor of epidemiology at the University of Tennessee Health Science Center.

The analysis also revealed that women who ate the highest amount of animal protein such as meat and dairy were more likely to be white and have a higher education and income, and they were more likely to be past smokers, drink more alcohol and be less physically active. Moreover, these women were more likely to have Type 2 diabetes at the start of the study, a family history of heart attacks and a higher body mass index -- all risk factors for cardiovascular disease.

"Our findings support the need to consider dietary protein sources in future dietary guidelines," said Bao. "Current dietary guidelines mainly focus on the total amount of protein, and our findings show that there may be different health influences associated with different types of protein foods."

The 2020-2025 Dietary Guidelines for Americans, jointly published by the U.S. Departments of Agriculture (USDA) and Health and Human Services (HHS), recommend eating a variety of protein foods: low-fat meat, low-fat poultry, eggs, seafood, beans, peas, lentils, nuts, seeds and soy products, including at least 8 ounces of cooked seafood per week.

The AHA's 2020 Dietary Cholesterol and Cardiovascular Risk advisory notes that given the relatively high content of cholesterol in egg yolks, it remains advisable to limit intake. Healthy individuals can include up to one whole egg or the equivalent daily.

The study had several limitations including that it was observational, based on self-reported data at the beginning of the study and lacked data on how the proteins were cooked. In addition, the findings may not apply to younger women or men.

Credit: 
American Heart Association

Self-monitoring using digital health tools is associated with weight loss

SILVER SPRING, Md.--A systematic review of multiple randomized controlled studies among adults with overweight or obesity showed that greater engagement in self-monitoring using digital health tools was associated with significant weight loss, according to a paper published online in Obesity, The Obesity Society's flagship journal. This is the first comprehensive systematic review to examine the relationship between digital self-monitoring and weight loss.

"Digital health tools have flourished in the past decade," said Michele L. Patel, PhD, post-doctoral research fellow, Stanford Prevention Research Center, Stanford University School of Medicine in Stanford, Calif. "What this paper sought out to explore was whether tracking via these digital tools is effective at producing greater weight loss." Patel is the corresponding author of the study.

Given the widespread prevalence of obesity, with rates of 42 percent among US adults and 13 percent worldwide, treatment options that have high efficacy, acceptability and reach are needed. As found in previous reviews, interventions using technology-based modalities, including SMS, apps, wearables and websites, often produced weight loss similar to or less than that of in-person interventions but better than that of control arms; however, these reviews did not focus on self-monitoring. The current research addresses this gap and contributes to the science of engagement in behavioral interventions.

Conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, the review included 39 randomized controlled studies of behavioral weight loss interventions for adults with overweight or obesity using digital health technologies for self-monitoring. Six databases--PubMed, EMBASE, Scopus, PsycInfo, CINAHL and ProQuest Dissertations & Theses--were searched for studies that included interventions 12 weeks or longer in duration, weight outcomes at six months or longer, and outcomes on self-monitoring engagement and its relationship to weight loss. The studies were published between January 2009 and September 2019.

Among the 67 interventions with digital self-monitoring, weight was tracked in 72 percent of them, diet in 81 percent and physical activity in 82 percent. Websites were the most common self-monitoring technology tools followed by apps, wearables, electronic scales and text messaging. No studies used social media platforms for self-monitoring.

Digital self-monitoring was linked to weight loss in 74 percent of occurrences. This pattern was found across all three major behaviors that are tracked (dietary intake, physical activity and body weight). Few interventions had digital self-monitoring engagement rates greater than 75 percent of days. Rates were higher in digital tools than in paper-based journals in 21 out of 34 comparisons. "This may be because many digital tools are highly portable, and therefore allow the user to track any time of the day; digital tools also may make tracking quicker, and may be less burdensome to use," said Patel.

"Given that previous reviews conducted before the emergence of these newer tools have established that self-monitoring also plays a key role in the maintenance of weight loss (i.e., preventing weight regain), a critical next step for our field is to examine how we can help sustain engagement with these tools longer term, after the initial novelty wears off," said Assistant Professor Kathryn M. Ross, PhD, MPH, Department of Clinical and Health Psychology, University of Florida, Gainesville. Ross was not associated with the research.

Credit: 
The Obesity Society

Sound-frequency map for inner ear created with advanced X-ray technology

image: Ear membrane and auditory nerve in the cochlea. The octave bands have been given different colours. Humans can perceive frequencies from 20 Hz (the top of the coil) to 20,000 Hz (the base of the coil). The image also shows the round window, the oval window where sound enters, and the facial nerve.

Image: 
Hao Li

Researchers at Uppsala University have created the first 3D map of the hearing nerve showing where the various sound frequencies are captured. Using what is known as synchrotron X-ray imaging, they were able to trace the fine nerve threads and the vibrating auditory organ, the cochlea, and find out exactly how the frequencies of incoming sound are distributed. The study is published in Scientific Reports.

"This can make treatment with cochlea implants for the hearing-impaired more effective," says Helge Rask-Andersen, Professor of Experimental Otology at Uppsala University.

Sound waves have differing frequencies - that is, the number of vibrations they make every second varies according to whether it is a high-pitched sound, which causes more vibrations per second, or a low-pitched one, which results in fewer. Frequency is measured in hertz (Hz), and the human ear can perceive frequencies of between 20 and 20,000 Hz.

When the sound waves are captured by the cochlea of the inner ear, fibrous connective tissue and sensory cells separate the various frequencies. High-frequency sounds reach the sound-sensitive hair cells in the lower part of the cochlea, while low-frequency sounds are absorbed in the corresponding way in the upper parts of the cochlea.

The researchers have now studied the details of this process, almost down to cell level. To do so, they used synchrotron X-rays, an advanced and powerful form of tomographic imaging. Since the radiation is too strong to be used on living human beings, donated ears from deceased people were investigated instead. This research has made it possible to work out the locations of the various frequencies in the cochlear nerve, and enabled the creation of a three-dimensional tonotopic frequency map.

"This kind of map is comparable to a piano, with the keys being analogous to all the similarly coded frequencies. Unlike the piano, which has 88 keys, we have about 3,400 internal auditory hair cells, all of which encode distinct frequencies. The hair cells are attached to a 34-millimetre-long basilar membrane, and are also tuned by 12,000 outer hair cells so that we can hear every volume level. This information is mediated to the brain via 30 000 precisely tuned fibres in our hearing nerve," Helge Rask-Andersen explains.

Human ear canals and nerves are not entirely uniform in appearance. The researchers therefore think the new knowledge may prove immensely important for people who, owing to grave hearing impairments, have cochlear implants (CIs) inserted. A CI is a hearing aid where one component is placed in the cochlea to provide direct electrical stimulation of the auditory nerve, while another part is attached to the outside of the skull. Showing exactly what the patient's cochlea looks like enables the technology to be individualised better and each area stimulated with the right frequency.

Credit: 
Uppsala University

„Fat jam" in the cell

In the journal Nature Communications, scientists of the German Center for Neurodegenerative Diseases (DZNE) report new findings on the mechanisms of "Niemann-Pick type C disease" (NPC). This rare brain disorder mainly manifests in childhood and includes severe neurological and psychiatric symptoms. Researchers led by Dr. Sabina Tahirovic have now found evidence that NPC is associated with neuroinflammation already at an early stage, and that this condition is triggered by impaired intracellular lipid transport. In addition, they identified pathological features in the blood of affected individuals that could in the future help to better monitor the course of the disease and the response to therapy.

NPC is a hereditary metabolic disorder and one of the rare diseases. In Germany, it is estimated to affect several hundred people. In these individuals, fat molecules - known as lipids - accumulate in the brain and other organs such as the liver. The consequences are severe: ranging from psychoses, epileptic seizures, and disturbances in movement and coordination to cognitive impairments and dementia. "NPC mainly manifests in childhood," said Dr. Sabina Tahirovic, a research group leader at the DZNE's Munich site. "Current therapies can alleviate symptoms somewhat, but cannot sustainably halt progression of the disease. Many patients affected by NPC die before they reach adulthood."

Early Inflammations

NPC is caused by defects in one of two genes: NPC1 and NPC2. Both are essential for lipid recycling, and the brain is particularly sensitive to such defects. In NPC, brain cells become overloaded with cholesterol and other lipids, leading to dysfunctions and, in the long term, to the death of neurons. The immune cells of the brain, the microglia, are also affected by the disease. Microglia trigger inflammatory processes; experts refer to this as "neuroinflammation". "Until now, the reaction of microglia was considered to happen at late disease stages. We have now found that neuroinflammation occurs ahead of neuronal loss," Tahirovic explained. "Thus, the inflammatory processes are not necessarily a response to neuronal damage, as anticipated. Inflammation starts before and appears to contribute to the disease progression." The findings of Tahirovic's team are based on studies in mice with a defective NPC1 gene. Notably, in humans, about 95 percent of NPC cases are due to flaws in this gene.

Out of Control

Microglia have a protective function that includes clearing cellular junk out of the way. However, the Munich scientists found that the immune cells behaved overly aggressively. "The microglia seemed out of control and more likely to do harm than good. In our experiments, they showed a tendency to over-ambitiously inhale cellular material," Tahirovic said. In their search for the causes of this misbehavior, the researchers took a closer look at the processes inside the cells: with surprising results. "Until now, it has been assumed that the accumulation of lipids in NPC impairs the degradation machinery. However, our studies point to transport problems. This means that the lipids could well be degraded, but on the way there, they get stuck in a molecular traffic jam," said the Munich researcher.

Blood Samples from Patients

In addition to these studies in mice, Tahirovic's team also examined blood samples from patients with NPC. Thanks to a collaboration with the Department of Neurology at the Ludwig-Maximilians-Universität München, the researchers were able to analyze blood from a total of seven patients. "This is quite a large number because NPC is so rare," Tahirovic pointed out. "Human microglia are difficult to access, as this would require taking brain tissue. That's why we looked at white blood cells - specifically at so-called macrophages, which are close relatives of microglia."

Potential Biomarkers

Indeed, numerous similarities were found between the macrophages of patients and the microglia of mice with NPC-like pathology - both in terms of molecular features and aggressive phagocytic behavior. "The macrophages appear to mirror key features of microglia. If they also respond in a similar way as microglia to therapies, they might be useful as biomarkers," Tahirovic said. "This would expand the existing toolkit. Because presently, monitoring disease progression of NPC and response to treatments is essentially limited to observing clinical symptoms."

Combined Therapies

Current approaches to treating NPC aim to reduce the amount of lipids in the cells. "Generally, this makes sense, because lipid overload is triggering the disease. However, our results emphasize the relevance of inflammatory processes. Therefore, the combination of lipid reduction and modulation of the immune response should also be considered in therapy development," said Tahirovic.

Credit: 
DZNE - German Center for Neurodegenerative Diseases

Most women receive inappropriate treatment for urinary tract infections

NEW YORK (February 24, 2021) -- Nearly half of women with uncomplicated urinary tract infections received the wrong antibiotics and almost three-quarters received prescriptions for longer than necessary, with inappropriately long treatment durations more common in rural areas, according to a study of private insurance claims data published today in Infection Control & Hospital Epidemiology, the journal of the Society for Healthcare Epidemiology of America.

"Inappropriate antibiotic prescriptions for uncomplicated urinary tract infections are prevalent and come with serious patient- and society-level consequences," said Anne Mobley Butler, PhD, lead author of the study and assistant professor of medicine and surgery at Washington University School of Medicine, St. Louis. "Our study findings underscore the need for antimicrobial stewardship interventions to improve outpatient antibiotic prescribing, particularly in rural settings."

Researchers studied insurance claims data for 670,400 women ages 18 to 44 who received an outpatient diagnosis of uncomplicated urinary tract infection between April 2011 and June 2015. They identified filled antibiotic prescriptions, assessed adherence to clinical guidelines, and compared rural and urban antibiotic usage patterns.

Rural patients were more likely than urban patients to receive a prescription for an inappropriately long duration of therapy, according to an analysis of geographic data from the claims database. While both inappropriate antibiotic choice and inappropriate prescription duration declined slightly over the study period, inappropriate prescriptions remained common, with 47% of prescriptions written for antibiotics outside guideline recommendations and 76% for an inappropriate duration, nearly all of which were longer than recommended.

"Accumulating evidence suggests that patients have better outcomes when we change prescribing from broad-acting to narrow-spectrum antibiotics and from longer to shorter durations," Butler said. "Promoting optimal antimicrobial use benefits the patient and society by preventing avoidable adverse events, microbiome disruption, and antibiotic-resistant infections."

Clinicians should periodically review clinical practice guidelines, even for common conditions, to determine the ideal antibiotic and treatment duration, Butler said. Auditing outpatient antibiotic prescribing patterns and providing periodic feedback to healthcare providers helps remind clinicians of best practices and improves antibiotic prescribing. However, additional research should be performed to understand and ultimately improve rural outpatient antibiotic prescribing practices for urinary tract infections and other common conditions.

One possible explanation for the study findings, which are consistent with other research reflecting rural disparities, is that rural providers may not be as aware of current antibiotic treatment guidelines. In addition, urban providers who treat rural patients may prescribe longer antibiotic durations because of distance-to-care barriers, in case symptoms persist. Further research is needed to identify the reasons for higher inappropriate prescribing in rural settings.

Credit: 
Society for Healthcare Epidemiology of America

Ancestry estimation perpetuates racism, white supremacy

BINGHAMTON, NY -- Ancestry estimation -- a method used by forensic anthropologists to determine ancestral origin by analyzing bone structures -- is rooted in "race science" and perpetuates white supremacy, according to a new paper by a forensic anthropologist at Binghamton University, State University of New York.

By themselves, bones seem somewhat uniform to the untrained eye. They lack the traits we so often use to categorize fellow humans: hair texture, the shape of nose and eye, skin pigmentation.

Forensic anthropologists know that race isn't based in biological fact, but in a history and culture that assigns meaning to physical traits that occur among different human populations. Why, then, are they still relying on a tool from the field's negative roots in "race science"?

Binghamton University Associate Professor of Anthropology Elizabeth DiGangi addresses this issue in a recent article in The American Journal of Physical Anthropology. Co-authored with Jonathan Bethard of the University of South Florida, "Uncloaking a Lost Cause: Decolonizing ancestry estimation in the United States" explores a practice that dates back to the very origins of forensic anthropology in the late 19th and early 20th centuries.

The field was initially created by anatomists who had human skeletons in their museums or medical schools; they began studying the bones to see what could be learned from their features. Ancestry estimation, which analyzes bone structures -- especially those in the face or skull -- to determine ancestral origin, was among the early developments.

However, the practice was originally anything but neutral: scientists used these features to classify races they had already arbitrarily defined, with the goal of proving the superiority of European men. It should be noted, DiGangi said, that these scientists were all European men themselves.

When forensic anthropology became formalized later in the 20th century, it kept the practice of ancestry estimation.

"Since the time of professionalization of the field in the late 1970s, we've just taken as fact that ancestry estimation could and should be done," she said.

Social vs. biological race

The categories we're all familiar with from census forms to employment applications -- African-American/Black, European-American/white, Asian-American and so forth -- are examples of social race. These categories are not only a human creation, but they have changed through the years based on government priorities and social sentiment. In the early 20th century, for example, Irish and Italian immigrants weren't considered white, although they are today.

"Biological race is the myth that there is something inherently biological about the differences between these constructed groups, that the human species is divided into races. This myth has been debunked for decades," DiGangi said. "The problem is that science was responsible for teaching the world that biological race was real, yet has not fully succeeded at rescinding it, explaining why we were wrong and atoning for the gross miscommunication."

These concepts can influence how we interpret otherwise neutral phenomena, such as bones. Like any other part of the body, bones have subtle variations from individual to individual, such as the precise location of a hole where a nerve passes through or a roughened area for a muscle attachment. Ancestry estimation particularly relies on skull features and the bones that make up the face, known as morphoscopic traits.

It has long been assumed that morphoscopic traits indicate a person's ancestry, and there has been some research into specific feature variations among different human groups. However, research has never determined the extent to which these features are inherited, making their connection to particular groups largely anecdotal, DiGangi explained. There are other problems, too: If you were to study whether these traits could be inherited, how do you determine the demarcation line between different groups?

In other words, ancestry estimation isn't grounded in good science.

Those defending its use, however, say that it's a needed tool. In the United States' complex system of death investigation, forensic anthropologists work alongside law enforcement when it comes to identifying human remains. The morphoscopic traits, dental traits and skull measurements that underpin ancestry estimation would be meaningless to investigators unless they can be mapped onto social racial categories.

But it's hard to say whether ancestry estimation really helps identify people, the authors point out. Estimates tend to rely on cases where a body is successfully identified -- and don't take the failures into account.

And then there is also the troublesome legacy of white supremacy that underpins policing in the United States. In the paper, the authors hypothesized that racial bias on the part of the investigators could lead to delayed or nonexistent identification for people of color, and issued an urgent call for research.

"People in the forensic sciences have a tendency to think that because we work for justice for victims, we are above the fray and racism is not applicable to us or the institutions we work for," DiGangi said. "As far as I'm concerned, it's well past time for a reality check."

Changing a culture of exclusion

Today, the discipline once created by white anatomists is called biological anthropology, partially to distinguish it from its earlier racist roots. We shouldn't forget that history, but instead "own it and actively atone for it, which includes ensuring that the discipline is more equitable and inclusive," DiGangi explained.

Biological anthropology has made some progress in this area, but forensic anthropology, a subset of that larger field, hasn't done the same.

Today, 87% of forensic anthropologists are white and DiGangi is a rarity. In fact, she's the only board-certified person who has identified as Black in the history of the American Board of Forensic Anthropology, which was established in 1977.

While diversity is sorely needed, it has to be more than just a buzz phrase. Concrete actions need to be taken not only on the board level, but in anthropology departments, student organizations, and undergraduate and graduate mentoring relationships, all of which lead future forensic anthropologists to the discipline.

These actions include increasing transparency and atoning for the past and present harms done to a variety of populations: people of color, women, the LGBTQ+ community and those who aren't able-bodied or neurotypical. One of those harms is a history of exclusion.

"Leadership may think that they are not exclusionary, but any organization whose membership consists overwhelmingly of white people is exclusionary, and the organization and its members have a responsibility to figure out the factors that have led to that and fix it," she said.

Organizations need specific policies and procedures to create a welcoming environment. Think of a typical summertime barbecue: no one is going to invite themselves in, especially if the other attendees don't look like them and the food and music are otherwise unfamiliar, DiGangi said. But if the barbecue attendees are welcoming, engage with that new individual, make adjustments to meet their needs and truly listen, the situation changes.

This isn't an issue unique to forensic anthropology.

"All of the sciences, and certainly the other forensic disciplines, need to face the issue of how racism and other forms of discrimination have been a key force in everything from our membership recruitment and retention to our methods and how we interpret the results," she said.

Credit: 
Binghamton University

Changes in writing style provide clues to group identity

Small changes to people's writing style can reveal which social group they "belong to" at a given moment, new research shows.

Groups are central to human identity, and most people are part of multiple groups based on shared interests or characteristics - ranging from local clubs to national identity.

When one of these group memberships becomes relevant in a particular situation, behaviour tends to follow the norms of this group so that people behave "appropriately".

The new study - by the University of Exeter, Imperial College London, University College London and Lancaster University - demonstrates that group normative behaviour is reflected in a person's writing style.

It also shows that assessing writing style can reveal - with an accuracy of about 70% - which of two group identities was influencing a person while they were writing a particular piece of text.

To demonstrate their method, researchers studied how people who are parents and feminists change their writing style when they move from one identity to another on anonymous online forums such as Reddit, Mumsnet and Netmums.

"People are not just one thing - we change who we are, our identity, from situation to situation," said Dr Miriam Koschate-Reis, of the Department of Psychology and the Institute for Data Science and Artificial Intelligence, both at the University of Exeter.

"In the current situation, many people will need to switch between being a parent and being an employee as they are trying to manage home schooling, childcare and work commitments.

"Switches between identities influence behaviour in multiple ways, and in our study we tracked which identity was active by focussing on language.

"We found that people not only change their writing style to impress their audience - they change it based on the group identity that is influencing them at the time.

"So, when we asked people in an experiment to think about themselves as a parent, their language patterns reflected this."

The study avoided "content" words (a parent might mention "childcare" for example) and focussed on stylistic patterns including use of pronouns, "intellectual" words and words expressing emotions.
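As an illustration of how such stylistic cues can drive a classifier, here is a minimal sketch; the toy texts, labels and marker-word list are invented, and this is not the authors' pipeline:

```python
# Illustrative sketch only (invented toy data, not the authors' method):
# classifying which group identity is active from function words such as
# pronouns, rather than from topical content words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "we really should all think about what matters here",
    "i feel so tired today but we managed the morning together",
    "one would argue the evidence clearly supports this claim",
    "honestly i think you should just trust your own feelings",
]
labels = ["identity_a", "identity_a", "identity_b", "identity_b"]

# Count only style markers; the token_pattern keeps one-letter words like "i".
style_markers = ["i", "we", "you", "one", "should", "would", "feel", "think"]
vectorizer = CountVectorizer(vocabulary=style_markers, token_pattern=r"(?u)\b\w+\b")
X = vectorizer.fit_transform(texts)

clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["we should think about this together"])))
```

A real study would of course train on far larger corpora and validate accuracy (the roughly 70% figure above) on held-out texts.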

Commenting on the possible uses of the new method, Dr Koschate-Reis said: "We are currently focussing on mental health.

"It is the first method that lets us study how people access different group identities outside the laboratory on a large scale, in a quantified way.

"For example, it gives us the opportunity to understand how people acquire new identities, such as becoming a first-time parent, and whether difficulties 'getting into' this identity may be linked to postnatal depression and anxiety.

"Our method could help to inform policies and interventions in this area, and in many others."

Group identities have been found to affect thoughts, emotions and behaviour in many settings - from work contexts to education to political activism.

Research is ongoing to understand how much control we have over switches between different identities - most of which are thought to be triggered by the social context.

Dr Koschate-Reis said it might be possible to manipulate the cues that trigger an identity switch by going to a location associated with the identity.

For example, students might find it easier to write in an "academic style" when they are in the library rather than the local coffee shop.

Credit: 
University of Exeter

How "ugly" labels can increase purchase of unattractive produce

Researchers from the University of British Columbia published a new paper in the Journal of Marketing that examines whether and how the use of 'ugly' labeling for unattractive produce increases sales and profit margins.

The study, forthcoming in the Journal of Marketing, is titled "From Waste to Taste: How "Ugly" Labels Can Increase Purchase of Unattractive Produce" and is authored by Siddhanth (Sid) Mookerjee, Yann Cornil, and JoAndrea Hoegg.

According to a recent report by the National Academies of Sciences, Engineering, and Medicine (2020), each year in the U.S. farmers throw away up to 30% of their crops, equal to 66.5 million tons of edible produce, due to cosmetic imperfections. Such food waste has detrimental consequences for the environment: 96% of wasted food is left to decompose in landfills, releasing methane and contributing to climate change. Additionally, 1.4 billion hectares of land and 25% of the world's fresh water are used to grow produce that will later be thrown away.

These researchers seek to answer two important questions: 1) Why do consumers reject unattractive produce? 2) Does 'ugly' labeling increase the purchase of unattractive produce and, if so, why does it work? They discover that consumers expect unattractive produce to be less tasty and, to a smaller extent, less healthy than attractive produce, which leads to its rejection. They also find that emphasizing aesthetic flaws via 'ugly' labeling (e.g., "Ugly Cucumbers") can increase the purchase of unattractive produce. This is because 'ugly' labeling points out the aesthetic flaw in the produce, making it clear to consumers that there are no other deficiencies in the produce other than attractiveness. Consumers may also reevaluate their reliance on visual appearance as a basis for judging the tastiness and healthiness of produce; 'ugly' labeling makes them aware of the limited nature of their spontaneous objection to unattractive produce.

The research studies the efficacy of 'ugly' labeling in various contexts. First, a field study shows the effectiveness of 'ugly' labeling. Mookerjee explains that "We sold both unattractive and attractive produce at a farmer's market and found that consumers were more likely to purchase unattractive produce over attractive produce when the unattractive produce was labeled 'ugly' compared to when unattractive produce was not labeled in any specific way. 'Ugly' labeling also generated greater profit margins relative to when unattractive produce was not labeled in any specific way--a great solution for sellers to make a profit while reducing food waste." In the second study, participants were told that they could win a lottery worth $30, and could keep all the cash or allocate some of the lottery earnings to purchase either a box of attractive produce or unattractive produce. 'Ugly' labeling increased the likelihood that consumers would use their lottery earnings to purchase a box of unattractive rather than attractive produce.

In Studies 3 and 4, 'ugly' labeling positively impacted taste and health expectations, which led to a higher likelihood of choosing unattractive produce over attractive produce. Study 5 considers how 'ugly' labeling might alter the effectiveness of price discounts. Typically, when retailers sell unattractive produce, they offer a discount of 20%-50%. Cornil says that "We show that 'ugly' labeling works best for moderate price discounts (i.e., 20%) rather than steep price discounts (i.e., 60%) because a large discount signals low quality, which nullifies the positive effect of the 'ugly' label." This suggests that by simply adding the 'ugly' label, retailers selling unattractive produce can reduce those discounts and increase profitability.

The last two studies demonstrate that 'ugly' labeling is more effective than another popular label, 'imperfect.' Although 'imperfect' is used by major brick-and-mortar and online retailers and was preferred by 50+ grocery store managers interviewed, the researchers find that 'ugly' labeling was more effective than 'imperfect' labeling at generating click-throughs in online ads.

Importantly, these findings largely contrast with managers' beliefs. "While grocery store managers believed in either not labeling unattractive produce in any specific way or using 'imperfect' labeling, we show that 'ugly' labeling is far more effective," says Hoegg. Given retailers' participation in the U.S. Food and Waste 2030 Champions Initiative--with an objective of cutting food waste in half by 2030 (Redman 2018)--this research urges retailers and sellers to use 'ugly' labeling to sell unattractive produce.

Credit: 
American Marketing Association

Oktoberfest memories increase life-satisfaction, customer loyalty

RICHLAND, Wash. - No one went to Oktoberfest in 2020, but chances are those who attended in the past are still thinking about it.

In a case study of the famous German beer festival, researchers tested the theory that events which create memorable experiences can increase life-satisfaction. This deep connection with customers has big benefits for associated businesses, according to Robert Harrington, lead author of the study recently published online in the International Journal of Contemporary Hospitality Management.

"If you can do something that transforms people even a little bit, it can have a huge impact on the success of your company and your brand," said Harrington, professor and director of the School of Hospitality Management at Washington State University Carson College of Business. "The more customers are delighted, the more likely they are to be return customers. They are also more likely to give positive recommendations to friends and relatives, and particularly on social media. In today's environment, people trust those reviews more than paid advertising."

For the study, the researchers surveyed more than 820 people attending a festival beer tent over several days of the 2018 Oktoberfest. The majority of the respondents were male (56.8%) and largely German, though roughly 12% were from outside the country, including from Italy and the United States. The respondents answered questions related to food and beverage quality, connectedness, experience uniqueness, memorability and life satisfaction.

When the researchers analyzed the relationship among those answers, they found that connectedness to Oktoberfest, such as feeling a close association with a particular beer tent or to Oktoberfest traditions, influenced impressions of food and beverage quality and the uniqueness of the experience. These in turn influenced how highly the participants felt that attending the event increased their overall satisfaction with life.

The researchers purposely chose to study the annual beer festival in Munich because it is so well-known and brings together a mix of tourism and hospitality services.

"Oktoberfest has a very strong brand. It's almost like a bucket-list event," said Harrington. "As a significant event in visitors' lives, there's a greater likelihood that there's a quantifiable measure of life satisfaction or sense of well-being from those experiences, as opposed to people going out to a neighborhood bar or restaurant."

Emulating Oktoberfest, which attracts more than 7 million visitors a year, may be a tall order, but the researchers suggest that other businesses can learn from its success. Breweries or wineries can create regional or local events on a smaller scale. Like Oktoberfest, these events could bundle goods and services, such as a tasting that pairs beer or wine with food or adding an experience like a music performance or art show.

The idea is to invite customers to participate in creating a memorable experience that lasts, Harrington said.

"Once people go back home, they will want to bring up that memory again," said Harrington. "They will go buy that beer because they went to a festival where they had a great time."

Credit: 
Washington State University

Southern California COVID-19 strain rapidly expands global reach

LOS ANGELES (Feb. 11, 2021) -- A new strain of the coronavirus in Southern California, first reported last month by Cedars-Sinai, is rapidly spreading across the country and around the world as travelers apparently carry the virus with them to a growing list of global destinations, according to new research published today in the peer-reviewed Journal of the American Medical Association (JAMA).

The strain, known as CAL.20C, was first observed in July 2020 in a single Los Angeles County case, as Cedars-Sinai earlier reported. It reemerged in October in Southern California and then quickly began spreading in November and December, corresponding with a regional surge in coronavirus cases during the holidays.

The strain now accounts for nearly half of current COVID-19 cases in Southern California - nearly double the percentage in the region compared to just a month ago, according to the Cedars-Sinai research.

While CAL.20C has expanded quickly through the local population, it also has spread to a total of 19 U.S. states plus Washington, D.C., and six foreign countries, the Cedars-Sinai investigators report.

The new JAMA study reported that as of Jan. 22, CAL.20C had been detected in the states of Alaska, Arizona, California, Connecticut, Georgia, Hawaii, Maryland, Michigan, New Mexico, Nevada, New York, Oregon, Rhode Island, South Carolina, Texas, Utah, Washington, Wisconsin and Wyoming, and in Washington, D.C. Abroad, it was found in Australia, Denmark, Israel, New Zealand, Singapore and the United Kingdom.

Travelers from Southern California appear to be carrying CAL.20C to other states and parts of the world, according to the JAMA study's co-senior author, Jasmine Plummer, PhD, a research scientist at the Cedars-Sinai Center for Bioinformatics and Functional Genomics and associate director of the Applied Genomics, Computation & Translational Core at Cedars-Sinai.

"CAL.20C is moving, and we think it is Californians who are moving it," Plummer said.

Los Angeles International Airport (LAX) has long been among the busiest travel hubs in the U.S., ranking No. 2 in total passengers boarded in 2019, according to the U.S. Department of Transportation. While air traffic across the U.S. has plummeted in the last year during the pandemic, about 2 million domestic and international passengers still traveled through LAX each month in November and December 2020.

LAX is a key U.S. gateway for a number of the foreign destinations, including Australia and New Zealand, where CAL.20C is now found. Several Western states that have the strain, including Arizona, Nevada and New Mexico, are popular vacation destinations for Southern Californians.

It is not clear whether CAL.20C might be more deadly than current coronavirus strains, or whether it might resist current vaccines. Cedars-Sinai investigators are conducting new, collaborative research to find the answers.

CAL.20C is defined by five recurring variants in the SARS-CoV-2 virus that causes COVID-19, including the L452R variant that was earlier reported by the California Department of Public Health. It is distinct from other strains present in the U.S., including the United Kingdom's B.1.1.7 and South Africa's B.1.351.

"New variants do not always affect the behavior of a virus in the body. But we are interested in the CAL.20C strain because three of its five variants involve the so-called spike protein, which enables the SARS-CoV-2 virus to invade and infect normal cells," said the JAMA study's other co-senior author, Eric Vail, MD, assistant professor of Pathology and director of Molecular Pathology in the Department of Pathology and Laboratory Medicine at Cedars-Sinai.

Although the pace of new coronavirus cases has slowed recently, Los Angeles County remains one of the nation's pandemic hotspots. Through Feb. 9, the county had reported more than 1 million COVID-19 cases and more than 18,000 deaths since the start of the pandemic.

Cedars-Sinai investigators are tracking the rise and spread of CAL.20C using an advanced technique known as next-generation sequencing to analyze the genes of the viruses. Their research draws on virus samples collected from Cedars-Sinai patients who tested positive for coronavirus, along with analysis of random samples of SARS-CoV-2 viruses in publicly available databases - to which Cedars-Sinai also contributes.

The JAMA study's other co-authors, all from Cedars-Sinai, include first author Wenjuan Zhang, PhD, assistant professor in the Department of Pathology and Laboratory Medicine; Brian Davis, BS, and Stephanie Chen, BS, both from the Center for Bioinformatics and Functional Genomics; and Jorge Sincuir Martinez from the Department of Pathology and Laboratory Medicine.

"This study is the latest Cedars-Sinai initiative to contribute a major finding to the international effort to address the COVID-19 pandemic," said Jeffrey A. Golden, MD, vice dean of Research and Graduate Education and director of the Burns and Allen Research Institute at Cedars-Sinai. "Among other accomplishments, our research teams have shown how the SARS-CoV-2 virus can directly attack the heart. They have uncovered immune dysfunction in coronavirus patients who develop respiratory failure and co-authored high-profile papers reporting results of nationwide clinical trials involving remdesivir and antibody treatments for COVID-19."

Credit: 
Cedars-Sinai Medical Center

Many genes associated with the risk of coronary artery disease act through the liver

image: In this study, promoter capture Hi-C, ChIP-Seq, STARR-Seq and CRISPR technologies were used to study the action of coronary artery disease risk variants in hepatocytes, the main cell type of the liver.

Image: 
UEF/Raija Törrönen.

According to a new study published in The American Journal of Human Genetics, more than one third of genetic variants that increase the risk of coronary artery disease regulate the expression of genes in the liver. These variants have an impact on the expression of genes regulating cholesterol metabolism, among other things. The findings provide valuable new insight into the genetics of coronary artery disease. The study was conducted in collaboration between the University of Eastern Finland, Kuopio University Hospital, the University of California Los Angeles, and the University of Arizona.

Coronary artery disease (CAD) and its most important complication, myocardial infarction (MI), are among the leading causes of death in the Western world. Both genetic and environmental factors contribute to the disease, and recent genome-wide association studies have identified approximately 200 risk loci for CAD. However, the vast majority of these variants are located in non-coding regions of the genome and have no known biological function. Although the functional characterization of such variants has been difficult in the past, new and advanced genomics techniques such as RNA-seq, ChIP-seq, STARR-seq and Hi-C, combined with computational analysis, now make it possible to understand the variants' functions.

The involvement of the liver in the progression of coronary artery disease is not completely understood. In the new study, the researchers show that over one third of risk variants for CAD are located in regulatory elements specific to the liver, and that they act to regulate the expression of genes implicated in traditional risk factors, such as glucose- and cholesterol-related traits.
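A figure like "over one third" comes from intersecting variant coordinates with annotated regulatory elements. The following is a minimal sketch of that interval-overlap logic with made-up coordinates; the actual study performs this genome-wide with real liver annotations:

```python
# Minimal sketch (made-up coordinates): what fraction of risk variants fall
# inside liver regulatory elements, computed by simple interval overlap.
variants = [("chr1", 1_200_350), ("chr1", 1_540_010), ("chr9", 22_098_600)]
liver_elements = [("chr1", 1_200_000, 1_201_000), ("chr9", 22_000_000, 22_050_000)]

def in_regulatory_element(chrom, pos):
    """True if the variant position lies inside any annotated element."""
    return any(c == chrom and start <= pos <= end for c, start, end in liver_elements)

hits = sum(in_regulatory_element(c, p) for c, p in variants)
print(f"{hits}/{len(variants)} variants overlap liver regulatory elements")
```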

"Our results not only confirm the correlation of cholesterol levels and the risk of coronary artery disease but also pinpoint for the first time the causal single nucleotide polymorphisms and the potential target genes that mediate the risk," Academy Research Fellow, Associate Professor Minna Kaikkonen-Määttä from the University of Eastern Finland says.

Another important finding was the discovery that risk variant-containing regulatory elements often seem to regulate many genes, not just one.

"Overall, our findings expand the list of genes and regulatory mechanisms acting in the liver and governing the risk of CAD development. Deciphering gene regulatory networks is becoming increasingly important in understanding disease mechanisms and developing next generation drug therapies," Associate Professor Kaikkonen-Määttä says.

Credit: 
University of Eastern Finland

Signal coupling between neuron-glia super-network may lead to improved memory formation

image: Optogenetic control of glial pH suppresses or enhances the glial release of glutamate

Image: 
Ko Matsui

Tohoku University scientists have shown that neuronal and glial circuits form a loosely coupled super-network within the brain. Activation of the metabotropic glutamate receptors in neurons was shown to be largely influenced by the state of the glial cells. Therefore, artificial control of the glial state could potentially be used to enhance the memory function of the brain.

The findings were detailed in the Journal of Physiology.

Although glial cells occupy more than half of the brain, they were long thought to act as glue--merely filling the gap between neurons. However, recent findings show that the concentration of intracellular ions in glia, such as calcium and protons, can fluctuate over time.

"Glial cells appear to have the capacity of coding information," says professor Ko Matsui of the Super-network Brain Physiology lab at Tohoku University, who led the research. "However, the role of the added layer of signals encoded in the glial circuit has always been an enigma."

Using patch clamp electrophysiology techniques in acute brain slices from mice, Dr. Kaoru Beppu, Matsui, and their team show that glial cells in the cerebellum react to the excitatory transmitter glutamate released from the synapses of neurons. The glial cells then release additional glutamate in return. These glial cells therefore effectively function as excitatory signal amplifiers.

The additional glutamate released from glial cells efficiently activates metabotropic glutamate receptors on Purkinje neurons--essential for cerebellar motor learning. The amount of feedforward excitation was controlled by the intracellular pH of the glial cells.
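To make the idea of pH-gated feedforward amplification concrete, here is a deliberately simple toy model; the functional form, the parameter values and even the sign of the pH dependence are assumptions for illustration, not results from the paper:

```python
# Toy model only (all numbers and the linear form are assumptions): total
# glutamate as neuronal release plus a glial contribution whose gain
# depends on intracellular glial pH.
def glial_gain(ph):
    """Assumed linear pH dependence around an arbitrary baseline of pH 7.2."""
    baseline_ph, slope = 7.2, 2.0
    return max(0.0, 1.0 + slope * (ph - baseline_ph))

def total_glutamate(synaptic_release, ph):
    """Neuronal release plus the glia's feedforward amplification."""
    return synaptic_release * (1.0 + glial_gain(ph))

for ph in (6.8, 7.2, 7.6):
    print(f"glial pH {ph}: total glutamate = {total_glutamate(1.0, ph):.2f}")
```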

"Depending on the state of our mind, the same experience could become a lasting memory or could fade away," says Matsui. "It is possible that the pH of the glial cells at the time of the experience could have a pivotal role on memory formation."

In this study, light-sensitive proteins were genetically expressed in glial cells to control their pH at will. Such optogenetics technology would be difficult to apply in human patients. "Although it would take a long time for clinical use, it is possible to imagine a future where a therapeutic strategy is designed to target glial cells to control their pH for memory enhancement to treat dementia," added Matsui.

Credit: 
Tohoku University

New study suggests supermassive black holes could form from dark matter

image: Artist's impression of a spiral galaxy embedded in a larger distribution of invisible dark matter, known as a dark matter halo (coloured in blue). Studies looking at the formation of dark matter haloes have suggested that each halo could harbour a very dense nucleus of dark matter, which may potentially mimic the effects of a central black hole, or eventually collapse to form one.

Image: 
ESO / L. Calçada

A new theoretical study has proposed a novel mechanism for the creation of supermassive black holes from dark matter. The international team find that rather than the conventional formation scenarios involving 'normal' matter, supermassive black holes could instead form directly from dark matter in high density regions in the centres of galaxies. The result has key implications for cosmology in the early Universe, and is published in Monthly Notices of the Royal Astronomical Society.

Exactly how supermassive black holes initially formed is one of the biggest problems in the study of galaxy evolution today. Supermassive black holes have been observed as early as 800 million years after the Big Bang, and how they could grow so quickly remains unexplained.

Standard formation models involve normal baryonic matter - the atoms and elements that make up stars, planets, and all visible objects - collapsing under gravity to form black holes, which then grow over time. However, the new work investigates the potential existence of stable galactic cores made of dark matter, surrounded by a diluted dark matter halo, finding that the centres of these structures could become so concentrated that they could also collapse into supermassive black holes once a critical threshold is reached.
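The release does not state the threshold itself. For orientation only: if the dark matter is fermionic, as in many such models, the conventional Oppenheimer-Volkoff-type estimate for the maximum mass a degenerate core can support before collapsing scales as

```latex
% Order-of-magnitude scaling (standard result, not a formula from the paper):
% maximum mass of a self-gravitating degenerate ball of fermions of mass m.
M_{\mathrm{cr}} \sim \frac{M_{\mathrm{Pl}}^{3}}{m^{2}},
\qquad M_{\mathrm{Pl}} = \sqrt{\frac{\hbar c}{G}}
```

so that lighter dark matter particles would permit much heavier cores, reaching the supermassive scale, before the critical threshold is crossed.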

According to the model this could have happened much more quickly than other proposed formation mechanisms, and would have allowed supermassive black holes in the early Universe to form before the galaxies they inhabit, contrary to current understanding.

Carlos R. Argüelles, the researcher at Universidad Nacional de La Plata and ICRANet who led the investigation comments: "This new formation scenario may offer a natural explanation for how supermassive black holes formed in the early Universe, without requiring prior star formation or needing to invoke seed black holes with unrealistic accretion rates."

Another intriguing consequence of the new model is that the critical mass for collapse into a black hole might not be reached for smaller dark matter halos, for example those surrounding some dwarf galaxies. The authors suggest that this then might leave smaller dwarf galaxies with a central dark matter nucleus rather than the expected black hole. Such a dark matter core could still mimic the gravitational signatures of a conventional central black hole, whilst the dark matter outer halo could also explain the observed galaxy rotation curves.

"This model shows how dark matter haloes could harbour dense concentrations at their centres, which may play a crucial role in helping to understand the formation of supermassive black holes," added Carlos.

"Here we've proven for the first time that such core-halo dark matter distributions can indeed form in a cosmological framework, and remain stable for the lifetime of the Universe."

The authors hope that further studies will shed more light on supermassive black hole formation in the very earliest days of our Universe, as well as investigating whether the centres of non-active galaxies, including our own Milky Way, may play host to these dense dark matter cores.

Credit: 
Royal Astronomical Society

World's first video of a space-time crystal

A German-Polish research team has succeeded in creating a micrometer-sized space-time crystal consisting of magnons at room temperature. With the help of the scanning transmission X-ray microscope Maxymus at Bessy II at Helmholtz Zentrum Berlin, they were able to film the recurring periodic magnetization structure in a crystal. Published in Physical Review Letters, the research project was a collaboration between scientists from the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, the Adam Mickiewicz University and the Polish Academy of Sciences in Poznań, Poland.

Order in space and a periodicity in time

A crystal is a solid whose atoms or molecules are regularly arranged in a particular structure. If one looks at the arrangement with a microscope, one discovers an atom or a molecule always at the same intervals. It is similar with space-time crystals: however, the recurring structure exists not only in space, but also in time. The smallest components are constantly in motion until, after a certain period, they arrange again into the original pattern.

In 2012, the Nobel Prize winner in physics Frank Wilczek proposed the idea of symmetry of matter in time. He is considered the discoverer of these so-called time crystals, although as a theorist he predicted them only hypothetically. Since then, several scientists have searched for materials in which the phenomenon could be observed. The fact that space-time crystals actually exist was first confirmed in 2017. However, the structures were only a few nanometers in size and formed only at very cold temperatures below minus 250 degrees Celsius. That the German-Polish scientists have now succeeded in imaging relatively large space-time crystals of a few micrometers in a video at room temperature is therefore considered groundbreaking - all the more so because they were able to show that their space-time crystal, which consists of magnons, can interact with other magnons that encounter it.

An exceptional experiment succeeded

"We took the regularly recurring pattern of magnons in space and time, sent more magnons in, and they eventually scattered. Thus, we were able to show that the time crystal can interact with other quasiparticles. No one has yet been able to show this directly in an experiment, let alone in a video," says Nick Träger, a doctoral student at Max Planck Institute for Intelligent Systems who, together with Pawel Gruszecki, is first author of the publication.

In their experiment, Gruszecki and Träger placed a strip of magnetic material on a microscopic antenna through which they sent a radio-frequency current. This microwave field triggered an oscillating magnetic field, a source of energy that stimulated the magnons in the strip - the quasiparticle of a spin wave. Magnetic waves migrated into the strip from left and right, spontaneously condensing into a recurring pattern in space and time. Unlike trivial standing waves, this pattern was formed before the two converging waves could even meet and interfere. The pattern, which regularly disappears and reappears on its own, must therefore be a quantum effect.

Gisela Schütz, Director at Max Planck Institute for Intelligent Systems who heads the Modern Magnetic Systems Department, points out the uniqueness of the X-ray camera: "Not only can it make the wavefronts visible with very high resolution, which is 20 times better than the best light microscope. It can even do so at up to 40 billion frames per second and with extremely high sensitivity to magnetic phenomena as well."

"We were able to show that such space-time crystals are much more robust and widespread than first thought," says Pawel Gruszecki, a scientist at the Faculty of Physics of the Adam Mickiewicz University in Pozna?. "Our crystal condenses at room temperature and particles can interact with it - unlike in an isolated system. Moreover, it has reached a size that could be used to do something with this magnonic space-time crystal. This may result in many potential applications."

Joachim Gräfe, former research group leader in the Department of Modern Magnetic Systems and last author of the publication, concludes: "Classical crystals have a very broad field of applications. Now, if crystals can interact not only in space but also in time, we add another dimension of possible applications. The potential for communication, radar or imaging technology is huge."

Credit: 
Max-Planck-Gesellschaft