Culture

Considering social influences across the customer journey

Researchers from Emory University, the University of Maryland, Vanderbilt University, and the Hong Kong University of Science and Technology have published a new paper in the Journal of Marketing that re-examines the classic customer journey model from a social perspective, emphasizing the social influences expected at each stage of the journey and across the journey stages.

The study, forthcoming in the Journal of Marketing, is titled "Traveling with Companions: The Social Customer Journey" and is authored by Ryan Hamilton, Rosellina Ferraro, Kelly Haws, and Anirban Mukhopadhyay.

Customer journey models go back more than 100 years to the earliest days of marketing as a discipline. These models break down the customer's path to purchase and beyond into discrete steps or stages and have proven remarkably useful to marketing academics and practitioners. These journey models vary a great deal in their specifics, but what nearly all previous incarnations of the customer journey have in common is the depiction of an individual journey. Hamilton explains, "While previous customer journey models have acknowledged the possible influence of social others on customers, our approach was unique because it fully integrates social influences. This social approach is especially relevant given the ways technology has facilitated more and different social influences throughout the customer journey."

The nature and type of social influences are varied. This article grapples with this diversity by organizing social influences on the customer journey along a social distance continuum. The researchers suggest that social distance comprises five primary dimensions: the number of social others, the extent to which the other is known, temporal and physical presence, group membership, and strength of ties. They also suggest that these dimensions converge to form a global sense of social distance, but that not all dimensions need to be at the extreme ends of the continuum for the social other to be interpreted as overall more proximal or distal. Rather, a preponderance of the factors will determine how close the social other is perceived to be.
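
The "preponderance of factors" logic can be sketched as a toy scoring rule. The dimension names follow the article, but the 0-1 scores and the simple averaging are purely illustrative assumptions, not part of the study:

```python
# Toy sketch of the social distance continuum described above.
# Dimension names follow the article; the numeric scores and the
# equal-weight average are illustrative assumptions only.

def social_distance(dimensions):
    """Average of per-dimension distance scores (0 = proximal, 1 = distal)."""
    return sum(dimensions.values()) / len(dimensions)

close_friend = {
    "number_of_others": 0.0,   # a single individual
    "known": 0.0,              # well known personally
    "presence": 0.0,           # physically and temporally present
    "group_membership": 0.0,   # in-group member
    "tie_strength": 0.1,       # strong ties
}

review_site = {
    "number_of_others": 1.0,   # many reviewers
    "known": 0.9,              # anonymous strangers
    "presence": 1.0,           # not present
    "group_membership": 0.8,   # no shared in-group
    "tie_strength": 1.0,       # weak or no ties
}

print(social_distance(close_friend))  # low -> perceived as proximal
print(social_distance(review_site))   # high -> perceived as distal
```

This mirrors the vacation-planning example: a close friend scores low (proximal) on most dimensions, while a review website scores high (distal), even though no single dimension alone decides the outcome.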

Distal social others can be larger groups or the whole of society, whose members might not be individuated, present, temporally proximal, or even known to the consumer. When a distal other is a single individual, it will tend to be someone the consumer does not know personally, such as a YouTube tutor or an anonymous review writer. For example, a vacation-planning consumer may be influenced by distal social others including the reviews on a travel website representing many, relatively unknown, non-physically present social others with only weak social ties and unlikely membership in a readily identifiable in-group.

Proximal social others are typically specific, individuated others who provide distinct, discrete, articulated inputs to the focal customer's journey. They tend to be close, in terms of temporal and physical proximity, members of the customer's in-group and have strong ties to the focal consumer. For example, the same vacation-planning consumer mentioned above may be influenced by inputs from a proximal social other, such as a single, close friend representing one, well-known, physically present, in-group member with strong social ties.

Perhaps the most fundamentally social journey is one wherein two or more consumers journey together. With respect to the social distance continuum, when a certain threshold is surpassed, social others may become incorporated into the decision-making unit (DMU) itself, creating a joint journey characterized by interdependence in most or all stages of the customer journey. This results in a pluralized DMU, where two or more people travel on a joint-decision, joint-consumption journey together. Decision making in such situations is qualitatively different because the members of the DMU have interdependent utilities and each member of the DMU may, at each stage of the journey, base his or her own responses on the responses of the other. Because of the relationship dynamics that must be managed, joint journeys are complex and distinct from individual journeys.

Key managerial issues across the entire social customer journey involve how and when to become involved in what might otherwise be consumer-to-consumer-only interactions. Considerations might include: when and how firms should respond to negative customer reviews or social media call-outs, when to highlight a social media influencer who is implicitly or explicitly endorsing one's product, how to manage "sponsored" blog posts, and when to provide "corrective" information when consumers are being exposed to unfavorable product information by their peers. Of particular interest to marketers, social influence may be used to nudge consumers from evaluation to decision. Technology has increased the number of opinions that bear upon a customer's journey and has even begun to insert itself as a decision support system wherein the customer and an artificial intelligence (AI)-enabled agent, such as a chatbot, together reach a final decision. Firms must carefully consider their usage of AI technologies, attending specifically to the social implications.

Credit: 
American Marketing Association

Supercomputer simulations present potential active substances against coronavirus

Several drugs approved for treating hepatitis C viral infection were identified as potential candidates against COVID-19, a new disease caused by the SARS-CoV-2 coronavirus. This is the result of research based on extensive calculations using the MOGON II supercomputer at Johannes Gutenberg University Mainz (JGU). One of the most powerful computers in the world, MOGON II is operated by JGU and the Helmholtz Institute Mainz. As the JGU researchers explained in their paper recently published at the World Health Organization (WHO) website, they had simulated the way that about 42,000 different substances listed in open databases bind to certain proteins of SARS-CoV-2 and thereby inhibit the penetration of the virus into the human body or its multiplication. "This computer simulation method is known as molecular docking and it has been recognized and used for years. It is much faster and less expensive than lab experiments," said Professor Thomas Efferth of the JGU Institute of Pharmacy and Biomedical Sciences, lead author of the study. "As far as we know, we were the first to have used molecular docking with SARS-CoV-2. And it is fantastic news that we have found a number of approved hepatitis C drugs as promising candidates for treatment."

Using the MOGON II supercomputer, the researchers made more than 30 billion single calculations within two months and found that compounds from the four hepatitis C drugs simeprevir, paritaprevir, grazoprevir, and velpatasvir bind strongly to SARS-CoV-2 and may therefore be able to prevent infection. "This is also supported by the fact that both SARS-CoV-2 and the hepatitis C virus are viruses of the same type, so-called single-stranded RNA viruses," explained Efferth. According to the researchers, a natural substance from the Japanese honeysuckle (Lonicera japonica), which has been used in Asia against various other diseases for some time now, might be another strong candidate against SARS-CoV-2. "Our research results now need to be checked in laboratory experiments and clinical studies," said Efferth, adding that molecular docking had already been used successfully in the search for active substances against the coronaviruses MERS-CoV and SARS-CoV.
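
As a rough illustration of how such a virtual screen is triaged, the snippet below ranks compounds by a docking score, where a more negative predicted binding energy (kcal/mol) means stronger binding. The numeric scores and the cutoff are invented placeholders for illustration, not values from the study:

```python
# Illustrative post-processing of a molecular docking screen:
# rank candidate compounds by predicted binding energy.
# Scores below are invented placeholders, NOT the study's results.

def rank_candidates(scores, cutoff=-8.0):
    """Return compounds scoring at or below the cutoff (kcal/mol), best first."""
    hits = [(name, s) for name, s in scores.items() if s <= cutoff]
    return sorted(hits, key=lambda pair: pair[1])

docking_scores = {  # hypothetical kcal/mol values
    "simeprevir": -9.2,
    "paritaprevir": -8.9,
    "grazoprevir": -8.6,
    "velpatasvir": -8.4,
    "weak_binder_X": -5.1,
}

for name, score in rank_candidates(docking_scores):
    print(f"{name}: {score} kcal/mol")
```

In a real screen of ~42,000 substances, this ranking step is what turns billions of pose calculations into a short list of candidates worth testing in the lab.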

Credit: 
Johannes Gutenberg Universitaet Mainz

Saving energy and lives: How a solar chimney can boost fire safety

video: In a world-first, RMIT University researchers have designed a solar chimney optimised for both energy saving and fire safety, as part of the sustainable features of a new building in Melbourne, Australia.

Image: 
RMIT University

A must-have in green building design, solar chimneys can slash energy costs by up to 50%. Now research reveals they could also help save lives in a building fire.

In a world-first, researchers designed a solar chimney optimised for both energy saving and fire safety, as part of the sustainable features of a new building in Melbourne, Australia.

Modelling shows the specially designed solar chimney radically increases the amount of time people have to escape the building during a fire - extending the safe evacuation time from about two minutes to over 14 minutes.

A solar chimney is a passive solar heating and cooling system that harnesses natural ventilation to regulate the temperature of a building.

With an estimated 19% of the world's energy resources going to heating, ventilating and cooling buildings, integrating solar chimneys into new builds and retrofitting them to existing structures offers great potential for reducing this massive environmental cost.

In the new project, a collaboration between RMIT University and the City of Kingston, researchers designed a solar chimney to maximise its efficiency for both ventilating fresh air and sucking smoke out of a building in case of fire.

Researcher Dr Long Shi said solar chimneys have well established environmental credentials, but their potential for improving fire safety had not been explored.

"In an emergency situation where every second counts, giving people more time to escape safely is critical," Shi said.

"Our research demonstrates that solar chimneys offer powerful benefits for both people's safety and the environment.

"Delivering on two important functions could boosts the already strong cost-effectiveness of this sustainable technology.

"We hope our findings will inspire more investment and development of solar chimneys in Australia, and around the world."

Kingston Mayor Georgina Oxley said Council was excited to be a part of the groundbreaking project.

"Creating new and innovative ways of reducing energy consumption in our building design is something that is a priority for Council," Oxley said.

"The solar-chimney that has been installed at the new state-of-the-art Mentone Reserve Pavilion not only allows us to harness clean green energy to heat and cool the building, helping Council achieve its environmental goals, but it also has the potential to save lives in the event of a fire. This is a truly remarkable design."

While calculations around the 6-fold increase in safe evacuation time were specific to the new building, previous research by the team from RMIT's School of Engineering has confirmed solar chimneys can successfully achieve both functions - ventilation and smoke exhaustion.

Hot air rises: how a solar chimney works

The passive design approach behind solar chimneys operates on the well-known principle that hot air always rises.

Modern solar chimneys usually feature a wall of glass next to a wall that is painted black, to maximise the absorption of solar radiation. Vents at the top and bottom control the airflow in and out of the chimney for heating or cooling.

As the sun warms the chimney, this heats the air inside it.

The hot air rises and is vented out of the top of the chimney, which draws more air in at the bottom, driving ventilation through a building to naturally cool it down.

When it's cold outside, the chimney can be closed, to direct the absorbed heat back into the building and keep it warm.

It's an ingeniously simple concept that is relatively cheap to retrofit and adds almost no extra cost to a new build, but can drive energy consumption down.
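
The "hot air rises" principle can be quantified with the standard stack-effect (buoyancy ventilation) estimate. The vent area, chimney height, and temperatures below are assumed example values, not the Mentone building's specifications:

```python
import math

def stack_airflow(area_m2, height_m, t_inside_k, t_outside_k, cd=0.6):
    """Volumetric airflow Q (m^3/s) driven by the stack effect.

    One common form of the estimate:
        Q = Cd * A * sqrt(2 * g * h * (Ti - To) / Ti)
    where Cd is a discharge coefficient, A the vent opening area,
    and h the height between inlet and outlet vents.
    """
    g = 9.81  # gravitational acceleration, m/s^2
    return cd * area_m2 * math.sqrt(
        2 * g * height_m * (t_inside_k - t_outside_k) / t_inside_k
    )

# Assumed example: 0.5 m^2 vents, 4 m chimney, 32 C inside the chimney, 20 C outside.
q = stack_airflow(0.5, 4.0, 305.0, 293.0)
print(f"{q:.2f} m^3/s")
```

The same relationship explains the fire-safety benefit: a fire raises the indoor-outdoor temperature difference, which increases the driving pressure and makes the chimney pull smoke out faster.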

Reducing smoke, increasing safety

During a fire, the same principle - hot air rises - enables the solar chimney to suck smoke out of the building.

Less smoke means better visibility, lower temperatures and reduced carbon monoxide - all of which contribute to increasing the amount of time people have to safely evacuate.

To understand exactly how much evacuation time a solar chimney could deliver for a specific building, you need to model for that exact design, Shi said.

"This will differ from building to building, but we know that any extra time is precious and improves fire safety, which could ultimately help to save lives," he said.

The new research offers a technical guide for optimising the design and engineering of solar chimneys in real buildings, to expand their application across the two functions.

Credit: 
RMIT University

New therapeutic targets for treating memory impairment in Down syndrome

Barcelona, 4th May, 2020. A team of researchers led by Dr. Victoria Puig from the Hospital del Mar Medical Research Institute (IMIM), which also involved the Centre for Genomic Regulation (CRG), has studied the neural basis of intellectual disability in mice with Down syndrome and has discovered that the neural networks of brain circuits relevant to memory and learning are over-activated and that the connectivity of these circuits is poor.

The researchers have also observed that neural activity during sleep is abnormal and probably interferes with memory consolidation. The study even identified biomarkers in brain rhythms that can predict memory deficits in the mice; these deficits are corrected by chronic treatment with epigallocatechin gallate, a natural component of green tea that other studies have already shown to improve executive function in adults with Down syndrome.

"These results suggest that both hyperactivity of neuronal networks and deficiencies in the connectivity of specific brain circuits are possible dysfunctional mechanisms that contribute to memory deficits in Down syndrome and, therefore, offer new therapeutic possibilities for treating intellectual disability," explains Dr. Victoria Puig, researcher in the Integrated Pharmacology and Systems Neuroscience Research Group at the IMIM.

To date, it had been recognised that epigallocatechin gallate corrects certain alterations at the molecular and cellular levels derived from the trisomy of chromosome 21 that are associated with cognitive deficits in Down syndrome. However, a dynamic description of the actions of epigallocatechin gallate on neural activity during distinct brain states was lacking. This is, therefore, the first time that anyone has looked, at a functional level, at how the mouse brain responds to chronic treatment with epigallocatechin gallate under trisomy conditions.

The study involved recording neuronal activity simultaneously in two brain regions critical for learning and memory, the prefrontal cortex and the hippocampus, in trisomic and non-trisomic mice during periods of rest while awake, during sleep, and during the performance of a simple memory task. The data were collected before and after one month of treatment with epigallocatechin gallate. Alterations in the activity of the neuronal networks in the two regions, as well as in the connectivity of the circuitry correlating with memory capacities, were analysed and found to have been corrected by the green tea extract.

According to Dr. Mara Dierssen from the Cellular and Systems Neurobiology lab at the CRG, "This study provides an in-depth description of the neurophysiological abnormalities present in different brain states in Down syndrome model mice and provides the keys for understanding the cellular mechanisms underlying the improved executive function observed in people with Down syndrome after chronic treatment with epigallocatechin gallate". (De la Torre R et al., Lancet Neurology, 2016; doi: 10.1016/S1474-4422(16)30034-5)

Dr. Maria Alemany, first author of the paper and also a researcher in the IMIM's Integrated Pharmacology and Systems Neuroscience Research Group, explains that the group is evaluating the effects of cognitive stimulation during brain development on the neuronal activity of mice with Down syndrome. This is important for understanding the cellular mechanisms of the cognitive stimulation normally used in people to improve intellectual disability.

Down syndrome is a genetic alteration produced by the presence of an extra copy of chromosome 21, which is why this syndrome is also known as trisomy 21. It is the main cause of intellectual disability and the most common human genetic alteration. It is estimated that 34,000 people with Down syndrome live in Spain and that there are a total of six million worldwide.

Credit: 
IMIM (Hospital del Mar Medical Research Institute)

Mutations in SARS-CoV-2 offer insights into virus evolution

By analysing virus genomes from over 7,500 people infected with Covid-19, a UCL-led research team has characterised patterns of diversity of the SARS-CoV-2 virus genome, offering clues to direct drug and vaccine targets.

The study, led by the UCL Genetics Institute, identified close to 200 recurrent genetic mutations in the virus, highlighting how it may be adapting and evolving to its human hosts.

Researchers found that a large proportion of the global genetic diversity of SARS-CoV-2 is found in all hardest-hit countries, suggesting extensive global transmission from early on in the epidemic and the absence of a single 'Patient Zero' in most countries.

The findings, published today in Infection, Genetics and Evolution, further establish that the virus emerged recently, in late 2019, before quickly spreading across the globe.

Scientists analysed the emergence of genomic diversity in SARS-CoV-2, the new coronavirus causing Covid-19, by screening the genomes of over 7,500 viruses from infected patients around the globe. They identified 198 mutations that appear to have occurred independently more than once, which may hold clues to how the virus is adapting.
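
A naive version of this kind of recurrent-mutation tally can be sketched as below: compare each genome to a reference and count substitutions seen in more than one sample. (The study infers recurrence on a phylogeny, which this toy tally does not attempt; the sequences here are short made-up strings, not SARS-CoV-2 data.)

```python
from collections import Counter

def recurrent_mutations(reference, genomes, min_count=2):
    """Tally substitutions relative to the reference that appear in at
    least min_count genomes. A real analysis needs a phylogenetic tree
    to show that repeated mutations arose independently rather than
    being inherited from a shared ancestor."""
    tally = Counter()
    for genome in genomes:
        for pos, (ref_base, base) in enumerate(zip(reference, genome)):
            if base != ref_base:
                tally[(pos, ref_base, base)] += 1
    return {mut: n for mut, n in tally.items() if n >= min_count}

reference = "ATGCGTAC"           # made-up toy reference sequence
samples = ["ATGCGTAC",           # identical to reference
           "ATGCGAAC",           # T->A at position 5
           "ATGCGAAC",           # the same substitution again
           "TTGCGTAC"]           # A->T at position 0, seen only once

print(recurrent_mutations(reference, samples))
```

Only the substitution seen twice survives the filter; singletons like the position-0 change are dropped, mirroring the focus on mutations that recur across patients.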

Co-lead author Professor Francois Balloux (UCL Genetics Institute) said: "All viruses naturally mutate. Mutations in themselves are not a bad thing and there is nothing to suggest SARS-CoV-2 is mutating faster or slower than expected. So far we cannot say whether SARS-CoV-2 is becoming more or less lethal and contagious."

The small genetic changes, or mutations, identified were not evenly distributed across the virus genome. As some parts of the genome had very few mutations, the researchers say those invariant parts of the virus could be better targets for drug and vaccine development.

"A major challenge to defeating viruses is that a vaccine or drug might no longer be effective if the virus has mutated. If we focus our efforts on parts of the virus that are less likely to mutate, we have a better chance of developing drugs that will be effective in the long run," Professor Balloux explained.

"We need to develop drugs and vaccines that cannot be easily evaded by the virus."

Co-lead author Dr Lucy van Dorp (UCL Genetics Institute) added: "There are still very few genetic differences or mutations between viruses. We found that some of these differences have occurred multiple times, independently of one another during the course of the pandemic - we need to continue to monitor these as more genomes become available and conduct research to understand exactly what they do."

The results add to a growing body of evidence that SARS-CoV-2 viruses share a common ancestor from late 2019, suggesting that this was when the virus jumped from a previous animal host into people. This means it is most unlikely the virus causing Covid-19 was in human circulation for long before it was first detected.

In many countries including the UK, the diversity of viruses sampled was almost as much as that seen across the whole world, meaning the virus entered the UK numerous times independently, rather than via any one index case.

The research team have developed a new interactive, open-source online application so that researchers across the globe can also review the virus genomes and apply similar approaches to better understand its evolution.

Dr van Dorp said: "Being able to analyse such an extraordinary number of virus genomes within the first few months of the pandemic could be invaluable to drug development efforts, and showcases how far genomic research has come even within the last decade. We are all benefiting from a tremendous effort by hundreds of researchers globally who have been sequencing virus genomes and making them available online."

Credit: 
University College London

Non-caloric sweetener reduces signs of fatty liver disease in preclinical research study

There is clear evidence that high sugar consumption leads to obesity and fatty liver disease. Synthetic and natural alternatives to sugar are available, but little is known about the effects of these non-caloric sweeteners on the liver. A new study led by Rohit Kohli, MBBS, MS, shows that stevia extract can reduce markers of fatty liver disease. The results of the pre-clinical research, published in the journal Scientific Reports, led to a clinical trial, now in progress.

The Centers for Disease Control and Prevention reports that obesity affects nearly 19% of children. An associated condition called non-alcoholic fatty liver disease affects one out of every 10 children. Fatty liver disease can lead to cirrhosis and liver cancer. Consumption of too much sugar can lead to both obesity and fatty liver disease.

"Sugary foods and drinks can cause scarring in the liver," says Dr. Kohli, "but we don't know how non-caloric sweeteners may affect liver disease." In a first-of-its-kind study, Dr. Kohli addressed and answered the question: Can non-caloric sweeteners improve signs of fatty liver disease?

Using a preclinical model, he tested two non-caloric sweeteners, sucralose and stevia extract. Both are widely available and appear in many sweetened foods and drinks. "We were interested in those two compounds because they are the newest and least studied in the context of liver disease and obesity," says Dr. Kohli.

The results were striking. "We compared these sweeteners head to head with sugar," he says. "Stevia extract lowers glucose levels and improves markers of fatty liver disease." These markers include fibrosis and fat levels in the liver. The study also uncovered some potential mechanisms that could be responsible for reversing these markers of fatty liver disease. "We saw a decrease in signs of cellular stress and some changes in the gut microbiome," says Dr. Kohli, "but there is more work to do in order for us to understand the clinical relevance."

The preclinical study was funded by the Stanley W. Ekstrom Foundation. The results led Dr. Kohli's team directly into a clinical trial - also funded by the Stanley W. Ekstrom Foundation - to test the effects of stevia in pediatric patients. "The exciting thing is that we have taken a problem that we see in the clinic, studied it preclinically, and now we are back to test the solution - all in under two years," says Dr. Kohli.

Credit: 
Children's Hospital Los Angeles

Genetic doppelgaengers: Emory research provides insight into two neurological puzzles

An international team led by Emory scientists has gained insight into the pathological mechanisms behind two devastating neurodegenerative diseases. The scientists compared the most common inherited form of amyotrophic lateral sclerosis and frontotemporal dementia (ALS/FTD) with a rarer disease called spinocerebellar ataxia type 36 (SCA36).

Both diseases are caused by abnormally expanded and strikingly similar DNA repeats. However, ALS progresses quickly, typically killing patients within a year or two, while SCA36 progresses slowly over the course of decades. In ALS/FTD, it appears that protein products can poison cells in the nervous system. Whether similar protein products exist in SCA36 was not known.

What Zachary McEachin, PhD, and Gary Bassell, PhD, from Emory's Department of Cell Biology, along with a team of collaborators at Emory, Mayo Clinic Florida, and internationally in Spain and Japan, discovered provides a new paradigm for thinking about how aberrant protein species are formed. Despite the disparate clinical outcomes of these diseases, this research could broaden the avenue toward genetically targeted treatments for related neurodegenerative diseases.

Their study, published Tuesday in Neuron, provides a guide to types of protein gobbledygook that build up in brain cells in both disorders, and which should be reduced if the new mode of treatment is working in clinical trials.

"We are thinking of these diseases as genetic doppelgängers," says McEachin, a postdoctoral fellow in Bassell's lab. "By that, I mean they are genetically similar, but the neurodegeneration progresses differently for each disease. We can use this research to understand each of the respective disorders much better -- and hopefully help patients improve their quality of life down the road with better treatments."

An estimated 16,000 people in the United States have ALS, a progressive neurodegenerative disease that affects nerve cells in the brain and spinal cord. The most common inherited form of ALS/FTD occurs because there is an abnormally expanded repeat of six DNA "letters" stuck into a gene called c9orf72.

Some of the degeneration of motor neurons comes from the expanded repeat being made into proteins that repeat two amino acids over and over. It's like a printing press being forced to print entire pages of "GPGPGPGP", "GAGAGAGA", or "PRPRPRPR". In c9 ALS, those proteins are thought to build up inside brain cells and poison them. A striking finding is that in c9 ALS there are also chimeric variants, e.g. "GAGAGAGPGPGP", suggesting these toxic proteins are more complex than previously thought.
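
Where those dipeptide repeats come from can be illustrated by translating a toy (GGGGCC)n expansion in all three reading frames of each strand. This is only a sketch: the pathological translation is actually non-canonical (repeat-associated non-AUG translation), but the frame arithmetic shows why poly(GA), poly(GP), and poly(GR) arise from the sense strand while poly(PR) comes from the antisense strand:

```python
# Translate a toy (GGGGCC)n repeat in all three reading frames of
# both strands, using only the codons that occur in such a repeat.

CODONS = {"GGG": "G", "GGC": "G", "GCC": "A",
          "CCG": "P", "CCC": "P", "CGG": "R"}

def translate(seq, frame):
    """Join one-letter amino acids for each full codon in the frame."""
    return "".join(CODONS.get(seq[i:i + 3], "?")
                   for i in range(frame, len(seq) - 2, 3))

sense = "GGGGCC" * 4
# reverse complement gives the antisense strand, (GGCCCC)n
antisense = sense[::-1].translate(str.maketrans("ACGT", "TGCA"))

for frame in range(3):
    print("sense frame", frame, "->", translate(sense, frame))
    print("antisense frame", frame, "->", translate(antisense, frame))
```

Running the three sense frames yields GA, GP, and GR repeats, and the antisense frames add GP, PA, and PR repeats - the same dipeptide alphabet described in the article.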

"A major goal in the field has been to identify and characterize the various types of pathological aggregates that accumulate in patient brains as 'bad actors' that cause neurodegeneration," Bassell says. "Here we identify fundamental differences in this process for these two neurological diseases with disparate clinical outcomes. I think this is a paradigm shift for how heterogenous this process is in ALS and points to possible therapeutic strategies that might mitigate multiple bad actors simultaneously."

The two diseases (c9 form of ALS and SCA36) both have abnormally expanded 6-letter genetic repeats, but the letters that are being repeated are different. McEachin was able to show that the nonsensical protein products from the expanded repeats aggregate in c9 ALS patients' cells but do not in SCA36. He says it was surprising because the proteins were predicted to be similar in the two diseases.

"Studying the differences in the aberrant proteins gives insight into both diseases," McEachin says.

SCA36, one of several types of SCA, is a condition characterized by progressive problems with movement that typically begin in mid-adulthood. Patients also develop hearing loss and muscle twitches over time, eventually losing the ability to move the tongue. The legs, forearms and hands atrophy as the condition worsens.

The mutation for SCA36 was discovered around the same time as the c9 form of ALS/FTD, in 2011. The ALS/FTD discovery has since formed the basis for myriad studies implicating these toxic protein aggregates as contributing drivers of disease in ALS and frontotemporal dementia (FTD).

"The genetic overlap was known, however the rate limiting step in studying SCA36 has been the availability of patient samples," says McEachin.

He traveled to Spain and Japan where there are known pedigrees of SCA36 patients to learn more about the disorder. To obtain SCA36 patient samples, the authors reached out to neurologists Maria-Jesus Sobrido, MD and Manuel Arias, MD at the Hospital Clínico Universitario in Santiago de Compostela, Spain, and neurologist Koji Abe, MD of Okayama University in Japan.

Emory University's ALS Clinic, directed by Jonathan Glass, MD, a close collaborator on this study, is the largest in the southeast and has curated a substantial biorepository of C9orf72 ALS and FTD biospecimens, including those used in this study.

"The study would not have been possible without the interest of these clinicians and willingness of their patients to participate," McEachin says. "We are truly grateful for their collaboration as this study wouldn't have been possible otherwise."

The authors also worked in collaboration with Leonard Petrucelli, PhD, and Tania Gendron, PhD at Mayo Clinic Jacksonville, who developed specialized antibodies to analyze the nonsensical protein products. Wilfried Rossoll, PhD now at Mayo Clinic, formerly at Emory, was also a collaborator.

In a separate paper released the same day in Cell Reports, these findings were further supported by similar observations in a novel mouse model of SCA36 generated by Leonard Petrucelli, Wilfried Rossoll and colleagues at Mayo. McEachin and Bassell were also co-authors on that manuscript.

A new technology for treating diseases with expanded DNA repeats -- including Huntington's disease and myotonic dystrophy, as well as some forms of ALS and SCA -- is emerging. "Antisense oligonucleotides" can be delivered to the nervous system and shut down production of the toxic proteins. The Emory ALS clinic is participating in some of these antisense oligonucleotide (ASO) clinical trials.

"We think our analysis will guide future research into potential treatments, for example, by developing drugs that can best target these related aggregation-prone molecules," Bassell says.

Credit: 
Emory Health Sciences

Genetic variation in a brain-cleansing water channel affects human sleep

The reason why we sleep remains an unresolved question of the 21st century. Research by Sara Marie Ulv Larsen, Sebastian Camillo Holst and colleagues from the Neurobiology Research Unit at the University Hospital Copenhagen, published this week in the open access journal PLoS Biology, now shows that the depth of non-rapid-eye-movement (nonREM) sleep in humans is associated with different genetic versions of a gene that encodes a water channel involved in fluid flow in the brain.

Recent insights suggest that sleep may enable and promote a flow of cerebrospinal fluid into the brain that literally removes metabolic waste. In experimental animals, this process is aided by water channels called AQP4; these form water-permeable pores through the cell membranes of brain cells called astrocytes. The role of these water channels in the human brain and whether they are associated with the regulation of deep nonREM sleep, also called slow wave sleep, had not yet been examined.

A common set of genetic variants that are inherited together is called a haplotype. One such haplotype (containing eight individual DNA variants) was previously shown to modulate the levels of AQP4. By carefully studying more than 100 healthy individuals, the authors found that the depth of slow wave sleep, which can be measured by analyzing the brain waves recorded during sleep, differs between carriers of this haplotype and a control group. The difference was most pronounced at the beginning of the night, when our need for sleep is highest. Interestingly, the two haplotype groups also coped differently when kept awake for two full days, suggesting that changes in the flow of fluids through AQP4 water channels may modify how we cope with sleep loss.

Because the genetic variants within the AQP4 haplotype were also previously associated with the progression of Alzheimer's disease, the results of this study may suggest that a sleep-driven exchange of fluids through AQP4 water channels could be linked to Alzheimer's progression. To explore the possible association between Alzheimer's disease and AQP4 water channels, further studies are warranted. "A more immediate implication of our results," the authors note, "is by improving our understanding of the importance of sleep". In other words, this is the first study to show that the genetics of AQP4 water channels affect the intensity of deep sleep and how we cope with loss of sleep. These findings add support to the current theory that sleep may be involved in the regulation of "brain clearance" and, as such, highlight the link between sleep and fluid flow in the human brain.

Credit: 
PLOS

Robots help some firms, even while workers across industries struggle

Overall, adding robots to manufacturing reduces jobs -- by more than three per robot, in fact. But a new study co-authored by an MIT professor reveals an important pattern: Firms that move quickly to use robots tend to add workers to their payroll, while industry job losses are more concentrated in firms that make this change more slowly.

The study, by MIT economist Daron Acemoglu, examines the introduction of robots to French manufacturing in recent decades, illuminating the business dynamics and labor implications in granular detail.

"When you look at use of robots at the firm level, it is really interesting because there is an additional dimension," says Acemoglu. "We know firms are adopting robots in order to reduce their costs, so it is quite plausible that firms adopting robots early are going to expand at the expense of their competitors whose costs are not going down. And that's exactly what we find."

Indeed, as the study shows, a 20 percentage point increase in robot use in manufacturing from 2010 to 2015 led to a 3.2 percent decline in industry-wide employment. And yet, for firms adopting robots during that timespan, employee hours worked rose by 10.9 percent, and wages rose modestly as well.
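As a back-of-envelope illustration only (the paper estimates these effects with firm-level regressions, not a simple linear rule), the reported headline coefficient can be recombined like this; the function name and the linearity assumption are ours:

```python
# Illustrative linear reading of the reported coefficient (assumption:
# the effect scales linearly in the adoption rate, which the press
# release's figures do not strictly guarantee).

INDUSTRY_PCT_PER_PP = -3.2 / 20.0  # industry employment change (%) per pp of robot use


def industry_employment_change(adoption_pp: float) -> float:
    """Implied industry-wide employment change (%) for a given
    percentage-point rise in robot use, 2010-2015."""
    return INDUSTRY_PCT_PER_PP * adoption_pp


# The 20-pp case recovers the -3.2% headline figure from the study.
print(round(industry_employment_change(20), 1))
```

Plugging in the 20-percentage-point case simply reproduces the reported -3.2 percent; other inputs are extrapolations, not results from the paper.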

A new paper detailing the study, "Competing with Robots: Firm-Level Evidence from France," will appear in the May issue of the American Economic Association: Papers and Proceedings. The authors are Acemoglu, who is an Institute Professor at MIT; Clair Lelarge, a senior research economist at the Banque de France and the Center for Economic Policy Research; and Pascual Restrepo PhD '16, an assistant professor of economics at Boston University.

A French robot census

To conduct the study, the scholars examined 55,390 French manufacturing firms, of which 598 purchased robots during the period from 2010 to 2015. The study uses data provided by France's Ministry of Industry, client data from French robot suppliers, customs data about imported robots, and firm-level financial data concerning sales, employment, and wages, among other things.

The 598 firms that did purchase robots, while comprising just 1 percent of manufacturing firms, accounted for about 20 percent of manufacturing production during that five-year period.

"Our paper is unique in that we have an almost comprehensive [view] of robot adoption," Acemoglu says.

The manufacturing industries most heavily adding robots to their production lines in France were pharmaceutical companies, chemicals and plastic manufacturers, food and beverage producers, metal and machinery manufacturers, and automakers.

The industries investing least in robots from 2010 to 2015 included paper and printing, textiles and apparel manufacturing, appliance manufacturers, furniture makers, and minerals companies.

The firms that did add robots to their manufacturing processes became more productive and profitable, and the use of automation lowered their labor share -- the part of their income going to workers -- between roughly 4 and 6 percentage points. However, because their investments in technology fueled more growth and more market share, they added more workers overall.

By contrast, the firms that did not add robots saw no change in the labor share, and for every 10 percentage point increase in robot adoption by their competitors, these firms saw their own employment drop 2.5 percent. Essentially, the firms not investing in technology were losing ground to their competitors.

This dynamic -- job growth at robot-adopting firms, but job losses overall -- fits with another finding Acemoglu and Restrepo made in a separate paper about the effects of robots on employment in the U.S. There, the economists found that each robot added to the work force essentially eliminated 3.3 jobs nationally.

"Looking at the result, you might think [at first] it's the opposite of the U.S. result, where robot adoption goes hand in hand with destruction of jobs, whereas in France, robot-adopting firms are expanding their employment," Acemoglu says. "But that's only because they're expanding at the expense of their competitors. What we show is that when we add the indirect effect on those competitors, the overall effect is negative and comparable to what we find in the U.S."

Superstar firms and the labor share issue

The competitive dynamics the researchers found in France resemble those in another high-profile piece of economics research by MIT professors. In a recent paper, MIT economists David Autor and John Van Reenen, along with three co-authors, published evidence indicating that the decline in the labor share in the U.S. as a whole was driven by gains made by "superstar firms," which find ways to lower their labor share and gain market power.

While those elite firms may hire more workers and even pay relatively well as they grow, labor share declines in their industries, overall.

"It's very complementary," Acemoglu observes about the work of Autor and Van Reenen. However, he notes, "A slight difference is that superstar firms [in the work of Autor and Van Reenen, in the U.S.] could come from many different sources. By having this individual firm-level technology data, we are able to show that a lot of this is about automation."

So, while economists have offered many possible explanations for the decline of the labor share generally -- including technology, tax policy, changes in labor market institutions, and more -- Acemoglu suspects technology, and automation specifically, is the prime candidate, certainly in France.

"A big part of the [economic] literature now on technology, globalization, labor market institutions, is turning to the question of what explains the decline in the labor share," Acemoglu says. "Many of those are reasonably interesting hypotheses, but in France it's only the firms that adopt robots -- and they are very large firms -- that are reducing their labor share, and that's what accounts for the entirety of the decline in the labor share in French manufacturing. This really emphasizes that automation, and in particular robots, is a critical part in understanding what's going on."

Credit: 
Massachusetts Institute of Technology

Evaluation of pedestrian walking speed change patterns at crosswalks in Palestine

Pedestrian convenience and safety have gained much attention worldwide in the design of roads, crosswalks, traffic signals, and transportation regulations and infrastructure. A research study conducted by Dr. Fady M. A. Hassouna of An-Najah National University, Nablus, Palestine, focuses on pedestrian walking speed change patterns at crosswalks with respect to factors related to pedestrian characteristics in Palestine. The factors considered included pedestrian age, gender, and the type of traffic control at the crossing location, i.e., signalized versus unsignalized crosswalks.

The researchers gathered crossing-speed data for 4,301 pedestrians in Nablus, a populous Palestinian city. The data were collected at six locations, three with traffic signals and three without. Statistical analyses, including Z-tests and ANOVA, were conducted, and several significant results emerged.
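The kind of comparison described above can be sketched with a simple two-sample Z-test using only the Python standard library. The speeds below are hypothetical illustration values, not the study's data, and the study's exact test specifications (and its ANOVA) are not reproduced here:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev


def two_sample_z(a, b):
    """Two-sample Z-test for a difference in mean crossing speed.

    Returns (z statistic, two-sided p-value). Uses sample standard
    deviations, so it is an approximation suited to large samples.
    """
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p


# Hypothetical crossing speeds in m/s (NOT figures from the study):
male = [1.42, 1.51, 1.38, 1.47, 1.55, 1.44, 1.49, 1.53]
female = [1.31, 1.36, 1.28, 1.40, 1.33, 1.30, 1.37, 1.35]

z, p = two_sample_z(male, female)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p < 0.05, i.e., a significant difference
```

With the real sample of 4,301 pedestrians, the large-sample Z approximation is well justified; the toy arrays above merely show the mechanics.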

The research concluded that male pedestrians walked significantly faster than female pedestrians. As expected, younger pedestrians crossed faster than older ones. However, walking speeds at unsignalized crosswalks were not significantly higher than those at signalized crosswalks.

Lastly, the research reported the average and 15th-percentile crossing speeds for pedestrians at the crosswalks. These figures are essential for designing pedestrian facilities and timing traffic signals.

Credit: 
Bentham Science Publishers

UCLA researchers develop chemistry needed to create marijuana breathalyzer

image: The legalization and decriminalization of marijuana in California and elsewhere have made marijuana detection especially important, said senior author Neil Garg.

Image: 
Jesse Herring

UCLA chemists have reported the key chemical discovery necessary for the creation of a small, electronic marijuana breathalyzer. The research is published in Organic Letters, a peer-reviewed journal of the American Chemical Society.

The legalization and decriminalization of marijuana in California and elsewhere have made marijuana detection especially important, said senior author Neil Garg, UCLA's Kenneth N. Trueblood Professor of Chemistry and Biochemistry and chair of UCLA's department of chemistry and biochemistry.

"When I grew up, people were taught not to drive drunk," Garg said. "I haven't seen the same type of messages for marijuana yet, and statistics indicate more than 14 million people in the U.S. smoke marijuana and drive. Our goal was to devise a very simple solution that could be adopted by society. We have shown in this study we can change the chemical structure and properties of THC -- the primary psychoactive ingredient in marijuana -- using perhaps the simplest chemical means possible: electricity, to determine whether a person is impaired."

"We want a simple breathalyzer that doesn't require specialized training because a police officer is not a trained synthetic organic chemist," said lead author Evan Darzi, a former postdoctoral scholar in Garg's laboratory.

While Darzi and Garg have developed the chemistry that would be at the heart of a marijuana breathalyzer, they have not created an actual device. "We have established the fundamental proof of concept," said Garg, who received the 2018 Robert Foster Cherry Award -- which is the largest university teaching prize in the U.S., awarded by Baylor University -- and was named the 2015 California Professor of the Year.

Darzi and Garg developed a simple oxidation process similar to that used in an alcohol breathalyzer. Oxidation is the loss of an electron from a molecule. The researchers removed a hydrogen molecule from THC (whose full name is delta-9-tetrahydrocannabinol). Alcohol breathalyzers convert ethanol to an organic chemical compound, and hydrogen is lost through the oxidation process.

"The chemistry we are doing with THC is the same thing," Garg said. "We remove a molecule of hydrogen from THC. That is oxidation. This leads to changes in the color of the molecule that can be detected."

Darzi and Garg report two ways to do the oxidation of THC. Their preferred, inexpensive approach is to use electricity.

"Some of our initial ideas involved trying to get complicated molecules to bind to THC in order to detect a signal," Garg said. "After a while, we realized the simplest solution is to pump electricity into THC and have a chemical reaction occur that produces a change we can detect. It doesn't matter what the change is, as long as it is easy to detect. Oxidation is one of the simplest reactions one can do to a molecule."

The structure of THC includes a unit called a phenol. When chemists oxidize a phenol, the oxidation produces a member of a class of organic compounds called quinones. "We know how to oxidize a phenol into a quinone," Garg said.

THC and the quinone absorb light differently. "Once we knew that," Garg said, "we decided to use electricity to perform the oxidation." Darzi used a new device in Garg's laboratory (called an ElectraSyn 2.0 by IKA Works) that allows him to perform electrochemical reactions.

The chemists saw a change in where the molecules absorb light. THC absorbs light at a certain wavelength, and Darzi and Garg found that when it is oxidized, it absorbs light at a different wavelength.

"Doing organic chemistry using electrochemistry is not something that people in my field historically have done regularly," Garg said. "Evan studied different variations of how to set up the chemical reactions until he found the best way to oxidize THC."

In order to conduct the research, the chemists first obtained a license from the U.S. Drug Enforcement Administration to study THC in their laboratory.

The chemists said they have had positive responses from other chemists with whom they have shared their research.

The next big step, Garg said, is to achieve the same result with a breath sample from a person who has very recently consumed marijuana, and to avoid false positives. Studies suggest marijuana on the breath can reliably reveal whether marijuana was smoked or otherwise consumed in the last four or five hours, Darzi said. Garg hopes his laboratory will continue this research in collaboration with a company interested in developing the technology. However, he noted there are significant challenges to develop this technology at a university due to federal regulations. UCLA has filed a provisional patent application on the THC oxidation.

Garg's hope is that a marijuana breathalyzer would be inexpensive enough for consumers to buy so they can test themselves before deciding whether to drive. Garg and Darzi expect that a marijuana breathalyzer would produce a numerical result, perhaps similar to the blood alcohol level measurements of an alcohol breathalyzer -- but the details go beyond the scope of this research.

"Professor Garg and I both have young children," Darzi said, "and our children will grow up in a world where marijuana is legal. We're glad we can play a role in helping society address this issue."

Credit: 
University of California - Los Angeles

Real-time data show COVID-19 led to 60% drop in leisure, hospitality and retail employment

image: The figure shows employment in leisure and hospitality and in retail trade.

Image: 
Andre Kurmann, Etienne Lale and Lien Ta

There is no doubt that the COVID-19 crisis is affecting the U.S. economy and labor markets in an unprecedented way. The leisure, hospitality and retail industries have been hit hardest by shutdown orders nationwide. New research that uses data from Homebase, a time-tracking software company, to produce real-time employment estimates shows that the report by the Bureau of Labor Statistics, expected later this week, may not capture the full extent of the contraction.

Andre Kurmann, associate professor in the School of Economics of Drexel University's LeBow College of Business, Etienne Lale of the Université du Québec à Montréal and Lien Ta, a Drexel doctoral student, analyzed data from Homebase to produce real-time estimates of the COVID-19 crisis's impact on U.S. employment and hours worked.

They found that employment in leisure, hospitality and retail trade contracted by an estimated 19.8 million - from 32.3 million in mid-February to 12.5 million by the end of April - a staggering 60% decline. Most of this decline occurred in the second half of March as stay-at-home orders came into effect.

The researchers also found that about 6.7 million or one-third of the employment decline is due to businesses shutting down, and that remaining employees in continuing businesses saw a 10% reduction in their weekly hours.
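The headline figures above are internally consistent, as a quick arithmetic check shows. The inputs are restated from the release; the published "60%" is rounded from roughly 61 percent:

```python
# Sanity check of the release's headline numbers (millions of workers).
pre_crisis = 32.3   # mid-February employment in leisure, hospitality, retail
end_april = 12.5    # end-of-April employment
decline = pre_crisis - end_april            # 19.8 million
pct_decline = 100 * decline / pre_crisis    # about 61%, reported as "60%"
shutdown_share = 6.7 / decline              # about one-third from shutdowns

print(f"decline: {decline:.1f}M, {pct_decline:.0f}% of pre-crisis employment")
print(f"share of decline from business shutdowns: {shutdown_share:.0%}")
```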

While the contraction in the two sectors is dramatic, there are some first glimmers of a recovery. As of the end of April, about 15% of inactive businesses have returned to activity, and weekly hours worked and employment have started to increase.

The results of the analysis contrast with the latest estimates by the BLS that pertain to mid-March, before most of the effects of the crisis were felt, according to the researchers.

"With the COVID-19 crisis unfolding with tremendous speed and affecting labor markets in such an unprecedented way, it's all the more important to have timely and accurate measures of the actual impact," Kurmann said. "The results imply that the employment losses reported by the BLS in its March report, though large by historical standards, show only the tip of the iceberg and we should expect much worse estimates in the April report due to be released this Friday."

The analysis also highlights the importance of taking into account the negative employment effects of business inactivity. Historically, the BLS's establishment-survey estimates (Current Employment Statistics) adjusted for this so-called net birth/death effect only indirectly, based on past data. In the current situation, where many businesses have at least temporarily shut down, this adjustment is no longer appropriate, according to the researchers. The BLS recently announced on its website that it will modify its birth/death model with the April report but hasn't provided details on how the methodology will change.

According to Kurmann, it could be a year from now -- after the BLS has processed and analyzed data from the census of all businesses -- before we know the full effect of the coronavirus outbreak on employment, which is why it is important to have at least estimates of the current situation.

"Our report complements a number of concurrent efforts to measure the real-time impact of the crisis on labor markets," he said. "We provide a direct estimate of establishment inactivity and, going forward, the extent to which the U.S. labor market is starting to recover as restrictions are lifted and the economy opens up again."

At the same time, the researchers caution that the Homebase data does not come without limitations. The majority of workers tracked by Homebase are hourly employees and the company's clientele consists mostly of small businesses, which account for about half of total employment in leisure, hospitality and retail. The Homebase data does not capture the response of larger companies to the crisis, but the report shows that even under conservative assumptions about the employment decline among larger businesses, the estimated employment loss in leisure, hospitality and retail amounts to 16.5 million - still a staggering number.

The researchers plan to continue providing real-time updates to their estimates with the latest data from Homebase. Their future plans include tracking the extent to which inactive organizations return to activity, as well as tracking whether or not furloughed or laid off employees return to their previous positions. Plans also include using real-time data to assess the impact of COVID-19 on other industries, as well as producing estimates for various regions throughout the country.

Credit: 
Drexel University

Study reports high level of hazardous drinking among Pacific Islander young adults in US

image: Andrew Subica is an assistant professor of social medicine, population, and public health at the UC Riverside School of Medicine.

Image: 
UCR School of Medicine.

RIVERSIDE, Calif. -- Pacific Islander young adults in the United States have an extremely high level of hazardous drinking and potential alcohol-use disorders, a study led by a health disparities researcher at the University of California, Riverside, has found.

The study, published in the American Journal of Orthopsychiatry, found 56% of Pacific Islander young adults screened positive for hazardous drinking, a level that places people at heavy risk for accidents, drunk driving, and serious social and health problems. Similarly, 49% of young adults screened positive for active alcohol-use disorders, or AUD, more than eight times the national AUD rate.

The study is the first and largest alcohol study of community Pacific Islander young adults in the U.S.

"To our knowledge, we are the first to investigate and detail the scope of community alcohol use and associated harms affecting adults of this understudied U.S. racial group," said Andrew Subica, an assistant professor of social medicine, population, and public health in the UCR School of Medicine, who led the study. "Pacific Islanders have been exposed to extensive U.S. colonization and historical trauma and have been warning people for years about the severe levels of alcohol-use problems and need for treatment in their communities, especially among young adults. Yet, young adults in these communities, who are at highest risk for AUD and harms in the general population, had not been studied in an empirical fashion."

Subica and his team collected survey data from 156 Pacific Islander young adults aged 18 to 30 living in Los Angeles County and Northwest Arkansas. Forty percent of the survey participants reported experiencing alcohol-related harm, far exceeding the 4-9% rate found in other populations. Additionally, 52% and 49% of participants reported using cigarettes and marijuana, respectively, with 47% of participants reporting dual alcohol-cigarette use and 30% of participants reporting lifetime alcohol-cigarette-marijuana use.
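For scale, the percentages above imply roughly the following head counts among the 156 participants. The rounding is ours, not a calculation from the study:

```python
# Approximate head counts implied by the reported percentages
# (n = 156 participants; rounded, so these are estimates only).
n = 156
rates = {
    "alcohol-related harm": 0.40,
    "cigarette use": 0.52,
    "marijuana use": 0.49,
    "dual alcohol-cigarette use": 0.47,
    "lifetime alcohol-cigarette-marijuana use": 0.30,
}
for label, rate in rates.items():
    print(f"{label}: ~{round(n * rate)} of {n}")
```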

"These are concerning rates of cigarette and marijuana use for any population, and the rate of comorbidity with alcohol use is of special public health concern," Subica said.

Subica added that while almost half of Pacific Islander young adults screened positive for active AUD, only one quarter perceived a need for substance-use treatment.

"Further analyses suggested culturally sensitive interventions are needed to reduce untreated AUDs in this group, not so much by highlighting a person's addictive behaviors, but by raising awareness of how their alcohol use is harming the person socially, physically, and economically," he said.

Subica's team relied on a close partnership with two leading Pacific Islander community organizations in Los Angeles and Arkansas, the Office of Samoan Affairs and the Arkansas Coalition of Marshallese, who will now partner with Subica to create and test interventions to prevent AUDs in their communities.

"Now that we've demonstrated that a major problem exists, we are starting to design culturally tailored interventions that could help these communities suffering excessive alcohol use and alcohol-related harms," Subica said. "These harms are especially important to prevent because they greatly affect the health of a community by causing accidents, drunk driving, increased public violence and property damage, family problems, and increased risk for chronic diseases such as heart disease and cancer."

Credit: 
University of California - Riverside

COVID-19 news from Annals of Internal Medicine

Below please find a summary and link(s) of new coronavirus-related content published today in Annals of Internal Medicine. The summaries below are not intended to substitute for the full article as a source of information. A collection of coronavirus-related content is free to the public at http://go.annals.org/coronavirus.

1. Epidemiology of and Risk Factors for Coronavirus Infection in Health Care Workers: A Living Rapid Review

Researchers from the Pacific Northwest Evidence-based Practice Center and Oregon Health & Science University, Portland, Oregon, searched multiple databases, including the WHO (World Health Organization) Database of Publications on Coronavirus Disease, to assess the burden of coronavirus infections, including SARS-CoV-2, on health care workers. They found that health care workers experience significant burdens from infections, but their risk of infection was decreased with use of personal protective equipment (PPE) and infection control training. Certain exposures, such as involvement in intubations, direct contact with infected patients, or contact with bodily secretions, were associated with increased risk of infection. Read the full text: http://annals.org/aim/article/doi/10.7326/M20-1632.

2. Keeping Up With Emerging Evidence in (Almost) Real Time

An accompanying editorial by the editors of Annals of Internal Medicine discusses the unique challenges of publishing up-to-the-minute research during the COVID-19 pandemic and the journal's commitment to doing so. One strategy involves systematic reviews that are rapid and living. A rapid review simplifies some components of the review process to produce information in a timely manner and a living review commits to ongoing evidence review and synthesis at prespecified intervals. The promise of this approach is relevant now more than ever. Read the full text: http://annals.org/aim/article/doi/10.7326/M20-2627.

Media contacts: A PDF for this article is not yet available. Please click the link to read full text. The lead author, Roger Chou, MD, can be reached through Erik Robinson at robineri@ohsu.edu. The lead editorialist, Christine Laine, MD, MPH, and the Annals editors can be reached through Angela Collom at acollom@acponline.org.

3. Health Care Supply Chains: COVID-19 Challenges and Pressing Actions

The supply chain for U.S. health care is really five different supply chains - pharmaceuticals, personal protective equipment (PPE), medical devices, medical supplies, and blood - and each one has its own problems and opportunities for improvement. The author, from the University of Pittsburgh, explains the implications of COVID-19 for this system and what can be done to ensure that our supply chains support health care providers. Read the full text: http://annals.org/aim/article/doi/10.7326/M20-1326.

Media contacts: A PDF for this article is not yet available. Please click the link to read the full text. To reach the lead author, Prakash Mirchandani, MBA, PhD, please contact Kimberly Barlow at kbarlow@pitt.edu.

Credit: 
American College of Physicians

AMP recommends minimum set of pharmacogenetic alleles to help standardize clinical genotyping testing for warfarin response

ROCKVILLE, Md. - May 5, 2020 - The Association for Molecular Pathology (AMP), the premier global molecular diagnostic professional society, today published consensus, evidence-based recommendations to aid in the design, validation and interpretation of clinical genotyping tests for the prediction of warfarin response. The manuscript, "Recommendations for Clinical Warfarin Sensitivity Genotyping Allele Selection: A Joint Recommendation of the Association for Molecular Pathology and College of American Pathologists," was released online ahead of publication in The Journal of Molecular Diagnostics.

The new guideline on clinical warfarin sensitivity genotyping allele selection completes a series of three reports that were intended to facilitate testing and promote standardization for frequently used pharmacogenetics (PGx) genotyping assays. Developed by the AMP PGx Working Group with organizational representation from the College of American Pathologists (CAP) and the Clinical Pharmacogenetics Implementation Consortium (CPIC), the latest report builds on the earlier recommendations for clinical CYP2C19 and CYP2C9 genotyping. The recommendations should be implemented together with other clinical guidelines such as those issued by the CPIC, which focus primarily on the interpretation of PGx test results and therapeutic recommendations for specific drug-gene pairs.

"Clinical genotyping assays that help predict warfarin response and optimize a patient's dosage requirements have enabled some of the earliest success stories of this precision medicine era," said Victoria M. Pratt, PhD, FACMG, Professor and Director of Pharmacogenetics and Molecular Genetics Laboratories, Indiana University School of Medicine, and AMP PGx Working Group Chair. "Together, the AMP PGx Working Group defined a standard set of evidence-based recommendations that will help build on these past successes and improve phenotype prediction and test interpretation for all future warfarin sensitivity genotyping panels."

Similar to the previous reports in the series, this new warfarin genotyping guideline offers a two-tier categorization of alleles that are recommended for inclusion in clinical PGx genotyping assays. Using criteria such as allele frequencies in different populations and ethnicities, the availability of reference materials and other technical considerations, the AMP PGx Working Group recommended a minimum set of alleles and their defining variants that should be included in all clinical warfarin sensitivity genotyping tests (Tier 1). The team also defined a Tier 2 list of optional alleles that do not currently meet one or more of the criteria for inclusion in Tier 1. These recommendations are meant to be a reference guide and not to be interpreted as a restrictive list. AMP intends to update these recommendations as new data and/or reference materials become available.

"AMP members are among the earliest adopters of pharmacogenetic testing in clinical settings," said Karen E. Weck, MD, Professor of Pathology and Laboratory Medicine, Professor of Genetics and Director, Molecular Genetics and Pharmacogenomics at University of North Carolina Chapel Hill, and AMP President and PGx Working Group Member. "This series of guidelines for common clinical PGx genotyping tests is another example of AMP's ongoing commitment to sharing our collective expertise with the broader laboratory community in order to improve professional practice and patient care."

Credit: 
Association for Molecular Pathology