Tech

Coaching sales agents? Use AI and human coaches

Researchers from Temple University, Sichuan University, and Fudan University published a new paper in the Journal of Marketing that explores the growing use of AI to coach sales agents and asks whether any caveats inhibit the effective use of this technology.

The study, forthcoming in the Journal of Marketing, is titled "Artificial Intelligence (AI) Coaches for Sales Agents: Caveats and Solutions" and is authored by Xueming Luo, Shaojun Qin, Zheng Fang, and Zhe Qu.

Many companies now turn to artificial intelligence (AI) to provide sales agents with coaching services that were originally offered by human managers. AI coaches are computer software solutions that leverage deep learning algorithms and cognitive speech analytics to analyze sales agents' conversations with customers and provide feedback to improve their job skills. Due to their high computation power, scalability, and cost efficiencies, AI coaches are more capable of generating data-driven training feedback than human managers. MetLife, an insurance giant, adopted an AI coach named Cogito to offer training feedback to its frontline call center employees and improve their customer service skills. Similarly, Zoom uses its AI coach, Chorus, to offer on-the-job training to its sales force.

Precisely because of the big data analytics power of AI coaches, one concern is that feedback generated by the technology may be too comprehensive for agents to assimilate and learn, especially for bottom-ranked agents. Further, despite their superior "hard" data computation skills, AI coaches lack the "soft" interpersonal skills to communicate the feedback to agents, which is a key advantage of human managers. The lack of soft skills may result in agents' aversion to receiving feedback from AI coaches, thus hampering their learning and performance improvement. Indeed, the design of AI coaches often focuses on information generation, but less on learning by agents who may differ in learning abilities. Therefore, it would be naïve to expect a simple, linear impact of AI coaches, relative to human managers, across heterogeneous sales agents.

With this background, the study addresses several research questions:

Which types of sales agents, bottom-, middle-, or top-ranked, benefit the most or the least from AI vis-à-vis human coaches? Is the incremental impact of AI coaches on agent performance heterogeneous in a non-linear manner?

What is the underlying mechanism? Does learning from the training feedback account for the impact of AI coaches?

Can an assemblage of AI and human coach qualities circumvent caveats and improve the sales performance of distinct types of agents?

These questions are answered by a series of randomized field experiments with two fintech companies. In the first experiment, a total of 429 agents were randomly assigned to undergo on-the-job sales training with an AI or a human coach. Results show that the incremental impact of the AI coach over the human coach is heterogeneous in an inverted-U shape. While middle-ranked agents improve the most, both bottom- and top-ranked agents show limited incremental gains. Results suggest that this pattern is driven by a learning-based underlying mechanism. Bottom-ranked agents encounter the most severe information overload with the AI coach. By contrast, top-ranked agents display the strongest AI aversion problem, which obstructs their incremental learning and performance.

The slim improvement in bottom-ranked agents is an obstacle for AI coach adoption because they have the largest room and most acute needs to sharpen their job skills. The researchers re-designed the AI coach by restricting the amount of feedback provided to bottom-ranked agents. With a separate sample of 100 bottom-ranked agents, the second experiment affirmed a substantial improvement in agent performance with a restricted AI coach. A third experiment tackled the limitations of either AI or human coaches alone by examining an AI-human coach assemblage, wherein human managers communicate the feedback generated by the AI coach to the agents. A new sample of 451 bottom- and top-ranked agents was randomly assigned to the AI coach, human coach, and AI-human coach assemblage conditions. The results suggest that both bottom- and top-ranked agents in the AI-human coach assemblage condition enjoy higher performance than their counterparts in the AI coach alone or the human coach alone condition. In addition, bottom-ranked agents gain more performance improvement than top-ranked agents with the hybrid of AI and human coaching. Thus, this assemblage that harnesses the soft communication skills of human managers and the hard data analytics power of AI coaches can effectively solve both problems faced by bottom- and top-ranked agents.

As Luo explains, "Managerially speaking, our research empowers companies to tackle the challenges they may encounter when investing in AI coaches to train distinct types of agents. We show that instead of simply applying an AI coach to the entire workforce, managers ought to prudently design it for targeted agents." Qin adds, "Companies should be aware that AI and human coaches are not dichotomous choices. Instead, an assemblage between AI and human coaches engenders higher workforce productivity, thus allowing companies to reap substantially more value from their AI investments."

Credit: 
American Marketing Association

Simple, no-cost ways to help the public care for the commons

Researchers from the University of Wisconsin-Madison, New York Institute of Technology, University of Iowa, and Cornell University published a new paper in the Journal of Marketing that examines whether it is possible to make people feel as if public property is theirs--a feeling known as psychological ownership--and how this affects their stewardship behaviors.

The study, forthcoming in the Journal of Marketing, is titled "Caring for the Commons: Using Psychological Ownership to Enhance Stewardship Behavior for Public Goods" and is authored by Joann Peck, Colleen Kirk, Andrea Luangrath, and Suzanne Shu.

Maintaining the natural environment is a pressing issue. The intentional care of public goods, such as publicly owned parks, waterways, drinking water, and air quality, has become increasingly difficult. Caring for public parks, for example, has become even harder during the pandemic, as park services have been reduced while the number of people spending time outside has increased. It is widely acknowledged that property that is publicly, rather than individually, owned tends to be more neglected by its users - a phenomenon known in economics as the tragedy of the commons.

The most extreme solution to a tragedy of the commons problem is to convert common property into private property so that a single owner has responsibility for its care. As Peck explains, "We wondered whether it is possible to instead make people feel as if the property is theirs--a feeling known as psychological ownership--without any change to legal ownership. The hypothesis is that people who feel as if they own a public resource might be more likely to engage in stewardship behaviors." Leveraging psychological ownership, the researchers developed a series of actionable interventions that managers of public goods can implement to elicit feelings of ownership in users. Four experiments tested this hypothesis.

The first study was at a public lake with kayakers. Floating trash was set in the water where kayakers would see it. As visitors rented kayaks, half were asked to create a nickname for the lake before entering the water. Using binoculars, the researchers observed whether the kayakers tried to pick up the planted trash. Kayakers who gave the lake a nickname felt more ownership of the lake. Most importantly, they were more than five times as likely to try to pick up the planted trash (41% vs. 7% of the other kayakers).

In the second study, participants imagined taking a walk in a park. They were shown a sign at the park entrance that said either "Welcome to the Park" or "Welcome to YOUR Park." Participants who saw the "YOUR park" sign felt more ownership and responsibility for the park, were more likely to pick up trash, and would donate 34% more to the park ($32.35 vs. $24.08).

The third study tested yet another way to elicit psychological ownership to see if it could increase actual donations. This study involved cross-country ski renters at a state park. As they rented equipment, they received a map. Half of them were asked to plan their route on the map in advance. The prediction was that this investment of time would increase the skiers' psychological ownership of the park and thus increase donations, which could be made by adding $1.00 to the rental fee. As expected, skiers who planned their routes, and therefore felt more ownership, donated to the park 2.5 times more often than those who did not plan their routes. They also reported being more likely to volunteer for the park, to donate in the future, and to promote the park on social media.

The fourth study explored whether managers of public goods may be unintentionally discouraging stewardship behaviors. Many parks tout their attendance numbers, but the intuition was that an attendance sign with a large number of people on it might diffuse users' feelings of responsibility. Research participants imagined they were visiting a park and saw either a "the park" or "YOUR park" welcome sign. Then half of them imagined seeing an attendance sign that read "This week, you are visitor #22,452." (Many U.S. parks have over a million visitors annually, so the researchers designed an attendance sign with an appropriately large number.) Participants were given money for participating, but also had the option to use some of that money for an anonymous donation to the park. As in the prior studies, individuals who felt more ownership of the park donated more to it. They were also more likely to say that they would volunteer to help the park, including picking up trash. However, these effects were reduced when participants imagined the attendance sign, possibly because it suggested that other visitors would take responsibility for the park.

"This research has implications for consumers, organizations caring for public resources, policy makers, and for-profit companies by demonstrating that simple interventions based on increasing psychological ownership can enhance stewardship of public goods. The actionable interventions we designed and tested to increase psychological ownership are inexpensive, novel, and flexible solutions that successfully motivate individual stewardship behaviors" says Luangrath. By fostering visitors' individual feelings of ownership of a public resource, visitors will feel more responsible for it, take better care of it, and donate more time and money for its benefit.

Credit: 
American Marketing Association

New solvent-based recycling process could cut down on millions of tons of plastic waste

MADISON, Wis. -- Multilayer plastic materials are ubiquitous in food and medical supply packaging, particularly since layering polymers can give those films specific properties, like heat resistance or oxygen and moisture control. But despite their utility, those ever-present plastics are impossible to recycle using conventional methods.

About 100 million tons of multilayer thermoplastics -- each composed of as many as 12 layers of varying polymers -- are produced globally every year. Forty percent of that total is waste from the manufacturing process itself, and because there has been no way to separate the polymers, almost all of that plastic ends up in landfills or incinerators.

Now, University of Wisconsin-Madison engineers have pioneered a method for reclaiming the polymers in these materials using solvents, a technique they've dubbed Solvent-Targeted Recovery and Precipitation (STRAP) processing. Their proof-of-concept is detailed today (Nov. 20, 2020) in the journal Science Advances.

By using a series of solvent washes guided by thermodynamic calculations of polymer solubility, UW-Madison professors of chemical and biological engineering George Huber and Reid Van Lehn and their students used the STRAP process to separate the polymers in a commercial plastic composed of common layering materials polyethylene, ethylene vinyl alcohol, and polyethylene terephthalate.

The result? The separated polymers appear chemically similar to those used to make the original film.

The team now hopes to use the recovered polymers to create new plastic materials, demonstrating that the process can help close the recycling loop. In particular, it could allow multilayer-plastic manufacturers to recover the 40 percent of plastic waste produced during the production and packaging processes.

"We've demonstrated this with one multilayer plastic," says Huber. "We need to try other multilayer plastics and we need to scale this technology."

As the complexity of the multilayer plastics increases, so does the difficulty of identifying solvents that can dissolve each polymer. That's why STRAP relies on a computational approach used by Van Lehn called the Conductor-like Screening Model for Realistic Solvents (COSMO-RS) to guide the process.

COSMO-RS is able to calculate the solubility of target polymers in solvent mixtures at varying temperatures, narrowing down the number of potential solvents that could dissolve a polymer. The team can then experimentally explore the candidate solvents.
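To illustrate how such computed solubilities could guide a sequential separation, the following minimal Python sketch pre-screens candidate solvents for each target polymer. The solvent names, solubility values and thresholds are hypothetical placeholders, not COSMO-RS output or data from the study; in the actual STRAP workflow these numbers would come from the thermodynamic calculations described above.

# Toy solvent pre-screening in the spirit of STRAP. All numbers below are
# hypothetical placeholders; in practice they would come from COSMO-RS
# calculations, not from this script.

# Assumed solubility estimates (g polymer per 100 g solvent) for the three
# layer polymers named in the article, in a few made-up candidate solvents.
solubility = {
    "solvent_A": {"PE": 5.0,  "EVOH": 0.01, "PET": 0.02},
    "solvent_B": {"PE": 0.01, "EVOH": 4.0,  "PET": 0.10},
    "solvent_C": {"PE": 0.02, "EVOH": 0.05, "PET": 3.0},
}

def candidate_solvents(target, others, table, dissolve_min=1.0, retain_max=0.1):
    """Return solvents predicted to dissolve `target` while leaving `others` intact."""
    return [name for name, s in table.items()
            if s[target] >= dissolve_min and all(s[o] <= retain_max for o in others)]

# Sequential washes: remove one polymer per step, as in the article's description.
remaining = ["PE", "EVOH", "PET"]
for target in ("PE", "EVOH", "PET"):
    others = [p for p in remaining if p != target]
    print(target, "->", candidate_solvents(target, others, solubility))
    remaining.remove(target)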

"This allows us to tackle these much more complex systems, which is necessary if you're actually going to make a dent in the recycling world," says Van Lehn.

The goal is to eventually develop a computational system that will allow researchers to find solvent combinations to recycle all sorts of multilayer plastics. The team also hopes to look at the environmental impact of the solvents it uses and establish a database of green solvents that will allow them to better balance the efficacy, cost and environmental impact of various solvent systems.

The project stems from UW-Madison's expertise in catalysis. For decades, the university's chemical and biological engineering researchers have pioneered solvent-based reactions to convert biomass -- like wood or agricultural waste -- into useful chemicals or fuel precursors. Much of that expertise translates into solvent-based polymer recycling as well.

The team is continuing its research on STRAP processing through the newly established Multi-University Center on Chemical Upcycling of Waste Plastics, directed by Huber. Researchers in the $12.5 million U.S. Department of Energy-funded center are investigating several chemical pathways for recovering and recycling polymers.

Credit: 
University of Wisconsin-Madison

A biochemical random number

True random numbers are required in applications as diverse as slot machines and data encryption. These numbers need to be truly random, such that they cannot be predicted even by people with detailed knowledge of the method used to generate them.

As a rule, they are generated using physical methods. For instance, thanks to the tiniest high-frequency electron movements, the electrical resistance of a wire is not constant but instead fluctuates slightly in an unpredictable way. That means measurements of this background noise can be used to generate true random numbers.

Now, for the first time, a research team led by Robert Grass, Professor at the Institute of Chemical and Bioengineering, has described a non-physical method of generating such numbers: one that uses biochemical signals and actually works in practice. In the past, the ideas put forward by other scientists for generating random numbers by chemical means tended to be largely theoretical.

DNA synthesis with random building blocks

For this new approach, the ETH Zurich researchers used the synthesis of DNA molecules, an established chemical method that has been employed for many years. It is traditionally used to produce a precisely defined DNA sequence. In this case, however, the research team built DNA molecules with 64 building-block positions, in which one of the four DNA bases A, C, G and T was randomly located at each position. The scientists achieved this by using a mixture of the four building blocks, rather than just one, at every step of the synthesis.

As a result, a relatively simple synthesis produced a combination of approximately three quadrillion individual molecules. The scientists subsequently used an effective method to determine the DNA sequence of five million of these molecules. This resulted in 12 megabytes of data, which the researchers stored as zeros and ones on a computer.

Huge quantities of randomness in a small space

However, an analysis showed that the distribution of the four building blocks A, C, G and T was not completely even. Either the intricacies of nature or the synthesis method deployed led to the bases G and T being integrated more frequently in the molecules than A and C. Nonetheless, the scientists were able to correct this bias with a simple algorithm, thereby generating perfect random numbers.
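As a rough illustration of how sequenced bases can be turned into unbiased bits, the Python sketch below encodes each base as two bits and applies the classic von Neumann extractor. The two-bit mapping and the choice of extractor are assumptions made here for illustration; the article states only that a simple algorithm corrected the bias, without naming it.

import random

# Assumed 2-bit encoding of the four bases; the mapping actually used by the
# ETH team is not specified in the article.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def sequence_to_bits(seq):
    """Convert a sequenced DNA read (e.g. 64 random bases) into a bit string."""
    return "".join(BASE_TO_BITS[b] for b in seq)

def von_neumann_debias(bits):
    """Classic von Neumann extractor: for non-overlapping pairs of independent,
    identically biased bits, keep the first bit of '01' or '10' and discard
    '00' and '11'. The output is unbiased even though the input is not."""
    return "".join(a for a, b in zip(bits[0::2], bits[1::2]) if a != b)

# Simulate a biased synthesis in which G and T are slightly over-represented,
# as reported in the article (the exact bias is not given there).
read = "".join(random.choices("ACGT", weights=[0.22, 0.23, 0.27, 0.28], k=64))
raw_bits = sequence_to_bits(read)

# The two bits encoding a single base are not independent of each other, so
# the stream of first bits and the stream of second bits are debiased separately.
random_bits = von_neumann_debias(raw_bits[0::2]) + von_neumann_debias(raw_bits[1::2])
print(read, raw_bits, random_bits, sep="\n")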

The main aim of ETH Professor Grass and his team was to show that random occurrences in chemical reactions can be exploited to generate perfect random numbers. Translating the finding into a direct application was not a prime concern at first. "Compared with other methods, however, ours has the advantage of being able to generate huge quantities of randomness that can be stored in an extremely small space, a single test tube," Grass says. "We can read out the information and reinterpret it in digital form at a later date. This is impossible with the previous methods."

Credit: 
ETH Zurich

One-way street for electrons

image: With the help of ultra-short laser pulses, physicists at the University of Oldenburg study the ultra-fast processes occurring in nanomaterials after the absorption of light.

Image: 
University of Oldenburg

Whether in solar cells, in photosynthesis or in the human eye: when light falls on a material, a green leaf or the retina, certain molecules transport energy and charge. This ultimately leads to the separation of charges and the generation of electricity. Molecular funnels, so-called conical intersections, ensure that this transport is highly efficient and directed.

An international team of physicists has now observed that such conical intersections also ensure a directed energy transport between neighbouring molecules of a nanomaterial. Theoretical simulations have confirmed the experimental results. Until now, scientists had observed this phenomenon only within one molecule. In the long term, the results could help to develop more efficient nanomaterials for organic solar cells, for example. The study, led by Antonietta De Sio, University of Oldenburg, and Thomas Frauenheim, University of Bremen, Germany, was published in the current issue of the scientific journal Nature Nanotechnology.

Photochemical processes play a major role in nature and in technology: when molecules absorb light, their electrons transition to an excited state. This transition triggers extremely fast molecular switching processes. In the human eye, for example, the molecule rhodopsin rotates in a certain way after absorbing light and thus ultimately triggers an electrical signal - the most elementary step in the visual process.

First experimental evidence for conical intersections between molecules

The reason for this is a special property of rhodopsin molecules, explains Christoph Lienau, professor of ultrafast nano-optics at the University of Oldenburg and co-author of the study: "The rotation process always takes place in a similar way, although from a quantum mechanical point of view there are many different possibilities for the molecular movement".

This is due to the fact that the molecule has to funnel through a conical intersection during the rotation process, as a research team demonstrated experimentally in visual pigment in 2010: "This quantum mechanical mechanism functions like a one-way street in the molecule: It channels the energy in a certain direction with a very high probability," explains Lienau.

The research team led by Antonietta De Sio, senior scientist in the research group Ultrafast Nano-optics at the University of Oldenburg, and Thomas Frauenheim, professor of Computational Materials Science at the University of Bremen, has now observed such a one-way street for electrons in a nanomaterial. The material was synthesized by colleagues from the University of Ulm, Germany, and is already used in efficient organic solar cell devices.

"What makes our results special is that we have experimentally demonstrated conical intersections between neighbouring molecules for the first time," explains De Sio. Until now, physicists worldwide had only observed the quantum mechanical phenomenon within a single molecule and only speculated that there might also be conical intersections between molecules lying next to each other.

Theoretical calculations support experimental data

De Sio's team has discovered this one-way street for electrons by using methods of ultrafast laser spectroscopy: the scientists irradiate the material with laser pulses of only a few femtoseconds in duration. One femtosecond is a millionth of a billionth of a second. The method enables the researchers to record a kind of film of the processes that take place immediately after the light reaches the material. The group was able to observe how electrons and atomic nuclei moved through the conical intersection.

The researchers found that a particularly strong coupling between the electrons and specific nuclear vibrations helps to transfer energy from one molecule to another as if on a one-way street. This is exactly what happens in the conical intersections. "In the material we studied, it took only about 40 femtoseconds between the very first optical excitation and the passage through the conical intersection," says De Sio.

In order to confirm their experimental observations, the researchers from Oldenburg and Bremen also collaborated with theoretical physicists from the Los Alamos National Laboratory, New Mexico, USA, and CNR-Nano, Modena, Italy. "With their calculations, they have clearly shown that we have interpreted our experimental data correctly," explains De Sio.

The Oldenburg researchers are not yet able to estimate in detail the exact effect of these quantum mechanical one-way streets on future applications of molecular nanostructures. However, in the long term the new findings could help to design novel nanomaterials for organic solar cells or optoelectronic devices with improved efficiencies, or to develop artificial eyes from nanostructures.

Credit: 
University of Oldenburg

New insights into memristive devices by combining incipient ferroelectrics and graphene

image: This illustration shows how strontium titanium oxide is combined with graphene strips. The combination opens up a new path to memristive heterostructures combining ferroelectric materials and 2D materials.

Image: 
Banerjee lab, University of Groningen

Scientists are working on new materials to create neuromorphic computers, with a design based on the human brain. A crucial component is a memristive device, the resistance of which depends on the history of the device - just like the response of our neurons depends on previous input. Materials scientists from the University of Groningen analysed the behaviour of strontium titanium oxide, a platform material for memristor research, and used the 2D material graphene to probe it. On 11 November 2020, the results were published in the journal ACS Applied Materials and Interfaces.

Computers are giant calculators, full of switches that have a value of either 0 or 1. Using a great many of these binary systems, computers can perform calculations very rapidly. However, in other respects, computers are not very efficient. Our brain uses less energy for recognizing faces or performing other complex tasks than a standard microprocessor. That is because our brain is made up of neurons that can have many values other than 0 and 1 and because the neurons' output depends on previous input.

Oxygen vacancies

To create memristors, switches with a memory of past events, strontium titanium oxide (STO) is often used. This material is a perovskite, whose crystal structure depends on temperature, and can become an incipient ferroelectric at low temperatures. The ferroelectric behaviour is lost above 105 Kelvin. The domains and domain walls that accompany these phase transitions are the subject of active research. Yet, it is still not entirely clear why the material behaves the way it does. 'It is in a league of its own,' says Tamalika Banerjee, Professor of Spintronics of Functional Materials at the Zernike Institute for Advanced Materials, University of Groningen.

The oxygen atoms in the crystal appear to be key to its behaviour. 'Oxygen vacancies can move through the crystal and these defects are important,' says Banerjee. 'Furthermore, domain walls are present in the material and they move when a voltage is applied to it.' Numerous studies have sought to find out how this happens, but looking inside this material is complicated. However, Banerjee's team succeeded in using another material that is in a league of its own: graphene, the two-dimensional carbon sheet.

Conductivity

'The properties of graphene are defined by its purity,' says Banerjee, 'whereas the properties of STO arise from imperfections in the crystal structure. We found that combining them leads to new insights and possibilities.' Much of this work was carried out by Banerjee's PhD student Si Chen. She placed graphene strips on top of a flake of STO and measured the conductivity at different temperatures by sweeping a gate voltage between positive and negative values. 'When there is an excess of either electrons or the positive holes, created by the gate voltage, graphene becomes conductive,' Chen explains. 'But at the point where there are very small amounts of electrons and holes, the Dirac point, conductivity is limited.'

In normal circumstances, the minimum conductivity position does not change with the sweeping direction of the gate voltage. However, in the graphene strips on top of STO, there is a large separation between the minimum conductivity positions for the forward sweep and the backward sweep. The effect is very clear at 4 Kelvin, but less pronounced at 105 Kelvin or at 150 Kelvin. Analysis of the results, along with theoretical studies carried out at Uppsala University, shows that oxygen vacancies near the surface of the STO are responsible.
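A minimal sketch of how such hysteresis can be quantified is shown below: it locates the gate voltage of minimum conductance (the Dirac point) in a forward and a backward sweep and reports their separation. The V-shaped conductance curves here are synthetic stand-ins, not measured data from the study.

import numpy as np

# Hypothetical illustration only: the curves are synthetic, and the Dirac-point
# positions are made up rather than taken from the measurements in the article.
v_gate = np.linspace(-80, 80, 801)  # gate voltage sweep (V)

def toy_conductance(v, dirac_point, sigma_min=0.05, slope=0.002):
    """Simple V-shaped graphene conductance curve centred on a Dirac point."""
    return sigma_min + slope * np.abs(v - dirac_point)

g_forward = toy_conductance(v_gate, dirac_point=-10.0)   # sweep -80 V -> +80 V
g_backward = toy_conductance(v_gate, dirac_point=+25.0)  # sweep +80 V -> -80 V

dirac_fwd = v_gate[np.argmin(g_forward)]
dirac_bwd = v_gate[np.argmin(g_backward)]
print(f"Dirac point, forward sweep:  {dirac_fwd:.1f} V")
print(f"Dirac point, backward sweep: {dirac_bwd:.1f} V")
print(f"Hysteresis (separation):     {abs(dirac_bwd - dirac_fwd):.1f} V")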

Memory

Banerjee: 'The phase transitions below 105 Kelvin stretch the crystal structure, creating dipoles. We show that oxygen vacancies accumulate at the domain walls and that these walls offer the channel for the movement of oxygen vacancies. These channels are responsible for memristive behaviour in STO.' Accumulation of oxygen vacancy channels in the crystal structure of STO explains the shift in the position of the minimum conductivity.

Chen also carried out another experiment: 'We kept the STO gate voltage at -80 V and measured the resistance in the graphene for almost half an hour. In this period, we observed a change in resistance, indicating a shift from hole to electron conductivity.' This effect is primarily caused by the accumulation of oxygen vacancies at the STO surface.

All in all, the experiments show that the properties of the combined STO/graphene material change through the movement of both electrons and ions, each at different time scales. Banerjee: 'By harvesting one or the other, we can use the different response times to create memristive effects, which can be compared to short-term or long-term memory effects.' The study creates new insights into the behaviour of STO memristors. 'And the combination with graphene opens up a new path to memristive heterostructures combining ferroelectric materials and 2D materials.'

Credit: 
University of Groningen

New guide on using drones for conservation

image: A drone

Image: 
Karen Anderson

Drones are a powerful tool for conservation - but they should only be used after careful consideration and planning, according to a new report.

The report, commissioned by the global conservation organisation WWF, outlines "best practices" for using drones effectively and safely, while minimising impacts on wildlife. This is the 5th issue in a series on Conservation Technologies and Methodologies.

The lead authors are Dr Karen Anderson and Dr James Duffy, of the Environment and Sustainability Institute at the University of Exeter.

"This is a detailed handbook for conservation practitioners - not just academics - to understand the benefits, opportunities, limits and pitfalls of drone technology," Dr Anderson said.

"Drones are often hailed as a panacea for conservation problems, but in this guide we explain - with reference to detailed case studies by conservation managers and scientists - how and where drones can be used to deliver useful information, and what the key considerations surrounding their use can be."

Dr Karen Anderson leads the University of Exeter's DroneLab, and the research done within her group has developed and guided drone methodologies within geography, ecology and environmental science.

The WWF worked with the Exeter team to produce this report, after being introduced to the DroneLab for hands-on training a few years ago.

Co-author Aurélie Shapiro is a Senior Remote Sensing Specialist from WWF Germany's Space+Science group. She said: "I bought a drone online, like many people, because we had a lot of applications for this accessible technology.

"Through the DroneLab I realised I had a lot of homework to do with regards to ensuring safety both for humans and wildlife in my research.

"Instructions on how to plan, what to consider - among a myriad of technological options - are priceless.

"It was clear we needed to communicate this wealth of information to the growing drone community so that scientists lead by example with good protocols."

The report includes practical case studies from conservationists and environmental scientists, along with a list of drone "best practices":

Adopt a "precautionary principle". Little is known about different animals' sensitivity to drones, and care should be taken if endangered species or sensitive habitats are involved.

Researchers should follow all ethical rules and processes set by their institution.

Be aware of local and national rules and laws, and seek approval when appropriate.

Use the right drone for the job, being aware of the impact of noise and visual stimulus on target and non-target species.

Minimise wildlife disturbance by launching and landing away from animals, maintaining distance and keeping flight movements steady.

Monitor humans and animals during flights. If distress is being caused, stop.

Report methods and results accurately in publications, to assist good practice by others.

Credit: 
University of Exeter

Ribosome assembly - The final trimming step

Ribosomes synthesize all the proteins in cells. Studies mainly done on yeast have revealed much about how ribosomes are put together, but a team at Ludwig-Maximilians-Universitaet (LMU) in Munich now reports that ribosome assembly in human cells requires factors that have no counterparts in simpler model organisms.

In every cell, hundreds of thousands of intricate molecular machines called ribosomes fabricate new proteins, extending each growing chain at a rate of a few amino acids per second. Not surprisingly therefore, the construction of these vital protein factories is itself a highly complex operation, in which more than 200 assembly factors are transiently involved. Mature ribosomes are made up of approximately 80 proteins and four ribosomal RNAs. But how these constituents are assembled in the correct order to yield a functional ribosome is still not fully understood. Moreover, most of our knowledge of the process comes from studies carried out on model organisms like bacteria and yeast, and may not necessarily be applicable to the cells of higher organisms. Researchers led by Professor Roland Beckmann (Gene Center, LMU Munich) have now uncovered new details of the crucial steps in the maturation of ribosomes in human cells.

Active ribosomes consist of two separately assembled particles, which differ in size and interact with each other only after the first steps in protein synthesis have taken place on the smaller of the two (in human cells, the '40S subunit'). Beckmann's team has used cryo-electron microscopy to determine the structures of several precursors of the 40S subunit isolated from human cells and follow the course of its maturation. "This study follows on from an earlier project, in which we obtained initial insights into the process," says Michael Ameismeier. He is a doctoral student in Beckmann's team and lead author of the new report, which is concerned with the final steps in the assembly of the small subunit.

At this late stage in the process, one end of the ribosomal RNA associated with the small particle protrudes from the body of the immature subunit. The last step in the maturation of the 18S ribosomal RNA consists in the removal of this now superfluous segment. To ensure that this reaction does not occur prematurely, the enzyme responsible - NOB1 - is maintained in an inactive state until it is required. The new study shows that the activation of NOB1 is preceded by a conformational change that results in the detachment of a binding partner from the enzyme. This in turn triggers a structural rearrangement in NOB1 itself, which enables the enzyme to snip off the protruding rRNA segment. "The activation of NOB1 is coordinated by another enzyme," Ameismeier explains. "Together with a protein we have discovered - which is not found in yeast - the latter enzyme inserts like a wedge into the maturing 40S subunit, and this facilitates the decisive conformational change in NOB1."

The authors have also shown that yet another protein not found in yeast plays an (as yet) enigmatic role in the maturation of the 40S subunit. "This demonstrates the importance of considering the human system separately from other experimental models," says Beckmann. Use of the evolutionarily simpler yeast system is sufficient for a basic understanding of the process. But certain pathological syndromes have been linked to errors in ribosomal biogenesis in humans, which provides an obvious rationale for the study of ribosomal assembly in human cell systems.

Credit: 
Ludwig-Maximilians-Universität München

Frequent, rapid testing could cripple COVID within weeks, study shows

Testing half the population weekly with inexpensive, rapid-turnaround COVID-19 tests would drive the virus toward elimination within weeks -- even if those tests are significantly less sensitive than gold-standard clinical tests, according to a new study published today by University of Colorado Boulder and Harvard University researchers.

Such a strategy could lead to "personalized stay-at-home orders" without shutting down restaurants, bars, retail stores and schools, the authors said.

"Our big picture finding is that, when it comes to public health, it's better to have a less sensitive test with results today than a more sensitive one with results tomorrow," said lead author Daniel Larremore, an assistant professor of computer science at CU Boulder. "Rather than telling everyone to stay home so you can be sure that one person who is sick doesn't spread it, we could give only the contagious people stay-at-home orders so everyone else can go about their lives."

For the study, published in the journal Science Advances, Larremore teamed up with collaborators at CU's BioFrontiers Institute and the Harvard T.H. Chan School of Public Health to explore whether test sensitivity, frequency, or turnaround time is most important to curb the spread of COVID-19.

The researchers scoured available literature on how viral load climbs and falls inside the body during infection, when people tend to experience symptoms, and when they become contagious.

They then used mathematical modeling to forecast the impact of screening with different kinds of tests on three hypothetical scenarios: in 10,000 individuals; in a university-type setting of 20,000 people; and in a city of 8.4 million.

When it came to curbing spread, they found that frequency and turnaround time are much more important than test sensitivity.

For instance, in one scenario in a large city, widespread twice-weekly testing with a rapid but less sensitive test reduced the degree of infectiousness, or R0 ("R naught"), of the virus by 80%. But twice-weekly testing with a more sensitive PCR (polymerase chain reaction) test, which takes up to 48 hours to return results, reduced infectiousness by only 58%. When the amount of testing was the same, the rapid test always reduced infectiousness better than the slower, more sensitive PCR test.

That's because about two-thirds of infected people have no symptoms and as they await their results, they continue to spread the virus.
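The following toy calculation (not the authors' epidemiological model, and with purely illustrative parameter values) shows why a fast, frequent test can block more transmission than a slower, more sensitive one: what matters is how much of the infectious period is cut off before isolation begins.

def fraction_transmission_averted(infectious_days, test_every_days,
                                  turnaround_days, detect_lead_days=0.0):
    """Toy estimate of the share of an infectious period removed by screening.
    Assumes uniform infectiousness over `infectious_days`, testing at a random
    phase every `test_every_days`, results after `turnaround_days`, and a test
    that turns positive `detect_lead_days` before infectiousness starts
    (a crude stand-in for higher analytic sensitivity)."""
    mean_wait_until_first_test = test_every_days / 2.0
    isolation_starts = mean_wait_until_first_test + turnaround_days - detect_lead_days
    days_removed = max(0.0, infectious_days - max(0.0, isolation_starts))
    return days_removed / infectious_days

# Rapid but less sensitive test, results back the same day:
print(fraction_transmission_averted(8, test_every_days=3.5, turnaround_days=0))
# More sensitive PCR that detects ~1 day earlier but returns results after 2 days:
print(fraction_transmission_averted(8, test_every_days=3.5, turnaround_days=2,
                                    detect_lead_days=1))

Under these made-up numbers, the rapid test removes a larger share of the infectious period despite being less sensitive, which is the qualitative point the modeling study makes far more rigorously.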

"This paper is one of the first to show we should worry less about test sensitivity and, when it comes to public health, prioritize frequency and turnaround," said senior co-author Roy Parker, director of the BioFrontiers Institute and a Howard Hughes Medical Institute investigator.

The study also demonstrates the power of frequent testing in shortening the pandemic and saving lives.

In one scenario, in which 4% of individuals in a city were already infected, rapid testing three out of four people every three days reduced the number ultimately infected by 88% and was "sufficient to drive the epidemic toward extinction within six weeks."

The study comes as companies and academic research centers are developing low-cost, rapid turnaround tests that could be deployed in large public settings or commercialized for do-it-yourself use.

Sensitivity levels vary widely. Antigen tests require a relatively high viral load -- about 1,000 times as much virus as the PCR test -- to detect an infection. Another test, known as RT-LAMP (reverse transcription loop-mediated isothermal amplification), can detect the virus at around 100 times the viral load required by PCR. The benchmark PCR test requires as little as 5,000 to 10,000 viral RNA copies per milliliter of sample, meaning it can catch the virus very early or very late.

In the past, federal regulators and the public have been reluctant to embrace rapid tests out of concern that they may miss cases early in infection. But, in reality, an infected person can go from 5,000 to 1 million viral RNA copies in 18 to 24 hours, said Parker.
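A quick back-of-the-envelope check of that growth rate (a calculation done here, not taken from the paper) shows just how brief that early stage is:

import math

# Growth from 5,000 to 1,000,000 viral RNA copies, as quoted above.
start, end = 5_000, 1_000_000
doublings = math.log2(end / start)          # about 7.6 doublings
for hours in (18, 24):
    print(f"{hours} h rise -> doubling time of roughly {hours / doublings:.1f} h")

Roughly 7.6 doublings in 18 to 24 hours corresponds to a doubling time of about 2.4 to 3.1 hours, underscoring how quickly an infection moves from barely detectable to easily detectable.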

"There is a very short window, early in infection, in which the PCR will detect the virus but something like an antigen or LAMP test won't," Parker said.

And during that time, the person often isn't contagious, he said.

"These rapid tests are contagiousness tests," said senior co-author Dr. Michael Mina, an assistant professor of epidemiology at the Harvard T.H. Chan School of Public Health. "They are extremely effective in detecting COVID-19 when people are contagious."

They are also affordable, he added. The rapid tests can cost as little as $1 each and return results in 15 minutes. Some PCR tests can take several days.

Mina envisions a day when the government sends simple, cheap DIY tests to every home. Even if half of Americans tested themselves weekly and self-isolated if positive, the result would be profound, he said.

"Within a few weeks we could see this outbreak going from huge numbers of cases to very manageable levels," Mina said.

Rapid testing could also be the key to breathing life back into former super spreader threats like football stadiums, concert venues and airports, with patrons testing themselves on the way in and still wearing masks as a precautionary measure, Larremore said.

"Less than .1% of the current cost of this virus would enable frequent testing for the whole of the U.S. population for a year," said Mina, referencing a recent Harvard economic analysis.

The authors say they are heartened to see that several countries have already begun testing all of their citizens, and hopeful that the new U.S. administration has named rapid testing as a priority.

"It's time to shift the mentality around testing from thinking of a COVID test as something you get when you think you are sick to thinking of it as a vital tool to break transmission chains and keep the economy open," Larremore said.

Credit: 
University of Colorado at Boulder

Study: Countering hate on social media

image: Figure 1 from the paper: Examples of Twitter conversations (reply trees) with labeled hate (red), counter (blue), and neutral speech (white). The root node is shown as a large square.

Image: 
Garland et al, EMNLP 2020 <https://www.aclweb.org/anthology/2020.alw-1.13/>

The rise of online hate speech is a disturbing, growing trend in countries around the world, with serious psychological consequences and the potential to impact, and even contribute to, real-world violence. Citizen-generated counter speech may help discourage hateful online rhetoric, but it has been difficult to quantify and study. Until recently, studies have been limited to small-scale, hand-labeled endeavors.

A new paper published in the proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) offers a framework for studying the dynamics of online hate and counter speech. The paper offers the first large-scale classification of millions of such interactions on Twitter. The authors developed a learning algorithm to assess data from a unique situation on German Twitter, and the findings suggest that organized movements to counteract hate speech on social media are more effective than individuals striking out on their own. 

The authors will present their paper, "Countering hate on social media: Large-scale classifications of hate and counter speech" during the November 20, 2020, Workshop on Online Abuse and Harms, which is running in conjunction with EMNLP 2020. 

"I've seen this big shift in civil discourse in the last two or three years towards being much more hateful and much more polarized," says Joshua Garland, a mathematician and Applied Complexity Fellow at the Santa Fe Institute. "So, for me, an interesting question was: what's an appropriate response when you're being cyber-bullied or when you're receiving hate speech online? Do you respond? Do you try to get your friends to help protect you? Do you just block the person?" 

To study such questions scientifically, researchers must first have access to a wealth of real-world data on both hate speech and counter-speech, and the ability to distinguish between the two. That data existed, and Garland and collaborator Keyan Ghazi-Zahedi at the Max Planck Institute in Germany found it in a five-year interaction that played out over German Twitter: As an alt-right group took to the platform with hate speech, an organized movement rose up to counter it.

"The beauty of these two groups is they were self-labeling," explains Mirta Galesic, the team's social scientist and a professor of human social dynamics at SFI. She says researchers who study counter-speech usually have to employ hundreds of students to hand-code thousands of posts. But Garland and Ghazi-Zahedi were able to input the self-labeled posts into a machine-learning algorithm to automate large swaths of the classification. The team also relied on 20-30 human coders to check that the machine classifications matched up with intuition about what registers as hate and counter-speech.

The result was a dataset of unprecedented size that allows the researchers to analyze not just isolated instances of hate and counter speech, but also compare long-running interactions between the two.

The team collected a dataset of millions of tweets posted by members of the two groups, using these self-identified tweets to train their classification algorithm to recognize hate and counter speech. Then, they applied their algorithm to study the dynamics of some 200,000 conversations that occurred between 2013 and 2018. The authors plan to soon publish a follow-up paper analyzing the dynamics revealed by their algorithm.
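The sketch below shows, in broad strokes, how self-labeled tweets can be used to train such a classifier. It is a generic text-classification pipeline (TF-IDF features with logistic regression, via scikit-learn) on placeholder texts; it is not the authors' actual model, features or data.

# Generic illustration of training a hate/counter-speech classifier on
# self-labeled tweets. The pipeline and the example texts are placeholders;
# they are not the model or data used in the Garland et al. paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tweets from the two self-labeling groups supply the training labels.
train_texts = ["example tweet from the hate group ...",
               "example tweet from the counter-speech group ...",
               "another hate-group tweet ...",
               "another counter-speech tweet ..."]
train_labels = ["hate", "counter", "hate", "counter"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(train_texts, train_labels)

# The trained classifier can then label tweets in previously unseen reply
# trees, the kind of conversations analyzed at scale in the study.
print(classifier.predict(["a new, unlabeled reply ..."]))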

"Now we can resolve a massive data set from 2016 to 2018 to see how the proportion of hate and counter-speech changed over time, who gets more likes, who is retweeted, and how they replied to each other" Galesic says.

The quantity of data, a tremendous boon, also makes it "incredibly complex," Garland notes. The researchers are in the process of comparing tactics for both groups and pursuing broader questions such as whether certain counter-speech strategies are more effective than others.

"What I'm hoping is that we can come up with a rigorous social theory that tells people how to counter hate in a productive way that's non-polarizing," Garland says, "and bring the Internet back to civil discourse."  

Credit: 
Santa Fe Institute

Supramolecular chemistry - Self-constructed folded macrocycles with low symmetry

Molecules that are made up of multiple repeating subunits, known as monomers, which may or may not vary in chemical structure, are classified as macromolecules or polymers. Examples exist in nature, including proteins and nucleic acids, which are at the heart of all biological systems. Proteins not only form the basis of structural elements in cells, they also serve as enzymes - which catalyze essentially all of the myriad chemical transformations that take place in living systems. In contrast, nucleic acids such as DNA and RNA serve as informational macromolecules. DNA stores the cell's genetic information, which is selectively copied into RNA molecules that provide the blueprints for the synthesis of proteins. In addition, long chains composed of sugar units provide energy reserves in the form of glycogen, which is stored in the liver and the muscles. These diverse classes of polymeric molecules all have one feature in common: they spontaneously fold into characteristic spatial conformations, for example the famous DNA double helix, which in most cases are essential for their biochemical functions.

Professor Ivan Huc (Department of Pharmacy, LMU) studies aspects of the self-organization processes that enable macromolecules to adopt defined folded shapes. The molecular structures found in nature provide him with models whose properties he tries to reproduce in the laboratory with non-natural molecules that are neither proteins, nucleic acids, nor sugar-like. More specifically, he uses the tools of synthetic chemistry to elucidate the underlying principles of self-organization - by constructing molecules that are expressly designed to fold into predetermined shapes. Beginning with monomers that his group has developed, he sets out to produce what he calls 'foldamers', by assembling the monomers one by one to generate a folded macromolecule.

Structures with low degrees of symmetry

"The normal way to get the complex structure of proteins is to use different types of monomers, called amino acids", as Huc reports. "And the normal method to connect different amino acids in the the correct order is to link them one by one." The sequence of amino acids contains the folding information that allows different protein sequences to fold in different ways.

"But we discovered something unexpected and spectacular", comments Huc. He and his colleagues in Munich, Groningen, Bordeaux and Berlin used organic, sulfur-containing monomers to spontaneously get cyclic macromolecules with a complex shape, as illustrated by their low degree of symmetry, without requiring a specific sequence. The macromolecules self-synthesize - no further conditions are necessary. "We only put one monomer type in a flask and wait", Huc says. "This is typical for a polymerization reaction, but polymers from a single monomer usually don´t adopt complex shapes and don't stop growing at a precise chain length."

To further control the reaction, the scientists also used either a small guest molecule or a metal ion. The regulator binds within the growing macromolecule and causes monomers to arrange themselves around it. By choosing a regulator with the appropriate characteristics, the authors of the new study were able to produce structures with a predetermined number of subunits. The resulting cyclic macromolecules consisted of either 13, 17 or 23 subunits; because 13, 17 and 23 are prime numbers, the corresponding folded shapes exhibit low degrees of symmetry.

A model for biological and industrial processes

Interest in the elucidation of such mechanisms is not restricted to the realm of basic research. Huc and his colleagues hope that their approach will lead to the fabrication of designer plastics. Conventional polymers usually consist of mixtures of molecules that vary in length (i.e. the number of monomers they contain). This heterogeneity has an impact on their physical properties. Hence, the ability to synthesize polymer chains of an exact length and/or geometry is expected to lead to materials with novel and interesting behaviors.

Furthermore, foldamers like those that have now been synthesized show close structural resemblances to biopolymers. They therefore offer an ideal model system in which to study the properties of proteins. Every protein is made up of a defined linear (i.e. unbranched) sequence of amino acids, which constitutes its 'primary structure'. But most amino-acid chains fold into local substructures such as helically coiled stretches, or parallel strands that can form sheets. These units represent the protein's secondary structure. The term 'tertiary structure' is applied to the fully folded single chain. This in turn can interact with other chains to form a functional unit or quaternary structure.

Huc's ultimate goal is to mimic complex biological mechanisms using structurally defined, synthetic precursors. He wants to understand how, for example, enzymes fold into the correct, biologically active conformation following their synthesis in cells. Molecules whose properties can be precisely controlled in the laboratory provide ideal models with which to work out the answers and perhaps to go beyond enzymes themselves.

Credit: 
Ludwig-Maximilians-Universität München

States unfairly burdening incarcerated people with 'pay-to-stay' fees

Pay-to-stay, the practice of charging people to pay for their own jail or prison confinement, is being enforced unfairly through criminal, civil and administrative law, according to a new study led by Rutgers University-New Brunswick.

The study, published in the Journal of Contemporary Criminal Justice, finds that charging pay-to-stay fees is triggered by criminal justice contact but possible due to the co-opting of civil and administrative institutions, like social service agencies and state treasuries that oversee benefits, which are outside the realm of criminal justice.

"A person can be charged 20 dollars to 80 dollars a day for their incarceration," said author Brittany Friedman, an assistant professor of sociology and a faculty affiliate of Rutgers' criminal justice program. "That per diem rate can lead to hundreds of thousands of dollars in fees when a person gets out of prison. To recoup fees, states use civil means such as lawsuits and wage garnishment against currently and formerly incarcerated people, and regularly use administrative means such as seizing employment pensions, tax refunds and public benefits to satisfy the debt."

Friedman says states require incarcerated people to declare their assets upon arrival to the prison and actively examine their inmate accounts to uncover any assets. People with pensions, savings accounts or regular deposits to their inmate accounts by friends and family members are at risk of suit.

The study traced pay-to-stay statutes within criminal legal codes to showcase how a conviction and incarceration can trigger a host of civil penalties as a mechanism for states to recoup the cost of incarcerating people.

"Every state in the U.S., except Hawaii, charges pay-to-stay fees," said Friedman. "These fees and civil recoupment strategies force us to question the purpose and morality of criminal justice."

Friedman says the rationales justifying these fees routinely fail to recognize them as a form of punishment; instead, policymakers portray incarcerated people as using up system resources and frame pay-to-stay as financial reimbursement to the state. This justification allows pay-to-stay statutes to survive legal arguments alleging double punishment.

Civil penalties are imposed on family members if the defendant cannot pay and, in states such as Florida, Nevada and Idaho, can apply even after the original defendant is deceased.

Friedman says states will often enlist their attorney general's office to sue people in the hope of uncovering more assets during civil motions. Typically, the formerly incarcerated person represents themselves, because they are not guaranteed an attorney in a civil case yet cannot afford a private one.

Suing often results in civil judgments in the several thousands of dollars, with many cases reaching more than $100,000.

"When we think about the impact of incarceration on the ability to re-enter society, imagine the damage done when we allow an Attorney General to sue the incarcerated for six-figure sums they will never recoup," said Friedman. "We must ask ourselves -- are these fees and civil lawsuits simply to make a point? Should we make people pay back the state both through incarceration and financial indebtedness, often in perpetuity?"

The study suggests that states focus on reducing prison and jail populations and shrinking their criminal justice system through legal reform, such as doing away with mandatory minimums and "three-strike" laws.

"People will have a better chance at re-entering society if we shrink the size of our correctional system and abolish pay-to-stay fees as a revenue scheme simultaneously," Friedman said.

According to Friedman, more research is needed to expose the underlying consequences for inequality. As such, she is a part of a collaborative team with researchers from North Carolina State and Northwestern University comparatively investigating pay-to-stay and civil lawsuits against incarcerated people across a number of states.

Credit: 
Rutgers University

Commentary: Want to understand health disparities? Get your antiracist goggles on

image: African American mom and daughter take a trip to the doctor's office for an appointment.

Image: 
Dell Medical School

AUSTIN, Texas -- When it comes to understanding why children from non-white race groups have such poor health outcomes compared with their white counterparts, it's time for researchers to look beyond their genes and delve deeper into social factors, according to a commentary published today in the journal Pediatrics.

"Framing race in biological terms within health sciences is not only shortsighted, but it also absolves us from dealing with how structural racism and other problems in society are far stronger causes of disparities than genetics," said co-author Elizabeth Matsui, M.D., professor of pediatrics and population health at Dell Medical School at The University of Texas at Austin.

Matsui points to decades of research into racial and ethnic health disparities that have failed to resolve disproportionate health outcomes among kids for conditions such as premature birth, asthma and obesity.

Matsui and her co-authors, Adewole Adamson, M.D., of Dell Med and Tamara Perry, M.D., of the University of Arkansas, argue that observed associations between race, ethnicity and disease among minority populations are misconstrued as evidence that innate biological differences are a key cause of health disparities, even when there is no scientific basis for such a claim.

"Take a Black child with asthma, for instance," Matsui said. "Many of us are inclined to conflate the color of his skin with an intrinsic biologic difference rather than thinking about his condition not only in the context of where he lives, but also the history that led to that context. And it's this context that's overwhelmingly responsible for the disproportionate burden of asthma and other chronic conditions within Black communities."

The commentary cites other research exemplifying this problem, including a study on atopic dermatitis that described inflammatory markers in skin between Black and white people, without discussing the potential role of contextual factors in causing these differences.

"This overemphasis on biology is persistent, even though these genetic differences between racial groups are often meaningless," said Adamson, a dermatologist and assistant professor of internal medicine at Dell Med. "Until we recast minority health research that positions race and ethnicity as social - not biologic - constructs, we'll see little progress. So, we're calling for a research framework that is explicitly antiracist."

Matsui, Adamson and Perry contend that a new research framework must be:

Embedded in systems that fund, evaluate, disseminate and promote health sciences,

Guided by antiracist principles,

Explicitly considerate of contextual factors such as race and ethnicity when designing or interpreting studies,

Engaged with the community disproportionately affected by the health condition being studied.

The researchers also advocate for the construction of "trans-disciplinary" research teams.

"We want experts at the table that include social scientists, race scholars, environmental health scientists, epidemiologists, population geneticists, behavioral scientists and others," Matsui said. "Right now, that's simply not how most investigative teams are structured."

Matsui believes the pediatric field is best positioned to lead this recasting of health disparities research because of its constant focus on prevention and its routine collaboration with social workers, schools and other public services focused on child wellness.

"Although this agenda is ambitious, it's critical in the effort to have a meaningful impact on minority health," said Matsui. "As the issue of structural racism grows louder, the opportunity for the pediatric community to lead the implementation of an antiracist research agenda has never been greater."

Credit: University of Texas at Austin

Study of hope and optimism: New paper examines research in emerging fields

image: Explore the latest research in this white paper by Professor Michael Milona. (Image: John Templeton Foundation)

A new paper published by the John Templeton Foundation explores the latest scientific and philosophical research on the related but distinct virtues of hope and optimism. The 45-page white paper, written by Michael Milona, a philosophy professor at Ryerson University in Toronto, Canada, examines findings on the benefits and risks of both hope and optimism. Milona gives particular attention to the results of a three-year, $4.4 million project led by Samuel Newlands at Notre Dame and Andrew Chignell at the University of Pennsylvania, which funded projects by more than 29 researchers worldwide on topics including the effects of hope and optimism in education, faith, healthcare, politics, and more.

A HOPEFUL OVERVIEW

Milona's white paper surveys more than 145 sources from the last 50 years, with special attention to new work in the past decade. According to his analysis, philosophers and psychologists view optimism and hope as distinct but related traits. Optimism is generally categorized as being dispositional (involving a general tendency to expect things to go well) or contextualized (being oriented around a specific goal). Optimism can give motivation, improve health, and help people to cope in tough times. At the same time, optimism can run the risk of being untethered from reality, which may set people up for disappointment.

While optimism is the belief that a good outcome will occur, hope, Milona writes, is "something we can hold on to even when we've lost confidence." Hope is connected with both belief and desire. Though often allied with positive emotions, it can also connect with negative emotions such as fear. 

Critics can cast hope as overconfidence, demotivation, or otherworldliness, but as with optimism, hope can motivate us and be a primary factor in our personal identities. In both religious and secular outlooks, hope figures significantly in how we think about the reality of death -- both as we approach life's end and as we contemplate what might happen after we die.

Milona also examines several sub-varieties of hope, including "Christian Hope" -- which emphasizes confidence even when one is uncertain of the details -- and "Pragmatist Hope," which emphasizes flexibility, a commitment to what works, and taking the role of a participant rather than observer. He quotes Cornel West's 2008 book Hope on a Tightrope: "Hope," West says, "enacts the stance of the participant who actively struggles against the evidence."

Credit: John Templeton Foundation

Researchers recommend more transparency for gene-edited crops

Researchers at North Carolina State University call for a coalition of biotech industry, government and non-government organizations, trade organizations, and academic experts to work together to provide basic information about gene-edited crops to lift the veil on how plants or plant products are modified and provide greater transparency on the presence and use of gene editing in food supplies.

At issue is a May 2020 U.S. Department of Agriculture rule called SECURE (sustainable, ecological, consistent, uniform, responsible, efficient) that governs genetically engineered organisms. The rule is expected to exempt most genetically modified plants from pre-market field testing and data-based risk assessment; in fact, the USDA estimates that 99% of biotech crops would receive this exemption.

NC State researchers Jennifer Kuzma and Khara Grieger, in a policy forum paper published in the journal Science, say that SECURE, though decades in the making, falls short of providing enough public information about gene-edited crops in the food supply. Given consumer interest in GM foods and labeling, the lack of public information on gene-edited crops could erode trust and confidence as these crops begin to enter the marketplace and become more commonplace.

"It's pretty clear that consumers want to know which products are genetically modified and which are not, and we suspect that these desires will not be different for gene-edited crops," said Kuzma, the Goodnight-NC GSK Foundation Distinguished Professor in the Social Sciences and co-director of the Genetic Engineering and Society Center at NC State. "Crop developers, including companies, have signaled that they want to do better with gene editing to improve public trust. We present a model for them to improve transparency and obtain certification based on providing information about their gene-edited and other GM crops in a public repository."

To provide more transparency, the NC State researchers recommend the creation of CLEAR-GOV, or a "Community-Led and Responsive Governance" coalition, that would provide access to basic information on biotech crops in accessible language. That would include the species and variety of plant, type of trait modified, improved quality provided by the trait modification, general areas where the crop is grown, and downstream uses of the crop. CLEAR-GOV would be operated through a non-profit organization staffed by experts from a variety of academic fields.
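To make the proposal concrete, below is a minimal sketch of how a single repository entry might be structured, assuming a hypothetical Python data model; the class name, field names, and example values are illustrative only and do not come from the paper.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CropEntry:
    # Hypothetical record for one biotech crop in a CLEAR-GOV-style repository
    species: str                 # plant species
    variety: str                 # plant variety or line name
    trait_modified: str          # type of trait modified
    improved_quality: str        # plain-language benefit of the modification
    growing_regions: List[str] = field(default_factory=list)  # general areas where the crop is grown
    downstream_uses: List[str] = field(default_factory=list)  # downstream uses of the crop

# Purely illustrative example entry (values invented for demonstration)
entry = CropEntry(
    species="Solanum lycopersicum",
    variety="hypothetical variety X",
    trait_modified="fruit ripening",
    improved_quality="longer shelf life",
    growing_regions=["southeastern United States"],
    downstream_uses=["fresh-market food"],
)

Each field corresponds to one of the information categories the researchers say a CLEAR-GOV repository would cover.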

"If leadership at a non-profit, independent research organization decided that this is something that they are passionate about, they could see a market value in housing this coalition and hosting this depository," said Grieger, a paper co-author and assistant professor and extension specialist in NC State's Department of Applied Ecology.

Kuzma adds that CLEAR-GOV would fill an important gap for consumers and other stakeholder groups who want to know more about gene-edited products in the marketplace.

"Because many gene-edited crops would be exempt under SECURE and new GM food-labeling rules may also not apply to them, there needs to be some information repository for companies that want to do the right thing and be more transparent," Kuzma said. "Our recommendations would provide a mechanism for that."

Credit: North Carolina State University