Culture

A fair reward ensures a good memory

How does our memory work and how can we optimize its mechanisms on a daily basis? This question is at the heart of many neuroscience research projects. Among the brain structures examined to better understand memory mechanisms, the reward system is now at the centre of investigations. Through the examination of brain activity in healthy human subjects, scientists from the University of Geneva (UNIGE) have highlighted the lasting positive effect of a reward - monetary, in this case - on the ability of individuals to retain a variety of information. Moreover, and much more surprisingly, the research team demonstrated that the average accumulation of reward should be neither too small nor too large. By ensuring an effective neural dialogue between the reward circuit and the memory circuit, this delicate balance allows the proper encoding of memories in our brain. These results can be read in Nature Communications.

Empirically, it seems quite logical that obtaining a reward can improve the memories associated with it. But what are the brain mechanisms at work, and how can we exploit them to optimize our memory capacity? "The positive influence of a reward on memory is a well-known phenomenon," says Sophie Schwartz, full professor in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine, who led this work. "However, our experiment aimed to take a further step in understanding this mechanism by looking at two important aspects: does the effect last over time and what role does the accumulation of reward play?"

A measured challenge to motivate the brain

To answer these questions, the scientists have developed an experiment using functional magnetic resonance imaging, an imaging technique that allows real-time observation of the brain in action. About 30 healthy subjects were asked to remember associations between objects and people; each correct answer was associated with points gained, and each incorrect answer with points lost (the points were then converted into money). Twenty minutes later, the subjects were asked to retrieve these associations to earn additional points. Critically, the average number of points that could be gained varied over the course of the experiment.

"Contrary to what one might have thought, the best results were not associated with the highest accumulation of rewards, the point where subjects should have been the most motivated", says Kristoffer Aberg, a researcher now at the Weizmann Institute of Science and the first author of this work. The most effective? Somewhere between the highest and lowest accumulated rewards. "Our brain needs rewards to motivate us, but also challenges", explains Sophie Schwartz. "If the task is too easy, motivation decreases as quickly as if it is too difficult, and that affects our ability to encode information. Imagine picking berries in the forest: if they are everywhere, you do not have to remember where to find them. If there are only a few, the effort required to pick them is too great in relation to the possible gain - a few berries will not feed us. Now, if clusters of berries are scattered throughout the forest, remembering their exact location will allow us to pick more in a short time."

A dialogue between brain areas

In the brain, memory is primarily managed by the hippocampus, a region of the brain responsible for encoding and storing memories. When a reward is involved, however, another region is activated, the ventral tegmental area, which is involved in the reward system and responsible for the release of dopamine related to the satisfaction of obtaining a reward. "It is the dialogue between these two brain areas that helps maintain motivation, improve learning, and consolidate memories, even over time," explains Kristoffer Aberg.

This experiment shows the importance of motivation in memory and learning, but also the subtle, and probably individual-specific, balance that needs to be struck. These lessons are particularly useful in the school environment, with the idea of creating learning contexts that foster this motivation according to the needs of each child.

Credit: 
Université de Genève

How fish got onto land, and stayed there

image: A group of Pacific leaping blennies out of the water on the foreshore of the island of Guam.

Image: 
Terry Ord

Research on blennies, a family of fish that have repeatedly left the sea for land, suggests that being a 'jack of all trades' allows species to make the dramatic transition onto land but adapting into a 'master of one' allows them to stay there. The findings are published in the British Ecological Society journal Functional Ecology.

Researchers from the University of New South Wales and the University of Minnesota pooled data on hundreds of species of blennies, a diverse family of fish where some are aquatic and others have left the water completely. They found that a flexible diet and behaviour were likely to be instrumental in the transition to land.

However, once out of the water, restrictions on the type of food available triggered major evolutionary changes, particularly to their teeth, as land dwelling blennies have become specialists in scraping algae and detritus from rocks.

Dr Terry Ord, lead author of the research, said: "The implications of our findings are that having a broad diet or being behaviourally flexible can help you move into a new habitat. But once there, this flexibility becomes eroded by natural selection. This presumably means those highly specialised species are less likely to be able to make further transitions, or cope with abrupt environment changes in their existing habitat."

The scenario of fish colonising land has obvious parallels with the origin of all land vertebrates. "Fossils can give us important insights into how that transition might have unfolded, and the types of evolutionary adaptations it required or produced. But having a contemporary example of fish making similar ecological transitions can also help us understand the general challenges that are faced by fish out of the water," said Dr Ord.

Blennies are a remarkable family of fish with different species occupying strikingly different environments. Some are aquatic. Others spend time in and out of the water in the intertidal zone, an extreme environment with fluctuating water levels and pools that can rapidly change in temperature and oxygen levels.

Some species of blenny are terrestrial and spend almost their entire lives out of the water in the splash zone and must keep moist in order to breathe through their skin and gills. Despite these challenges, blennies have been incredibly successful in repeatedly making these dramatic transitions.

Because of this diversity, different blenny fish species represent clearly defined stages of the invasion process between two completely different environments. This makes them a unique group of animals to study.

Dr Ord explained the origin of the study with his co-author Dr Peter Hundt: "We both had extensive data collected on many different species of blenny from across the world. Peter had detailed information on diet and teeth morphology, while I had lots of data on behaviour and frequency of different species emerging from water for brief or extended periods on land.

"We threw a set of complex evolutionary statistical models at this combined data and we were able to reveal the sequence of events that likely allowed aquatic marine fishes to ultimately evolve into fishes that could leave water and then colonise land. Our study also showed how those species on land adaptively changed to better suit the specialised diet needed to survive on land."

The authors caution that although the observational data suggests a flexible diet and behaviour allows a transition to new environments to occur, it cannot confirm causality. "Ideally we would perform some type of experimental investigation to try to establish causality. What this experimental study might be is hard to imagine at this stage, but we're working on it," said Dr Ord.

The authors are also looking to further investigate how the invasion of land has impacted other aspects of blenny fish behaviour, ecology and bodies. "Terrestrial blennies are really agile out of water, and I suspect they've adapted their body shape to allow them to hop about the rocks so freely, which in turn implies they might not be able to go back to the water," said Dr Ord. "It would also be exciting to know how their sensory systems might have adapted out of the water as well, given vision and smell would probably work quite differently in these environments."

Credit: 
British Ecological Society

Association between morbidity and poverty reversed during early US COVID-19 epidemic

The first confirmed case of COVID-19 in the USA was on January 20, 2020 in Washington State. Since then, there have been over two million confirmed cases and 113,000 deaths in the country. A shortage of tests has beleaguered the US healthcare system from the beginning of the pandemic.

The media have highlighted the apparently disproportionate toll that COVID-19 has taken among people of color and poorer communities in urban areas. This has been attributed to lower availability of quality healthcare, including testing and treatment for COVID-19, a higher burden of risk factors such as diabetes and heart or pulmonary disease, a lower probability of working from home, and a higher probability of using public transport.

Researchers from Ball State University in Muncie, Indiana, analyzed the pandemic's evolution during the first ten weeks in the US in a new paper in Frontiers in Sociology. Contrary to their expectation and anecdotal reports, they found a shift over time in the association between poverty and the number of confirmed cases early on during the pandemic, without a similar shift in the association between poverty and the number of deaths.

"The results of our study point to a higher incidence in both COVID-19 diagnosis and deaths due to the virus in counties that were more urban and less resourced. These trends changed over time, so that by the first of April the identification of COVID-19 occurred at a higher rate in relatively better resourced counties, thereby reversing the earlier trend", says first author Dr W. Holmes Finch, Distinguished Professor of Educational Psychology at Ball State University.

The researchers analyzed a dataset of confirmed COVID-19 cases and deaths in each of 2,853 counties between January 21 and April 1, collated from state and local health departments by The New York Times. Data on poverty were obtained from the Poverty Solutions Initiative at the University of Michigan, which includes an "Index of Deep Disadvantage" (IDD) for each county. The higher the IDD, the more prosperous the county.

The researchers show that from January to March, the association between the IDD and the number of confirmed cases was negative, indicating that counties with greater levels of reported poverty had greater morbidity. But by April 1, this association had become positive, meaning that richer counties had a larger number of confirmed cases. This result, including the inversion of the association with time, was also consistent if tests were done on individual components of the IDD.

In contrast, the pattern for the number of deaths from COVID-19 was different, with a disproportionately greater number of deaths in counties with a low IDD (hence, poorer), especially after April 1. The positive association between deaths and poverty later in the epidemic was likewise consistent when components of the IDD were analyzed individually: for example, a greater number of deaths from COVID-19 was associated with a higher percentage of residents living in poverty or deep poverty, a higher incidence of low birthweights, and with urban counties, and these associations were stronger in April than in March.

How to explain these counter-intuitive results? The authors first discuss one possible explanation, namely that the virus truly became less prevalent in poorer urban communities over time, for example because it was less successful at infecting others than in richer urban communities, or because lockdown and social distancing were more efficient in poorer urban counties. But they hypothesize that another explanation is more likely: namely, that the number of confirmed cases has been strongly underestimated in poorer counties because the limited testing resources were mainly diverted to richer areas. More research will be needed to confirm this hypothesis, with its troubling implications for social equity.

"The results of this study point to the importance of access to adequate testing resources for those living in under-resourced communities in the United States, particularly as the need for testing grew nationwide with the spread of the coronavirus. In addition, efforts to mitigate the spread of the virus need to take into account the working lives of those individuals employed in areas such as the service sector, healthcare, and other essential occupations. Finally, and perhaps most importantly, the results of this study point to the need for the American health care system to pay careful attention to public health emergencies in every part of society," concludes Dr Maria E. Hernández Finch, the study's last author.

Credit: 
Frontiers

Fish evolution in action: Land fish forced to adapt after leap out of water

image: A group of Pacific leaping blennies out of the water on the foreshore of the island of Guam.

Image: 
Terry Ord, UNSW Sydney

A diverse diet and flexible behaviour may have empowered blenny fish to make a dramatic transition out of the water - but once on land, they have been forced to become specialised, a new study led by UNSW shows.

The analysis of multiple big datasets, a collaboration between UNSW and the University of Minnesota, was published today in the journal Functional Ecology.

"Some species of blennies never emerge from water and others stay on land full-time as adults - so they present a unique opportunity to study fish evolution in action and explore the transition from water to the land in a living animal," says study lead and UNSW evolutionary ecologist, Associate Professor Terry Ord.

"In this study, we found that having a flexible diet has likely allowed blennies to make a successful leap onto land - but once out of the water, these remarkable land fish have faced restrictions on the type of food available to them.

"These restrictions have triggered major evolutionary changes in their morphology, specifically dramatic changes in their teeth, as they have been forced to become specialist scrappers of the rocks to forage on algae and detritus."

Looking at the fish pre- and post-transition could hold broader clues about what makes such a dramatic move successful, the scientists say.

"There is ample evidence that transitions from one environment into another are responsible for the evolution of many of the species we see today, as well as the diversity in morphology and behaviour we see across different species. But little is known about the mechanisms behind what drives those transitions in the first place," A/Prof Ord says.

To shed light on exactly that, the researchers applied a set of complex evolutionary statistical models to their data. They were able to reveal the sequence of events that likely allowed aquatic marine fish to ultimately evolve into fish that could leave water and colonise land - and what happened once they got there.

"Our findings suggest that being a jack-of-all trade - for example, being flexible in the types of foods you can eat and being flexible in leaving water for very brief periods of time - can open the door to making what would seem to be a really dramatic change in habitat," A/Prof Ord says.

This core insight can be extended to any species making a move between habitats and might have other implications as well.

"The flipside of our study suggests that some species that are already uniquely specialised to their existing environment are probably less able to make further transitions in habitat, or might not cope well if abrupt changes occur to their environment, for example as a consequence of the current climate crisis."

Next up: experimental blenny study

The scientists say there are limits to their study and further research is needed.

"This study is essentially observation, or what we call a correlational study. The data is suggestive that diet and behavioural flexibility is important for making major transitions in habitat, and that once those transitions have occurred, that flexibility is eroded by adaptation," A/Prof Ord says.

"But ideally we would want to perform some type of experimental investigation to try to establish causality - that it is flexibility in diet and behaviour specifically and not something else that allows such transitions to occur. It's possible that diet or behavioural flexibility are not responsible, and that some other currently unknown factor is. What this experimental study might be is hard to imagine at this stage, but we're working on it."

Previous studies by A/Prof Ord found that avoiding nasty aquatic predators is a big motivator for blennies to spend time out of the water, and that once on land, they use camouflage to avoid further attacks by other predators on land.

Credit: 
University of New South Wales

CMU method makes more data available for training self-driving cars

image: A technique called scene flow can be used to predict the future position of a cyclist by comparing the current lidar point cloud of a street scene, in green, with the point cloud from the previous time step in the sequence, shown in red. Carnegie Mellon University researchers have developed a method that increases the amount of data available for training such systems.

Image: 
Carnegie Mellon University

PITTSBURGH--For safety's sake, a self-driving car must accurately track the movement of pedestrians, bicycles and other vehicles around it. Training those tracking systems may now be more effective thanks to a new method developed at Carnegie Mellon University.

Generally speaking, the more road and traffic data available for training tracking systems, the better the results. And the CMU researchers have found a way to unlock a mountain of autonomous driving data for this purpose.

"Our method is much more robust than previous methods because we can train on much larger datasets," said Himangi Mittal, a research intern working with David Held, assistant professor in CMU's Robotics Institute.

Most autonomous vehicles navigate primarily based on a sensor called a lidar, a laser device that generates 3D information about the world surrounding the car. This 3D information isn't images, but a cloud of points. One way the vehicle makes sense of this data is by using a technique known as scene flow. This involves calculating the speed and trajectory of each 3D point. Groups of points moving together are interpreted via scene flow as vehicles, pedestrians or other moving objects.

In the past, state-of-the-art methods for training such a system have required the use of labeled datasets -- sensor data that has been annotated to track each 3D point over time. Manually labeling these datasets is laborious and expensive, so, not surprisingly, little labeled data exists. As a result, scene flow training is instead often performed with simulated data, which is less effective, and then fine-tuned with the small amount of labeled real-world data that exists.

Mittal, Held and robotics Ph.D. student Brian Okorn took a different approach, using unlabeled data to perform scene flow training. Because unlabeled data is relatively easy to generate by mounting a lidar on a car and driving around, there's no shortage of it.

The key to their approach was to develop a way for the system to detect its own errors in scene flow. At each instant, the system tries to predict where each 3D point is going and how fast it's moving. In the next instant, it measures the distance between the point's predicted location and the actual location of the point nearest that predicted location. This distance forms one type of error to be minimized.

The system then reverses the process, starting with the predicted point location and working backward to map back to where the point originated. At this point, it measures the distance between the predicted position and the actual origination point, and the resulting distance forms the second type of error.

The system then works to correct those errors.
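The two self-checking error terms described above can be sketched in a few lines of NumPy. This is an illustrative simplification, not the CMU code: the function and variable names are assumptions, and the reverse flow is simply re-predicted by the same model applied in the opposite direction.

```python
import numpy as np

def nearest_neighbor_dist(points, targets):
    """For each point, the distance to its nearest neighbor in targets."""
    # Pairwise distances between the two clouds, shape (N, M).
    d = np.linalg.norm(points[:, None, :] - targets[None, :, :], axis=-1)
    return d.min(axis=1)

def self_supervised_errors(cloud_t, cloud_t1, predict_flow):
    """Compute the two errors: nearest-neighbor and cycle consistency.

    cloud_t, cloud_t1 : (N, 3) and (M, 3) lidar points at consecutive times.
    predict_flow      : callable returning a per-point (N, 3) motion estimate.
    """
    flow = predict_flow(cloud_t, cloud_t1)
    predicted = cloud_t + flow
    # Error 1: each predicted point should land near some actual point
    # in the next frame's cloud.
    forward_error = nearest_neighbor_dist(predicted, cloud_t1).mean()
    # Error 2 (cycle consistency): flowing the predicted points backward
    # should return them to where they originated.
    reverse_flow = predict_flow(predicted, cloud_t)
    cycled = predicted + reverse_flow
    backward_error = np.linalg.norm(cycled - cloud_t, axis=1).mean()
    return forward_error, backward_error
```

Neither error requires labels: both are computed directly from the raw point clouds, which is what lets the method train on unlabeled driving data.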

"It turns out that to eliminate both of those errors, the system actually needs to learn to do the right thing, without ever being told what the right thing is," Held said.

As convoluted as that might sound, Okorn found that it worked well. The researchers calculated that scene flow accuracy using a training set of synthetic data was only 25%. When the synthetic data was fine-tuned with a small amount of real-world labeled data, the accuracy increased to 31%. When they added a large amount of unlabeled data to train the system using their approach, scene flow accuracy jumped to 46%.

The research team presented their method at the Computer Vision and Pattern Recognition (CVPR) conference, which was held virtually June 14-19. The CMU Argo AI Center for Autonomous Vehicle Research supported this research, with additional support from a NASA Space Technology Research Fellowship.

Credit: 
Carnegie Mellon University

Poor sleep significantly linked with teenage depression

Teenagers who experience very poor sleep may be more likely to experience poor mental health in later life, according to a new study.

In a paper published in the Journal of Child Psychology and Psychiatry, researchers analysed self-reported sleep quality and quantity from teenagers and found that there was a significant relationship between poor sleep and mental health issues.

The team, based at the University of Reading, Goldsmiths and Flinders Universities, found that among the 4,790 participants, those who experienced depression reported both poor quality and quantity of sleep, while those with anxiety had poor quality of sleep only, compared to participants who didn't report anxiety or depression.

Dr Faith Orchard, a Lecturer in Clinical Psychology at the University of Reading said:

"This latest research is another piece of evidence to show that there is a significant link between sleep and mental health for teenagers. This study highlights that those young people who have experienced depression and anxiety had overwhelmingly experienced poor sleep during their teens.

"What's noticeable is that the difference in average amount of sleep between those who experienced depression, which amounts to going to sleep 30 minutes later each night compared to other participants. Within the data, there were some participants who reported hugely worse quality and quantity of sleep, and the overall picture highlights that we need to take sleep much more into account when considering support for teenager wellbeing."

Teens were asked to self-report on sleep quality and quantity over a series of issues, and the researchers found that the control group of teenagers were on average getting around eight hours of sleep a night on school nights and a little over nine and a half hours of sleep on weekends.

Meanwhile, the group who had a depressive diagnosis were getting less than seven and a half hours sleep on week nights and just over nine hours sleep at weekends.

A co-author, Professor Alice Gregory from Goldsmiths University, said:

"The National Sleep Foundation recommends that adolescents aged between 14-17 years typically need around 8-10 hours of sleep each night. What is notable here is that the group with a diagnosis of depression most clearly fell outside of these recommendations during the week - getting on average 7.25 hours of sleep on each school night"

The depression group were therefore reporting an average total of 3325 minutes of sleep a week, compared to the control group who reported 3597, meaning that the depression group were on average getting 272 minutes, or around four and a half hours, less sleep a week.
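The weekly difference follows directly from the reported totals; a quick check, using only the numbers given above:

```python
# Weekly sleep totals reported in the study (minutes per week).
control_total = 3597
depression_total = 3325

difference = control_total - depression_total
print(difference)                 # 272 minutes per week
print(round(difference / 60, 1))  # 4.5 hours per week
```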

The team noted that although the data were based on self-reporting, and therefore less accurate, the self-reported worse quality and quantity of sleep was still significant.

Dr Orchard said:

"What we are now seeing is that the relationship between sleep and mental health for teenagers is a two way street. While poorer sleep habits are associated with worse mental health, we are also seeing how addressing sleep for young people with depression and anxiety can have a big impact on their wellbeing.

"It's also important to note that the numbers of young people who report anxiety and depression are still low overall. Good sleep hygiene is important, and if you are concerned about yours or your child's wellbeing we strongly encourage you to seek support from your doctor, but any short term negative impact on sleep is not a cause for alarm."

Professor Gregory said:

"The Department for Education is aware of the importance of sleep in children and adolescence - and it is really good news that from September 2020 Statutory Guidance will mean that they will be taught about the value of good quality sleep for many aspects of their lives including their mood."

Co-author Professor Michael Gradisar from Flinders University, Australia said:

"This longitudinal study confirms what we see clinically - that poor sleep during adolescence can be a 'fork in the road', where a teen's mental health can deteriorate if not treated. Fortunately there are sleep interventions available for schools and individual families - and these can place teens back on the road to healthy sleep.".

Credit: 
University of Reading

The Lancet Infectious Diseases: Study from Chinese city of Guangzhou provides key insights on how COVID-19 spreads in households

New modelling research, published in The Lancet Infectious Diseases journal, suggests the coronavirus (SARS-CoV-2) that causes COVID-19 may spread more easily among people living together and family members than severe acute respiratory syndrome (SARS) or Middle East respiratory syndrome (MERS). The estimates are the first of their kind to quantify symptomless transmission.

The analysis, based on contact tracing data from 349 people with COVID-19 and 1,964 of their close contacts in Guangzhou (the most populated city in southern China), found people with COVID-19 were at least as infectious before they developed symptoms as during their actual illness, and that older people (aged 60 years or more) were most susceptible to household infection with SARS-CoV-2.

The study of people living together and family members (not living at the same address), and non-household contacts (eg, friends, co-workers, passengers) suggests that breaking the chain of transmission within households through timely tracing and quarantine of close contacts, in addition to case finding and isolation, could have a huge impact on reducing the number of COVID-19 cases.

While the model has been updated to reflect the current knowledge about the transmission dynamics of COVID-19, the authors caution that it is based on a series of assumptions, for example about the length of incubation and how long symptomatic cases are infectious, that are yet to be confirmed, and might affect the accuracy of the estimates.

"Our analyses suggest that the infectiousness of individuals with COVID-19 before they have symptoms is high and could substantially increase the difficulty of curbing the ongoing pandemic", says Dr Yang Yang from the University of Florida in the USA who co-led the research. "Active case finding and isolation in conjunction with comprehensive contact tracing and quarantine will be key to preventing infected contacts from spreading the virus during their incubation periods, which will be crucial when easing lockdown restrictions on movement and mixing." [1]

Household transmission of COVID-19 is suspected to have contributed substantially to the rise in cases in China following the introduction of lockdown measures. But little research has assessed the spread of disease at the household level. Previous estimates of household infections are specific to the setting where the data were obtained, and represent the proportion of infections among all traced contacts, which does not fully account for the difference in individual exposure history, or the fact that infections may not necessarily be secondary, and could be tertiary--ie, the possibility of transmission among contacts themselves, or infection risks from objects such as clothes, utensils, and furniture.

In the study, researchers developed a transmission model that accounted for individual-level exposure, tertiary transmission, potential exposure to untraced infection sources, and asymptomatic infections. Using data gathered by the Guangzhou Center for Disease Control and Prevention (CDC) on 215 primary COVID-19 cases (ie, with no known source of exposure, or assumed to have been infected outside Guangzhou), 134 secondary/tertiary cases, and 1,964 of their close contacts between January 7 and February 18, 2020, the study estimated the secondary attack rate (the probability that an infected person transmits the disease to a susceptible individual) among people living together and family members, and non-household contacts. Close contacts--unprotected individuals who had been within a metre of a person with COVID-19 less than 2 days before their symptoms developed--were traced, quarantined, and tested for SARS-CoV-2 on days 1 and 14.

The study also modelled the effects of age and sex on the infectivity of COVID-19 cases and susceptibility of their close contacts. For the primary results, researchers assumed an average incubation period of 5 days and a maximum infectious period of 13 days (including up to 5 days before illness onset). Among the 349 laboratory-confirmed primary and secondary COVID-19 cases, 19 (5%) reported no symptoms during the follow-up period.

The analyses estimated that the likelihood of secondary transmission--spread from an infected person to non-household contacts--was 2.4%. The likelihood of passing on the virus was higher among people living together and family members, with an attack rate of 17.1% (or around 1 in 6) among people living at the same address, and 12.4% (about 1 in 8) among family members.
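The "1 in N" figures quoted above are simply the reciprocals of the attack rates, rounded to the nearest whole number; a quick sketch (the helper name is illustrative):

```python
def one_in_n(rate):
    """Express an attack rate (e.g. 0.171) as the nearest '1 in N'."""
    return round(1 / rate)

# 17.1% among people living at the same address, 12.4% among family members.
print(one_in_n(0.171))  # 6  -> "around 1 in 6"
print(one_in_n(0.124))  # 8  -> "about 1 in 8"
```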

"Family members such as parents and older children may not be living at the same address, which might explain why they appear at less risk of secondary infections than those living in the same household as the COVID-19 case", says co-author Dr Natalie Dean from the University of Florida, USA. "While the likelihood of transmitting COVID-19 in households may seem quite low, it is around twice what has been estimated for SARS (4.6-8%) and three times higher than for MERS (4-5%), although these data are only based on a small number of studies." [1]

The model also suggests that the likelihood of household infection is highest among older adults aged 60 or more (attack rate of 28% or around 1 in 4 of those living together, 18.4% or about 1 in 5 family members), and lowest in those aged 20 years or younger (attack rate 6.4% or around 1 in 15 of those living together, 5.2% or about 1 in 20 family members; tables 1 and 4).

The estimates also suggest substantial infectivity during the incubation period, comparable to, and potentially higher than, during the illness period. After one day of exposure (daily infection probability), family members were 39% less likely (OR 0.61) to become infected after symptoms emerged than during the incubation period, while those living together had 41% lower odds (OR 0.59)--although the difference was not statistically significant (table 3). There was no difference in the risk of infection between the sexes.
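The percentage figures above are the standard conversion of an odds ratio below 1 into "X% lower odds"; a minimal sketch (the function name is illustrative):

```python
def odds_reduction_percent(odds_ratio):
    """Convert an odds ratio below 1 into 'X% lower odds'."""
    return round((1 - odds_ratio) * 100)

print(odds_reduction_percent(0.61))  # 39 -> "39% less likely"
print(odds_reduction_percent(0.59))  # 41 -> "41% lower odds"
```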

The researchers also calculated that the local reproductive number (the average number of infections a COVID-19 case can generate during the entire infectious period via both people living together and family members, and non-household contacts) was 0.5. However, had isolation of cases or quarantine of their contacts not been implemented the estimated local reproductive number would have been 20-50% higher, increasing to 0.6-0.76. If the reproductive number remains less than one, infection is not able to spread effectively.
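The counterfactual described above amounts to scaling the observed reproductive number by the estimated relative increase. A minimal sketch of that arithmetic (the helper function is illustrative, not the study's transmission model):

```python
# Sketch of the isolation counterfactual: scale the observed local
# reproductive number by the estimated relative increase had case
# isolation and contact quarantine not been implemented.

def counterfactual_r(r_observed: float, relative_increase: float) -> float:
    """Reproductive number without isolation/quarantine (illustrative)."""
    return r_observed * (1.0 + relative_increase)

r = 0.5  # observed local reproductive number
low, high = counterfactual_r(r, 0.20), counterfactual_r(r, 0.50)
print(low, high)  # 0.6 0.75, close to the reported 0.6-0.76 range
```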

"The relatively low reproductive number in the absence of case isolation reflects the small average number of contacts per person per day, which is probably partly due to the stringent control measures in Guangzhou during the study", explains co-author Dr Qin-Long Jing from Guangzhou CDC, China. "Although the effect of case isolation seems moderate, the high infectivity of the virus during the incubation period suggests quarantine of asymptomatic contacts could have prevented more onward transmissions." [1]

The authors note some important limitations. They were unable to reliably quantify the infectivity of asymptomatic infections, since only two of the 15 asymptomatic cases included in the analyses were considered primary cases, and some asymptomatic infections may have been missed because close contacts were only tested twice, 14 days apart. Furthermore, the model assumed that asymptomatic infections have the same infectivity as symptomatic cases during their incubation period, which might not be accurate. The authors also note that some imported primary cases might have been infected locally, and that some asymptomatic infections or cases might have been missed by contact tracing or by false-negative tests, which could lead to underestimation of the secondary attack rate. Finally, the rapid isolation of cases and quarantine of their close contacts might have limited the number of transmissions while cases were ill and affected the accuracy of the estimates.

Writing in a linked Comment, Dr Virginia Pitzer (who was not involved in the study) from Yale School of Public Health in the USA, says, "The key difference between SARS-CoV-2 and SARS-CoV is that the probability of transmission is substantially higher during the presymptomatic incubation period for SARS-CoV-2, whereas little to no transmission occurred prior to the onset of symptoms for SARS-CoV. This made SARS-CoV much easier to control through case isolation and quarantine of contacts. Notably, the authors estimate that prompt case isolation was only able to prevent 20-50% of secondary cases of COVID-19 in Guangzhou."

She concludes, "This study demonstrates the value of carefully collected contact tracing data to understand risk factors for transmission and susceptibility. The findings confirm the relative importance of pre-symptomatic transmission and the relationship between older age and susceptibility, key insights which should inform design of intervention strategies."

Credit: 
The Lancet

Combination biomarker predicts response to immune checkpoint therapy in patients with advanced bladder cancer

image: Sangeeta Goswami, MD, Ph.D., assistant professor of Genitourinary Medical Oncology

Image: 
The University of Texas MD Anderson Cancer Center

HOUSTON -- In patients with metastatic bladder cancer, a novel combination of biomarkers from baseline tumor tissues was predictive of improved clinical responses and prolonged survival following treatment with immune checkpoint inhibitors, according to researchers from The University of Texas MD Anderson Cancer Center.

The study, published today in Science Translational Medicine, used multi-platform analyses of tumor samples to discover that ARID1A mutations in tumor cells and expression of the immune signaling protein CXCL13 in surrounding immune cells were enriched in patients who responded well to checkpoint inhibitors.

Retrospective analyses of two Phase II clinical trials confirmed that each of these biomarkers was associated with improved overall survival (OS), but the combination of both biomarkers predicted better OS than either biomarker alone. Patients with ARID1A mutations and high CXCL13 expression had a median OS of more than 17 months in both trials, compared to less than eight months in patients with no mutations and low CXCL13 expression.

"Most biomarker studies have been limited to a single biomarker, such as tumor mutational burden or PD-L1 expression," said lead Sangeeta Goswami, M.D., Ph.D., assistant professor of Genitourinary Medical Oncology. "Our study indicates that combinatorial biomarkers that reflect both the tumor mutational status and immune response will improve predictive capability of the biomarker and may enable better patient selection for treatment with immune checkpoint therapy."

Urothelial cancers, which include bladder cancers as well as those of the renal pelvis and ureter, are the sixth most common cancer type in the U.S., and five-year OS rates for patients with metastatic cancers are roughly 5%. The approval of immune checkpoint therapy as an option for these patients has improved outcomes, explained Goswami, but only 15-20% of patients will see a benefit.

Currently, there are no clinically useful biomarkers to predict responses. Therefore, the research team performed immune and genomic profiling of baseline tumor tissues from MD Anderson patients participating in ongoing clinical trials to identify novel markers associated with responses to checkpoint inhibitors.

The work was done in collaboration with MD Anderson's immunotherapy platform, which is co-led by corresponding author Padmanee Sharma, M.D., Ph.D., professor of Genitourinary Medical Oncology and Immunology. The platform is part of MD Anderson's Moon Shots Program®, a collaborative effort to accelerate the development of scientific discoveries into clinical advances that save patients' lives.

Following discovery of the biomarkers, reverse translational studies in mouse models confirmed that ARID1A knockdown increased sensitivity to checkpoint blockade, whereas loss of CXCL13 expression rendered mice resistant to checkpoint inhibitors.

The researchers next sought to confirm the predictive capability of these biomarkers in additional cohorts from the Phase II CheckMate275 and IMvigor210 trials, which evaluated nivolumab or atezolizumab in patients with advanced urothelial cancers.

In the CheckMate275 trial, ARID1A mutations were associated with a median OS of 11.4 months compared to 6.0 months in those without mutations. High CXCL13 expression was associated with median OS of 13.5 months compared to just 5.7 months in those with the lowest CXCL13 expression. Patients with both markers had a median OS of 19.1 months compared to 5.3 months in patients with neither marker.

The IMvigor210 trial showed similar results. Patients with ARID1A mutations had a median OS of 15.4 months compared to 8.2 months in those without. Median OS was 17.1 months and 8.0 months in patients with high and low CXCL13 expression, respectively. Finally, patients with both biomarkers had a median OS of 17.8 months, while those with neither biomarker had a median OS of just 7.1 months.

"We hope that our study will highlight the importance of developing combinatorial biomarkers that consider both tumor cells and immune cells," said Sharma. "This approach may identify better biomarkers that can reliably predict response to immune checkpoint therapy across various tumor types."

As this was a retrospective study, the researchers currently are planning a clinical trial to prospectively evaluate outcomes for patients who are positive for the combination biomarker following treatment with anti-PD-1 therapy.

Credit: 
University of Texas M. D. Anderson Cancer Center

RNA structures by the thousands

image: Franz Narberhaus and Vivian Brandenburg are discussing one of the deciphered RNA structures.

Image: 
RUB, Marquard

Researchers from Bochum and Münster have developed a new method to determine the structures of all RNA molecules in a bacterial cell at once. In the past, this had to be done individually for each molecule. Besides their exact composition, their structure is crucial for the function of the RNAs. The team describes the new high-throughput structure mapping method, termed Lead-Seq for lead sequencing, in the journal Nucleic Acids Research, published online on 28 May 2020.

Christian Twittenhoff, Vivian Brandenburg, Francesco Righetti and Professor Franz Narberhaus from the Chair of Microbial Biology at Ruhr-Universität Bochum (RUB) collaborated with the bioinformatics group headed by Professor Axel Mosig at RUB and the team led by Professor Petra Dersch at the University of Münster, previously from the Helmholtz Centre for Infection Research in Braunschweig.

No structure - no function

In all living cells, genetic information is stored in double-stranded DNA and transcribed into single-stranded RNA, which then serves as a blueprint for proteins. However, RNA is not only a linear copy of the genetic information, but often folds into complex structures. The combination of single-stranded and partially folded double-stranded regions is of central importance for the function and stability of RNAs. "If we want to learn something about RNAs, we must also understand their structure," says Franz Narberhaus.

Lead ions reveal single-stranded RNA positions

With lead sequencing, the authors present a method that facilitates the simultaneous analysis of all RNA structures in a bacterial cell. In the process, the researchers take advantage of the fact that lead ions cause strand breaks in single-stranded RNA segments; folded RNA structures, i.e. double strands, remain untouched by lead ions.

By applying lead, the researchers split the single-stranded RNA regions at random locations into smaller fragments, then transcribed them into DNA and sequenced them. The beginning of each DNA sequence thus corresponded to a former strand break in the RNA. "This tells us that the corresponding RNA regions were present as a single strand," explains Narberhaus.

Predicting the structure using bioinformatics

Vivian Brandenburg and Axel Mosig then used bioinformatics to evaluate the information on the single-stranded RNA sections obtained in the experiments. "We assumed that non-cut RNA regions were present as double strands and used prediction programs to calculate how the RNA molecules must be folded," elaborates Vivian Brandenburg. "This resulted in more reliable structures with the information from lead sequencing than without this information."
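The core Lead-Seq logic, in which the 5' end of each sequenced fragment marks a lead-induced cleavage and therefore a single-stranded position, can be sketched in a few lines. This is a toy illustration with invented data and an arbitrary threshold, not the authors' pipeline:

```python
# Toy sketch of the Lead-Seq logic: the mapped 5' end of each sequenced
# fragment marks a lead-induced strand break, i.e. a position that was
# single-stranded in vivo. Positions cleaved often enough are flagged
# and could then be passed as "unpaired" constraints to a structure
# prediction program. Data and threshold here are hypothetical.
from collections import Counter

def single_stranded_positions(read_starts, min_count=5):
    """Return positions cleaved at least `min_count` times."""
    counts = Counter(read_starts)
    return sorted(pos for pos, n in counts.items() if n >= min_count)

# e.g. mapped 5' ends of fragments along one transcript
starts = [12] * 8 + [13] * 6 + [40] * 2 + [41] * 9
print(single_stranded_positions(starts))  # [12, 13, 41]
```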

This approach enabled the researchers to determine the structures of thousands of RNAs of the bacterium Yersinia pseudotuberculosis at once. The team compared the lead-sequencing results for several RNA structures with those obtained using traditional methods: the two agreed.

New RNA thermometers discovered

The group carried out their experiments at 25 and 37 degrees Celsius, since some RNA structures change depending on the temperature. Using what is known as RNA thermometers, bacteria such as the diarrhoea pathogen Yersinia pseudotuberculosis can detect whether they are inside the host. Using lead sequencing, the team not only identified already known RNA thermometers, but also discovered several new ones.

Establishing lead sequencing took about five years. "I'm happy to say that we are now able to map numerous RNA molecules in a bacterium simultaneously," concludes Franz Narberhaus. "One advantage of the method is that the small lead ions can easily enter living bacterial cells. We therefore assume that this method can be used universally and will in future facilitate the detailed structure-function analysis of bacterial RNAs."

Credit: 
Ruhr-University Bochum

What it means when animals have beliefs

Humans are not the only ones who have beliefs; animals do too, although their beliefs are more difficult to demonstrate than those of humans. Dr. Tobias Starzak and Professor Albert Newen from the Institute of Philosophy II at Ruhr-Universität Bochum have proposed four criteria for understanding and empirically investigating animal beliefs in the journal "Mind and Language". The article was published online on 16 June 2020.

Flexible use of information about the world

The first criterion for the existence of beliefs worked out by the philosophers is that an animal must have information about the world. However, this must not simply lead to an automatic reaction, like a frog instinctively snapping at a passing insect.

Instead, the animal must be able to use the information to behave in a flexible manner. "This is the case when one and the same piece of information can be combined with different motivations to produce different behaviours," explains Albert Newen. "For example, if the animal can use the information that there is food available at that moment for the purpose of eating or hiding the food."

Information can be relinked

The third criterion says that the information is internally structured in a belief; accordingly, individual aspects of that information can be processed separately. This has emerged, for example, in experiments with rats that can learn that a certain kind of food can be found at a certain time in a certain place. Their knowledge has a what-when-where structure.

Fourthly, animals with beliefs must be able to recombine the information components in novel ways. This reassembled belief should then lead to flexible behaviour. Rats can do this too, as the US researcher Jonathan Crystal demonstrated in experiments in an eight-armed labyrinth. The animals learned that if they received normal food in arm three of the maze in the morning, chocolate could be found in arm seven at noon.

Crows and scrub jays meet all criteria

The authors from Bochum also cite crows and scrub jays as examples of animals with beliefs. British researcher Nicola Clayton carried out conclusive experiments with scrub jays. When the birds are hungry, they initially tend to eat the food. When they are not hungry, they systematically hide the leftovers. In the process, they encode which food - worm or peanut - they have hidden where and when. If they are hungry in the following hours, they first look for the worms they prefer. Once enough time has elapsed for the worms to become inedible, they head for the peanut hiding places instead.

"What best explains this change in behaviour is the birds' belief about the worms being spoiled and their beliefs about the location of other food items," says Tobias Starzak. The animals also react flexibly in other situations, for example if they notice that they are being watched by rivals while hiding; if this is the case, they hide the food again later.

Flexible behaviour, which can be interpreted as caused by beliefs, has also been shown in rats, chimpanzees and border collies. "But probably many more species have beliefs," supposes Albert Newen.

Credit: 
Ruhr-University Bochum

To make a good impression, leave cell phone alone during work meetings

LAWRENCE - To get on the good side of a new boss, colleague or acquaintance in a business meeting, leave your cell phone stashed in your pocket or purse.

That is the implication of a new study conducted by University of Kansas Assistant Professor of Communication Studies Cameron W. Piercy and doctoral candidate Greta R. Underhill. It is titled "Expectations of technology use during meetings: An experimental test of manager policy, device use, and task acknowledgment" and was published in the journal Mobile Media & Communication.

Looking at your phone during a meeting is akin to "phubbing," or snubbing your interlocutor, in a strictly social setting, the study found.

The authors prepared video vignettes of people using either a paper notebook, a cell phone or a laptop computer while participating in a business meeting. They refer to this scenario as "multicommunication." Then they asked 243 viewers to rate the distracted meeting member's competence and the effectiveness of the meeting.

Other variables studied included the meeting manager's expectations for technology use in the workplace and whether the user later acknowledged that their device use was work-related.

It did not matter whether the cell phone user stipulated afterward that their usage was strictly business-related. Viewers still rated them lower, and to a significantly greater degree than those who used a computer or notepad.

In a recent interview, Piercy said the results can largely be attributed to a phenomenon known to social scientists as "introspective illusion."

"We know you can do work on your phone," Piercy said. But he added that because we also know phones can be used to scroll idly through social-media feeds, "we assume that you're not working when we see you're using it."

This is true even of people who themselves use a mobile device during a business meeting.

"We can always infer our own thoughts and motives, but we can't ever know a partner's thoughts and motives, so we make negative assumptions about others, and we make excuses for ourselves," Piercy said.

In line with this new concept, the mobile introspective illusion, people did not rate the technology user any differently if they apologized for using their device. Piercy said, "People expect that technology is used for ill, even when the person using the technology says their use is related to the topic of conversation."

A manager's attitude toward technology in the workplace does seem to matter somewhat, in terms of viewers' evaluations.

"When the manager articulated a policy, those who acknowledged their multicommunication were evaluated higher and seen as more competent," the authors write in their paper. "In the absence of a policy, the pattern is reversed. Finally, the means for communicator evaluation and competence were highest in the pro-technology policy condition. In all, when the manager's policy is matched by employee's behavior, outcome means tend to be higher."

"The manager articulating a clear policy about expectations of technology use ought to affect the way that people engage with technology in the workplace," Piercy said. "But so is the idea that people would be excused if they apologize for using technology. And in that case, we didn't find a significant effect."

However, the effect of cell phone use on viewers' perceptions was dramatic.

"The effect for the phone is ginormous," Piercy quipped. "It's as big an effect as you'll ever see in a social-science study -- 30% of the variance. You can just look at the numbers and see it. But the notebook was less of a problem than the computer, which was less of a problem than the phone. So even if you were to use a laptop in the meeting, you'd be better off than using your phone because there was this big spike in all the numbers that are associated with using the phone, relative to the other two."

Piercy noted that the study asked viewers to judge the interactions they saw on screen, simulating a meeting with a new person or boss. Attitudes might change, he said, in a situation where all the participants know each other -- and the boss's expectations -- well.

Credit: 
University of Kansas

Air quality impacts early brain development

Researchers at the University of California, Davis, have found a link between traffic-related air pollution and an increased risk for changes in brain development relevant to neurodevelopmental disorders. Their study, based on rodent models, corroborates previous epidemiological evidence showing this association.

While air pollution has long been a concern for pulmonary and cardiovascular health, it has only been within the past decade that scientists have turned their attention to its effects on the brain, said UC Davis toxicologist Pamela Lein, senior author of the study, recently published in Translational Psychiatry.

Researchers had previously documented links between proximity to busy roadways and neurodevelopmental disorders such as autism, but preclinical data based on real-time exposures to traffic-related air pollution was scarce to nonexistent.

Lein worked with UC Davis atmospheric scientist Anthony Wexler and first author Kelley Patten, a doctoral student in the UC Davis graduate group for pharmacology and toxicology, to develop a novel approach to study the impacts of traffic-related air pollution in real time. They set up a vivarium near a traffic tunnel in Northern California so they could mimic, as closely as possible, the experience of humans in a rodent model.

"This approach was a creative way to get at the question of what impacts air pollution has on the brain in the absence of confounding factors such as socioeconomic influences, diet, etc.," Lein said. "It's important to know if living close to these roadways poses a significant risk to the developing human brain.

"If it does," Lein continues, "scientists can warn susceptible individuals, such as pregnant women -- particularly those who have already had a child diagnosed with a neurodevelopmental disorder -- to take appropriate precautions to minimize risks to the health of their child's brain."

EARLY EXPOSURE OUTCOMES

The researchers compared the brains of rat pups exposed to traffic-related air pollution with those exposed to filtered air. Both air sources were drawn from the tunnel in real time.

They found abnormal growth and increased neuroinflammation in the brains of animals exposed to air pollution. This suggests that air pollution exposure during critical developmental periods may increase the risk for changes in the developing brain that are associated with neurodevelopmental disorders.

"What we witnessed are subtle changes," Patten said. "But we are seeing these effects using air pollution exposures that fall within regulatory limits. With the backdrop of other environmental and genetic risk factors in humans, this may have a more pronounced effect. This exposure also contains very fine particulate matter that isn't currently regulated."

In a separate study, Patten extended this exposure for 14 months to look at longer-term impacts of traffic-related air pollution and is in the process of writing up those results.

The team is also interested in what component of traffic-related air pollution is driving the neurodevelopmental outcomes.

If they can identify the culprits, Lein said, then scientists can approach legislators to develop scientifically based regulations to protect the developing human brain.

TEAM EFFORTS

UC Davis atmospheric scientist and co-author Keith Bein said that the single most challenging aspect of studying the health effects of air pollution may be replicating how, when and what people are exposed to throughout their lifetimes.

Tackling this requires creative thinking and a multidisciplinary team of researchers, including exposure engineers, atmospheric scientists, toxicologists, biologists, behaviorists and animal care specialists.

"We have managed to build a unique and talented team and taken advantage of our built environment to bring us closer than we've been before to achieving these objectives," Bein said. "Increasingly, these types of efforts are required to continue advancing the field, thereby informing policymakers and stakeholders about how best to protect human health."

Credit: 
University of California - Davis

Research brief: New discovery allows 3D printing of sensors directly on expanding organs

video: Researchers at the University of Minnesota have developed a 3D printing technique that uses sophisticated motion capture technology to print electronic sensors directly on surfaces that expand and contract.

Image: 
McAlpine Research Group, University of Minnesota Research study

In groundbreaking new research, mechanical engineers and computer scientists at the University of Minnesota have developed a 3D printing technique that uses motion capture technology, similar to that used in Hollywood movies, to print electronic sensors directly on organs that are expanding and contracting. The new 3D printing technique could have future applications in diagnosing and monitoring the lungs of patients with COVID-19.

The research is published in Science Advances, a peer-reviewed scientific journal published by the American Association for the Advancement of Science (AAAS).

The new research is the next generation of a 3D printing technique discovered two years ago by members of the team that allowed for printing of electronics directly on the skin of a hand that moved left to right or rotated. The new technique allows for even more sophisticated tracking to 3D print sensors on organs like the lungs or heart that change shape or distort due to expanding and contracting.

"We are pushing the boundaries of 3D printing in new ways we never even imagined years ago," said Michael McAlpine, a University of Minnesota mechanical engineering professor and senior researcher on the study. "3D printing on a moving object is difficult enough, but it was quite a challenge to find a way to print on a surface that was deforming as it expanded and contracted."

The researchers started in the lab with a balloon-like surface and a specialized 3D printer. They used motion capture tracking markers, much like those used in movies to create special effects, to help the 3D printer adapt its printing path to the expansion and contraction movements on the surface. The researchers then moved on to an animal lung in the lab that was artificially inflated. They were able to successfully print a soft hydrogel-based sensor directly on the surface. McAlpine said the technique could also possibly be used in the future to 3D print sensors on a pumping heart.

"The broader idea behind this research, is that this is a big step forward to the goal of combining 3D printing technology with surgical robots," said McAlpine, who holds the Kuhrmeyer Family Chair Professorship in the University of Minnesota Department of Mechanical Engineering. "In the future, 3D printing will not be just about printing but instead be part of a larger autonomous robotic system. This could be important for diseases like COVID-19 where health care providers are at risk when treating patients."

Credit: 
University of Minnesota

Quantum diamond sensing

Nuclear magnetic resonance (NMR) spectroscopy is a widely used tool for chemical analysis and molecular structure recognition. Because it typically relies on the weak magnetic fields produced by a small thermal nuclear spin polarization, NMR suffers from poor sensitivity compared to other analytical techniques. A conventional NMR apparatus typically uses large sample volumes of about a milliliter -- large enough to contain around a million biological cells.

In a study published in Physical Review X (PRX), researchers from the University of Maryland's Quantum Technology Center (QTC) and colleagues report a new quantum sensing technique that allows high-resolution NMR spectroscopy on small molecules in dilute solution in a 10 picoliter sample volume -- roughly equivalent to a single cell.

The experiments reported in the paper, entitled "Hyperpolarization-Enhanced NMR Spectroscopy with Femtomole Sensitivity Using Quantum Defects in Diamond," were performed by the research group of Prof. Ronald Walsworth, QTC Founding Director. The finding builds on previous results, in which Walsworth and collaborators developed a system that utilizes nitrogen-vacancy quantum defects in diamonds to detect the NMR signals produced by picoliter-scale samples. In that past work, the researchers could only observe signals from pure, highly concentrated samples. To overcome this limitation, Walsworth and colleagues combined quantum diamond NMR with a "hyperpolarization" method that boosts the sample's nuclear spin polarization -- and hence the NMR signal strength -- by more than a hundred-fold. The results reported in PRX realize, for the first time, NMR with femtomole molecular sensitivity.
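To see why hyperpolarization matters, consider the standard back-of-the-envelope estimate of thermal spin polarization for spin-1/2 nuclei (textbook physics, not taken from the paper; the proton nucleus, field strength, and temperature below are assumed values):

```python
# Back-of-the-envelope estimate (standard physics, not from the paper):
# for spin-1/2 nuclei, thermal polarization P ~ gamma * hbar * B / (2 * kB * T).
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
KB = 1.380649e-23         # Boltzmann constant, J/K
GAMMA_H = 2.6752218744e8  # proton gyromagnetic ratio, rad/(s*T)

def thermal_polarization(b_field_tesla: float, temp_kelvin: float = 298.0) -> float:
    """High-temperature approximation of proton spin polarization."""
    return GAMMA_H * HBAR * b_field_tesla / (2 * KB * temp_kelvin)

p = thermal_polarization(3.0)  # at an assumed 3 T, room temperature
print(f"{p:.1e}")              # about 1e-5, i.e. roughly 10 ppm
# A hundred-fold hyperpolarization boost would raise this toward ~1e-3,
# which is why the NMR signal grows so dramatically.
```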

On the impact of the research, Walsworth says, "The real-world goal is to enable chemical analysis and magnetic resonance imaging (MRI) at the level of individual biological cells." MRI is a type of scan that can process detailed pictures of parts of the body, including the brain. "Right now, MRI is limited in its resolution, and it can only image volumes containing about a million cells. Seeing individual cells noninvasively with MRI (to help diagnose illness and answer basic questions in biology) is one of the long-term goals of quantum sensing research," says Walsworth.

Credit: 
University of Maryland

COVID-19 news from Annals of Internal Medicine

Below please find a summary and link(s) of new coronavirus-related content published today in Annals of Internal Medicine. The summary below is not intended to substitute for the full article as a source of information. A collection of coronavirus-related content is free to the public at http://go.annals.org/coronavirus.

Update Alert: Should Clinicians Use Chloroquine or Hydroxychloroquine Alone or in Combination With Azithromycin for the Prophylaxis or Treatment of COVID-19? Living Practice Points From the American College of Physicians

Still no evidence to support the use of chloroquine or hydroxychloroquine to treat or prevent COVID-19

Researchers for the Scientific Medical Policy Committee of the American College of Physicians (ACP) conducted an updated evidence review on May 8, 2020, to determine whether changes needed to be made to their Practice Points on the use of chloroquine or hydroxychloroquine alone or in combination with azithromycin for prophylaxis or treatment of coronavirus disease. The evidence update included one observational study focused on hydroxychloroquine alone and in combination with azithromycin, and one observational study that assessed the use of chloroquine alone (previously, no studies were available on the use of chloroquine alone). The new evidence added support to previous conclusions but resulted in no conceptual changes to the practice points. Read the full text: https://www.acpjournals.org/doi/10.7326/M20-3862.

Media contacts: A PDF for this article is not yet available. Please click the link to read full text. The lead author, Amir Qaseem, MD, PhD, MHA, can be reached through Andy Hachadorian at Ahachadorian@acponline.org.

Credit: 
American College of Physicians