Culture

UVA engineering professor Jack W. Davidson named an IEEE fellow

CHARLOTTESVILLE, Va. - UVA Engineering computer science professor Jack W. Davidson has been named an Institute of Electrical and Electronics Engineers Fellow in recognition of his contributions to compilers, computer security and computer science education.

The institute’s board of directors annually awards the designation to those who have contributed to the advancement or application of engineering, bringing significant value to society. Fellow is the highest level of membership and is recognized by the technical community as an important career achievement. The award is received by less than 1% of the total voting membership.

Davidson received his Ph.D. in computer science from the University of Arizona in 1981. The same year, he joined UVA Engineering as a professor in the Department of Computer Science. During his 38-year career at UVA, he has been the principal investigator on numerous high-profile grants to develop comprehensive methods for protecting software from malicious attacks.

His research accomplishments have made him internationally recognized in cybersecurity. Davidson leads the University’s Cyber Innovation and Society Institute, launched in 2018.

The Cyber Innovation and Society Institute brings together faculty from technical and humanities fields across the University to understand the impact of cyber systems on society, especially how they affect human values such as privacy, freedom, democracy and individual autonomy; to understand the risks and consequences of attacks on cyber systems and strategies for responding to them; to ensure that these systems operate securely and dependably as intended; and to ensure that the data they collect and process are secure from improper use.

This year, Davidson and the Cyber Innovation and Society Institute were awarded a national grant from The Public Interest Technology University Network to establish a course aimed at teaching graduate students to deeply examine the complex ethical, legal and policy implications of new technologies. The graduate course, Innovation in the Public Interest, will be offered for the first time in the spring of 2020.

Davidson is a former recipient of the Institute of Electrical and Electronics Engineers Taylor L. Booth Education Award for outstanding achievement in computer science and engineering education. He has also been active in the Association for Computing Machinery, making many contributions leading to his current tenure on the association’s executive council. Davidson was associate editor on two separate association publications: ACM Transactions on Programming Languages and Systems and ACM Transactions on Architecture and Code Optimization. He also served as chair of the Special Interest Group on Programming Languages. In his role as part of the executive council, he serves as co-chair of the association’s Publications Board.

“It is a great honor to be selected for recognition as IEEE Fellow. It is also a reflection of the supportive environment provided by UVA Engineering that enables faculty to achieve their scientific and professional goals,” Davidson said.

Credit: 
University of Virginia School of Engineering and Applied Science

When David poses as Goliath

Stellar black holes form when massive stars end their life in a dramatic collapse. Observations have shown that stellar black holes typically have masses of about ten times that of the Sun, in accordance with the theory of stellar evolution. Recently, a Chinese team of astronomers claimed to have discovered a black hole as massive as 70 solar masses, which, if confirmed, would severely challenge the current view of stellar evolution.

The publication immediately triggered theoretical investigations as well as additional observations by other astrophysicists. Among those to take a closer look at the object was a team of astronomers from the Universities of Erlangen-Nürnberg and Potsdam. They discovered that it may not necessarily be a black hole at all, but possibly a massive neutron star or even an 'ordinary' star. Their results have now been published as a highlight-paper in the renowned journal Astronomy & Astrophysics.

The putative black hole was detected indirectly from the motion of a bright companion star, orbiting an invisible compact object over a period of about 80 days. From new observations, a Belgian team showed that the original measurements were misinterpreted and that the mass of the black hole is, in fact, very uncertain. The most important question, namely how the observed binary system was created, remains unanswered. A crucial aspect is the mass of the visible companion, the hot star LS V+22 25. The more massive this star is, the more massive the black hole has to be to induce the observed motion of the bright star. The latter was considered to be a normal star, eight times more massive than the Sun.

A team of astronomers from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and the University of Potsdam had a closer look at the archival spectrum of LS V+22 25, taken by the Keck telescope at Mauna Kea, Hawaii. In particular, they were interested in studying the abundances of the chemical elements on the stellar surface. Interestingly, they detected deviations in the abundances of helium, carbon, nitrogen, and oxygen compared to the standard composition of a young massive star. The observed surface pattern showed ashes from the nuclear fusion of hydrogen, a process that only happens deep in the core of a young star and would not be expected to be detectable at its surface.

'At first glance, the spectrum did indeed look like one from a young massive star. However, several properties appeared rather suspicious. This motivated us to have a fresh look at the archival data,' said Andreas Irrgang, the leading scientist of this study and a member of the Dr. Karl Remeis-Observatory in Bamberg, the Astronomical Institute of FAU.

The authors concluded that LS V+22 25 must have interacted with its compact companion in the past. During this episode of mass-transfer, the outer layers of the star were removed and now the stripped helium core is visible, enriched with the ashes from the burning of hydrogen.

However, stripped helium stars are much lighter than their normal counterparts. Combining their results with recent distance measurements from the Gaia space telescope, the authors determined a most likely stellar mass of only 1.1 (with an uncertainty of +/-0.5) times that of the Sun. This yields a minimum mass of only 2-3 solar masses for the compact companion, suggesting that it may not necessarily be a black hole at all, but possibly a massive neutron star or even an 'ordinary' star.
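
To see how the companion's minimum mass follows from the visible star's mass, the sketch below solves the standard binary mass function for the unseen object, assuming an edge-on orbit (sin i = 1). The mass-function value used is purely illustrative; the article does not quote one, and the calculation is only meant to show why a lighter visible star implies a lighter companion.

```python
# Minimal sketch: minimum mass of an unseen companion from the binary mass function
# f(M) = (M2 * sin i)^3 / (M1 + M2)^2, solved for M2 with sin i = 1 (minimum mass).
# The mass-function value below is hypothetical; the article does not quote one.

def min_companion_mass(m1, mass_function, lo=0.01, hi=1000.0, tol=1e-9):
    """Solve f = M2**3 / (M1 + M2)**2 for M2 (solar masses) by bisection."""
    def g(m2):
        return m2**3 / (m1 + m2)**2 - mass_function
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

f_illustrative = 1.4  # hypothetical mass function in solar masses
for m1 in (8.0, 1.1):  # originally assumed vs. revised mass of the visible star
    m2 = min_companion_mass(m1, f_illustrative)
    print(f"Visible star {m1:4.1f} Msun -> companion of at least {m2:4.1f} Msun")
```

With the revised stellar mass of roughly one solar mass, the same (illustrative) mass function yields a far lighter companion, which is the heart of the authors' argument.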

The star LS V+22 25 has become famous for possibly having a massive black hole companion. However, a closer look at the star itself reveals that it is a very intriguing object in its own right: whilst stripped helium stars of intermediate mass have been predicted in theory, only very few have been discovered so far. They are key objects for understanding binary star interactions.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

Prosecutors' race, class bias may not drive criminal justice disparities

America's prison populations are disproportionately filled with people of color, but prosecutors' biases toward defendants' race and class may not be the primary cause for those disparities, new research from the University of Arizona suggests.

The finding, which comes from a unique study involving hundreds of prosecutors across the U.S., counters decades' worth of previous research. Those studies relied on pre-existing data, such as charges and punishments that played out in courtrooms. In a 1993 study, for example, researchers found that prosecutors in Los Angeles were 1.59 times more likely to fully prosecute an African American defendant for crack-related charges than a white defendant. That likelihood was 2.54 times greater for Hispanic defendants compared to white defendants.

The new study, led by Christopher Robertson, a professor of law and associate dean for research and innovation at the James E. Rogers College of Law, involved a controlled experiment with prosecutors, asking them to examine the same hypothetical case but changing the race and class of the defendant.

The study, administered online, provided prosecutors with police reports describing a hypothetical crime, which the researchers designed with assistance from experienced prosecutors. All details of the case were the same except for the suspect's race - either black or white - and occupation - fast-food worker or accountant - to indicate the suspect's socioeconomic status. Roughly half of the prosecutors received one version of the case; the other half received the other.

The study allowed researchers to "really isolate the prosecutor's decision-making in a way that mere observational research wouldn't allow," said Robertson, whose co-authors are Shima Baradaran Baughman of the University of Utah and Megan Wright of Penn State. The paper was published in the Journal of Empirical Legal Studies.

The outcomes the study looked for included whether prosecutors charged a felony, whether they chose to fine the defendant or seek a prison sentence, and the proposed cost of the fine or length of the sentence.

"When we put all those together, we see the same severity of charges, fines and sentences across all the conditions, whether the defendant was black, whether the defendant was white, whether the defendant had a high-class career or a low-class career," Robertson said. "Differences in the actual outcomes - in the actual behavior of the prosecutors - is what we would have expected if they were biased. But since we see no difference in the outcomes, we concluded that they were not substantially biased."

Given previous research indicating that rampant bias drives criminal justice disparities, Robertson's results may surprise many - just as they surprised the researchers.

"We were surprised at the bottom line," he said.

Robertson offered one possible explanation for the unexpected result.

"We conducted this study in 2017 and 2018 and prosecutors have been under a spotlight for some time," he said. "They've been training and are aware of and are working hard to not be biased in their own decision-making."

The results do not rule out race and class bias as factors in prosecutorial decision-making but suggest that policymakers committed to addressing systemic racism and classism in the legal system may be more successful seeking reforms in other areas.

"The disparities in outcomes are indisputable," Robertson said. "As we go through the criminal justice system and think about what the right reforms are, the sheer bias of the prosecutor doesn't seem to be the biggest one."

Robertson said policymakers may be better off focusing on disparities that occur before someone is even arrested, in areas such as economic development and education.

"Crime is associated with poverty, and race in America is associated with poverty, so I think some very front-end questions of social policy are really important," he said. "At the same time, I think, on the back end, to shift the focus, there's a growing consensus among people on the left and the right that our 40-year-long war on crime has been ineffectual in some ways and that we could make the criminal justice system much less severe and much less expensive and thereby reduce some of these same disparities."

Robertson also stresses that his study's results aren't the final word on prosecutor bias - a problem that still needs addressing, he said. Even after these findings, he remains a proponent of blinding prosecutors to defendants' race, a detail that is often not relevant to prosecutors after an arrest is made. Prosecutor blinding is the focus of Robertson's next research project.

Credit: 
University of Arizona

Faking emotions at work does more harm than good

The adage "Fake it until you make it" - the idea that someone can fake a positive attitude to elicit real-life benefits - often backfires when used with co-workers, according to a study led by a University of Arizona researcher.

Instead, researchers say, making an effort to actually feel the emotions you display is more productive.

Allison Gabriel, associate professor of management and organizations in the Eller College of Management, led a team that analyzed two types of emotion regulation that people use at work: surface acting and deep acting.

"Surface acting is faking what you're displaying to other people. Inside, you may be upset or frustrated, but on the outside, you're trying your best to be pleasant or positive," Gabriel said. "Deep acting is trying to change how you feel inside. When you're deep acting, you're actually trying to align how you feel with how you interact with other people."

The study surveyed working adults in a wide variety of industries including education, manufacturing, engineering and financial services.

"What we wanted to know is whether people choose to engage in emotion regulation when interacting with their co-workers, why they choose to regulate their emotions if there is no formal rule requiring them to do so, and what benefits, if any, they get out of this effort," Gabriel said.

Putting on a Happy Face

Gabriel says that when it comes to regulating emotions with co-workers, four types of people emerged from the study:

Nonactors, or those engaging in negligible levels of surface and deep acting;

Low actors, or those displaying slightly higher surface and deep acting;

Deep actors, or those who exhibited the highest levels of deep acting and low levels of surface acting; and,

Regulators, or those who displayed high levels of surface and deep acting.

In each study, nonactors made up the smallest group, with the other three groups being similar in size.
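
The four profiles above come from grouping respondents by their surface- and deep-acting scores; the published work used a person-centered (latent profile) analysis. As a rough illustration of how such groups can be recovered, the sketch below applies k-means, a simpler stand-in, to simulated scores.

```python
# Illustrative only: recover actor profiles from surface- and deep-acting scores.
# The study used latent profile analysis; k-means is a simpler stand-in here, and
# the scores below are simulated, not the study's data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Simulated group centers on a 1-5 scale: (surface acting, deep acting)
centers = {"nonactors": (1.5, 1.5), "low actors": (2.5, 2.5),
           "deep actors": (1.8, 4.2), "regulators": (4.2, 4.2)}
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in centers.values()])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
for k in range(4):
    surface, deep = X[kmeans.labels_ == k].mean(axis=0)
    print(f"profile {k}: mean surface acting {surface:.1f}, mean deep acting {deep:.1f}")
```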

The researchers identified several drivers for engaging in emotion regulation and sorted them into two categories: prosocial and impression management. Prosocial motives include wanting to be a good co-worker and cultivating positive relationships. Impression management motives are more strategic and include gaining access to resources or looking good in front of colleagues and supervisors.

The team found that regulators, in particular, were driven by impression management motives, while deep actors were significantly more likely to be motivated by prosocial concerns. This means that deep actors are choosing to regulate their emotions with co-workers to foster positive work relationships, as opposed to being motivated by gaining access to more resources.

Faking It Versus Feeling It

"The main takeaway," Gabriel says, "is that deep actors - those who are really trying to be positive with their co-workers - do so for prosocial reasons and reap significant benefits from these efforts."

According to the researchers, those benefits include receiving significantly higher levels of support from co-workers, such as help with workloads and offers of advice. Deep actors also reported significantly higher levels of progress on their work goals and trust in their co-workers than the other three groups.

The data also showed that mixing high levels of surface and deep acting resulted in physical and mental strain.

"Regulators suffered the most on our markers of well-being, including increased levels of feeling emotionally exhausted and inauthentic at work," Gabriel said.

Lessons Learned

While some managers Gabriel spoke to during the course of her research still believe emotions have little to do with the workplace, the study results suggest there is a benefit to displaying positive emotions during interactions at work, she said.

"I think the 'fake it until you make it' idea suggests a survival tactic at work," Gabriel said. "Maybe plastering on a smile to simply get out of an interaction is easier in the short run, but long term, it will undermine efforts to improve your health and the relationships you have at work."

"In many ways," Gabriel added, "it all boils down to, 'Let's be nice to each other.' Not only will people feel better, but people's performance and social relationships can also improve."

Credit: 
University of Arizona

Fewer than half of US clinical trials have complied with the law on reporting results

January 2020 is the third anniversary of the implementation of the new US regulations that require clinical trials to report results within one year of completion (Final Rule of the FDA Amendments Act)--but compliance remains poor, and is not improving, with US Government sponsored trials most likely to breach.

Less than half (41%) of clinical trial results are reported promptly onto the US trial registry, and 1 in 3 trials remain unreported, according to the first comprehensive study of compliance since new US regulations came into effect in January 2017.

The findings, published in The Lancet, indicate that trials with non-industry sponsors (such as universities, hospitals, and governments) are far more likely to breach the rules than trials sponsored by industry [1]--with US Government sponsored trials least likely to post results on time at the world's largest clinical trial registry, ClinicalTrials.gov.

It has been known for several decades that the results of clinical trials are often not fully reported. To improve public disclosure, and limit selective publishing of results, the US Food and Drug Administration Amendments Act (FDAAA) of 2007 requires sponsors of most US-regulated clinical trials to register and report results on ClinicalTrials.gov within 12 months of primary completion, irrespective of whether the results are positive or negative.

A subsequent 'Final Rule' to the Act took effect in January 2017. This introduced clearer reporting requirements, including fines of up to US$10,000 a day for non-compliance (now US$12,103 inflation adjusted). National Institutes of Health (NIH) leaders said that the Final Rule would result in "rapid increases" in the percentage of trials registered and shared on the US registry [2].

The authors say that the high rates of non-compliance found in the new study likely reflect the lack of enforcement by regulators, and they call for trial sponsors to be held to account by the FDA.

"Patients and clinicians cannot make informed choices about which treatments work best when trial results are routinely withheld. Clinical trials are not abstract research projects: they are large, expensive, practical evaluations that directly impact on patient care by informing treatment guidelines and evidence reviews." says Dr Ben Goldacre from Oxford University, UK, who led the research. [3]

He continues: "Sponsors are breaching their legal obligations, but also their ethical obligations to the patients who generously participate in clinical trials. Our study has identified over 2,400 trials breaching the rules, but to our knowledge the FDA has never levied a single fine or other enforcement action, despite all the levers available to them. Compliance will only improve when action is taken." [3]

Non-reporting of clinical trial results has been well documented since the 1980s, especially those trials finding no evidence of effectiveness for the treatment being tested [4]. However, failing to disclose trial results threatens the integrity of the evidence base of all clinical medicine, breaches participants' trust, and wastes valuable research resources.

The first trials covered by the Final Rule were due to report in January 2018. To investigate the extent of compliance with these new reporting requirements, the researchers examined all 4,209 trials registered on ClinicalTrials.gov that were legally required to report results between March 2018 and September 2019. They also assessed trends in compliance, factors associated with compliance, and ranked individual sponsors according to their level of compliance.

Of the completed trials included in the study, around half (52%; 2,178) had non-industry sponsors, most involved a drug intervention (71%; 2,968), and most were solely conducted in the USA (71%; 3,000).

Analyses found that only 41% (1,722/4,209) of completed clinical trials reported results within the one year legal deadline, whilst 36% (1,523/4,209) still had not been reported by September 16, 2019. Moreover, progress has stalled--the proportion of compliant trials has remained stable since July 2018. The median delay from completion to submitting results was 424 days--59 days higher than the legal reporting requirement of one year (figure 1).

Trials with an industry sponsor were much more likely to comply with the law than those with a non-industry or US Government sponsor (50% vs 34% vs 31% trials submitted in time). Better performance was also seen among sponsors with more experience of running large numbers of trials, when compared with those who have only ever run a very small number of projects (66% vs 21% trials submitted in time; table 3). Encouragingly, the authors say, this suggests that "research experience and robust internal governance processes can contribute to improved performance."

Further analyses estimate that had the law been strictly enforced, over US$4 billion in fines could have been collected up to the end of September 2019.
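
That figure is essentially days-overdue multiplied by the statutory daily penalty, summed over non-compliant trials. A toy version of the arithmetic is sketched below; the trial identifiers and delays are invented, and only the daily penalty amounts come from the article.

```python
# Toy illustration of the potential-fines arithmetic: days overdue x daily penalty,
# summed over overdue trials. Trials and delays are invented; only the penalty
# amounts come from the article.
DAILY_FINE_STATUTE = 10_000    # US$ per day, as written into the FDAAA
DAILY_FINE_ADJUSTED = 12_103   # US$ per day, inflation adjusted (per the article)

overdue_trials = [("trial-A", 120), ("trial-B", 400), ("trial-C", 30)]  # hypothetical

def potential_fines(trials, daily_fine):
    return sum(days_overdue * daily_fine for _, days_overdue in trials)

print(f"At the statutory rate: ${potential_fines(overdue_trials, DAILY_FINE_STATUTE):,}")
print(f"At the adjusted rate:  ${potential_fines(overdue_trials, DAILY_FINE_ADJUSTED):,}")
```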

"Over four decades since non-reporting of clinical trials was first reported, it is disappointing to see that we have only progressed to legislation being passed, and then largely ignored," says co-author Nicholas DeVito from the University of Oxford, UK. "The fact that the US Government cannot comply with its own laws is particularly concerning." [3]

He continues: "Until effective enforcement action is taken, public audit may help. We have established an openly accessible public website at fdaaa.trialstracker.net where fresh data on compliance with FDAAA will be posted every day, identifying each individual overdue trial, and compliance statistics for each individual sponsor. We hope this will help to incentivise sponsors, and provide useful targeted information for all those who aim to comply with the law." [3]

The authors note that they only examine the availability of results on ClinicalTrials.gov as required by the law, and not the quality of the results or their availability elsewhere.

Writing in a linked Comment, lead author Dr Erik von Elm (who was not involved in the study) from the University of Lausanne in Switzerland points out that, "any law is only as good as its enforcement", adding that, "if this rule were to be enforced, academic sponsors would probably make substantial efforts to reduce the number of non- or late-reported trials and to improve data quality. Training, auditing and incentive mechanisms could be overseen by dedicated staff. A senior "transparency officer" versed in trial conduct and reporting could take a proactive mentoring role and help investigators overcome barriers that currently prevent them from timely reporting of trial results in registries. If completeness of reporting was a criterion in individual academic evaluations, this could have a considerable "signalling effect" within the local research community."

Credit: 
The Lancet

New dog, old tricks? Stray dogs can understand human cues

If you have a dog, hopefully you're lucky enough to know that they are highly attuned to their owners and can readily understand a wide range of commands and gestures. But are these abilities innate or are they exclusively learned through training?

To find out, a new study in Frontiers in Psychology investigated whether untrained stray dogs could understand human pointing gestures.

The study revealed that about 80% of participating dogs successfully followed pointing gestures to a specific location despite having never received prior training. The results suggest that dogs can understand complex gestures by simply watching humans and this could have implications in reducing conflict between stray dogs and humans.

Dogs were domesticated 10,000-15,000 years ago, likely making them the oldest domesticated animals on the planet. Humans then bred dogs with the most desirable and useful traits so that they could function as companions and workers, leading to domesticated dogs that are highly receptive to human commands and gestures.

However, it was not clear whether dogs understand us through training alone, or whether this was innate. Can dogs interpret a signal, such as a gesture, without specific training, or even without having met the signaling person previously? One way to find out is to see whether untrained, stray dogs can interpret and react to human gestures.

Stray dogs are a common feature in cities around the world and particularly in many developing countries. While they may observe and occasionally interact with people, such dogs have never been trained, and are behaviorally "wild". Conflicts between stray dogs and humans are a problem and understanding how humans shape stray dog behavior may help alleviate this.

To investigate, Dr. Anindita Bhadra of the Indian Institute of Science Education and Research Kolkata, India, and colleagues studied stray dogs across several Indian cities. The researchers approached solitary stray dogs and placed two covered bowls on the ground near them. A researcher then pointed to one of the two bowls, either momentarily or repeatedly, and recorded whether the dog approached the indicated bowl. They also recorded the perceived emotional state of the dogs during the experiment.

Approximately half of the dogs did not approach either bowl. However, the researchers noticed that these dogs were anxious and may have had bad experiences with humans before. The dogs who approached the bowls were noted as friendlier and less anxious, and approximately 80% correctly followed the pointing signals to one of the bowls, regardless of whether the pointing was momentary or repeated. This suggests that the dogs could indeed decipher complex gestures.
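
The central quantitative claim is that roughly 80% of the approaching dogs chose the indicated bowl, well above the 50% expected if they picked bowls at random. A quick exact binomial check of that kind of result is sketched below; the counts are illustrative, since the article reports proportions rather than exact numbers.

```python
# Exact binomial check: is following the pointed-to bowl ~80% of the time better
# than the 50% expected by chance? Counts are illustrative, not the study's data.
from math import comb

def binomial_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): one-sided test against chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_approached = 40        # hypothetical number of dogs that approached a bowl
n_followed = 32          # ~80% of them chose the indicated bowl
p_value = binomial_p_at_least(n_followed, n_approached)
print(f"{n_followed}/{n_approached} followed the point; one-sided P under chance = {p_value:.5f}")
```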

"We thought it was quite amazing that the dogs could follow a gesture as abstract as momentary pointing," explained Bhadra. "This means that they closely observe the human, whom they are meeting for the first time, and they use their understanding of humans to make a decision. This shows their intelligence and adaptability."

The results suggest that dogs may have an innate ability to understand certain human gestures which transcends training. However, it should be noted that the shyer, more anxious animals tended not to participate, so future studies are needed to determine more precisely how an individual dog's personality affects their ability to understand human cues.

Overall, dogs may be more perceptive than we realize. "We need to understand that dogs are intelligent animals that can co-exist with us," said Bhadra. "They are quite capable of understanding our body language and we need to give them their space. A little empathy and respect for another species can reduce a lot of conflict."

Credit: 
Frontiers

Study finds disparity in critical care deaths between non-minority and minority hospitals

image: Disparities found in ICU deaths at hospitals with few minority patients vs those with a large number of minority patients.

Image: 
ATS

Jan. 17, 2020--While deaths steadily declined over a decade in intensive care units at hospitals with few minority patients, in ICUs with large numbers of minority patients, there was less improvement, according to new research published online in the American Thoracic Society's American Journal of Respiratory and Critical Care Medicine.
The disparity was most pronounced among critically ill African American patients.

In "Temporal Trends in Critical Care Outcomes in United States Minority Serving Hospitals," lead author John Danziger, MD, MPhil, and colleagues report on their analysis of nearly 1.1 million patients hospitalized at more than 200 hospitals that participated in the telehealth platform provided by Philips Healthcare from 2006-16.

In addition to studying mortality, the researchers looked at length of stay in the ICU and in the hospital. The data showed a similar pattern of improvement over the decade at non-minority serving hospitals and less improvement at minority serving hospitals.

"We wanted to know whether racial inequalities, previously described across a range of health care environments, extend into the highest level of care, namely the ICU," said Dr. Danziger, an assistant professor of medicine at Harvard Medical School and a physician at Beth Israel Deaconess Medical Center in Boston.

For the purposes of this study, the authors defined minority serving hospitals in two ways. The first defined such hospitals as having twice as many minority patients as would be expected based on the percentage of African American or Hispanic residents living in the region according to the U.S. Census. The second defined a minority serving hospital as having more than 25 percent of its ICU patients identify as African American or Hispanic. The two definitions yielded similar results.
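
Expressed in code, the two definitions are simple threshold rules; a sketch with hypothetical inputs follows (the study itself worked from patient-level records and Census data).

```python
# Sketch of the two minority-serving-hospital definitions described above.
# The input fractions are hypothetical; the study derived them from patient
# records and U.S. Census data.
def is_minority_serving(hospital_minority_share, regional_minority_share):
    """Definition 1: at least twice the minority share expected from the region.
    Definition 2: more than 25 percent of ICU patients identify as African
    American or Hispanic. Returns both classifications."""
    by_regional_ratio = hospital_minority_share >= 2 * regional_minority_share
    by_fixed_threshold = hospital_minority_share > 0.25
    return by_regional_ratio, by_fixed_threshold

# Hypothetical hospitals: (ICU minority share, regional minority share from Census)
for name, hospital_share, regional_share in [("Hospital A", 0.40, 0.15),
                                             ("Hospital B", 0.20, 0.18),
                                             ("Hospital C", 0.30, 0.28)]:
    d1, d2 = is_minority_serving(hospital_share, regional_share)
    print(f"{name}: definition 1 -> {d1}, definition 2 -> {d2}")
```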

The study found that over a decade:

Nearly a third of critically ill African American patients and half of critically ill Hispanic patients were treated at only 14 of the surveyed hospitals.

ICU deaths declined steadily (by about 2 percent each year) at non-minority hospitals, but showed no decline after the first few years at minority hospitals.

Lengths of ICU and hospital stay were longer at minority hospitals than at non-minority hospitals.

African Americans treated at non-minority hospitals experienced a 3 percent decline in mortality each year, compared with no decline in mortality when treated at minority hospitals.

To avoid biasing results, the researchers took into account a range of variables, including age, gender, admission diagnosis, severity of illness and co-existing health problems. They found that minority serving hospitals tended to care for younger, but sicker patients.

The authors said that their study could not determine whether the health disparities they observed "reflect caring for an increasingly disadvantaged population" or differences in hospital resources. The researchers did find that patients at minority serving hospitals had to wait longer to be admitted to the ICU from the emergency room than patients who were treated in non-minority serving hospitals, suggesting that differences in resources contributed to the findings of their study.

Still, said Dr. Danziger, "The observation that large numbers of critically ill minorities are cared for in poorer performing ICUs gives us an important target for focused research efforts and additional resources to help close the health care divide amongst different minorities in the United States."

Credit: 
American Thoracic Society

Psychology program for refugee children improves wellbeing

A positive psychology program created by researchers at Queen Mary University of London promotes wellbeing in refugee children. It is unusual in that it focuses on building positive outcomes rather than addressing exposure to war trauma.

This is the first positive psychology-based intervention to be systematically evaluated for use with refugee children.

The purpose of the intervention, known as Strengths for the Journey, is to build positive psychological resources in young refugees - such as positive emotions, character strengths, optimistic thinking, community and nature connectedness, hope, and mindfulness - in order to promote their wellbeing and resilience.

In a study, published in the journal Development and Psychopathology, the researchers show that the intervention improved the children's positive outcomes (wellbeing, optimism, and self-esteem) and reduced their depressive symptoms.

Specifically, it more than doubled participants' ratings of their wellbeing and optimism and led to dramatic reductions in depressive symptoms.

Dr Sevasti Foka, lead author of the study from the Department of Biological and Experimental Psychology at Queen Mary University of London, said: "The key finding of the study is that the Strengths for the Journey intervention seems to be quite effective. Our results suggest that short, inexpensive positive psychology interventions such as Strengths for the Journey can lead to real improvements in refugee children's mental health and wellbeing, even when those children are experiencing the many challenges of living in a refugee camp."

The program is delivered over a seven-day period and was evaluated in refugee camps in Lesvos, Greece, with 72 children aged 7 to 14 years. The children were predominantly displaced from Syria and Afghanistan.

Over a million refugees have arrived by sea in Greece in the last four years, almost half of whom are under 18 years old*. Upon arrival in Greece, many are placed in camps with limited access to school and mental health services, and report high rates of attempted suicide, panic attacks, anxiety, and aggressive outbursts**.

The intervention is one of very few programs developed specifically for use with children living in refugee camps. Refugee camps are quite a different context from resettlement or settlement in host communities and are very challenging places for children to live.

The researchers suggest that the intervention should be expanded to a larger group of refugee children in Greece - and potentially those living in refugee camps elsewhere - because it has real potential to improve their mental health and wellbeing.

Isabelle Mareschal, an author of the study from Queen Mary University of London, said: "It seems like child refugees living in low-resource settings like refugee camps would benefit from Strengths for the Journey or other short positive psychology interventions that promote resilience."

In the study, the researchers ran a pilot evaluation using a wait-list controlled trial design to see whether the intervention improved children's mental health and wellbeing. This involved providing the program to one group of children and then providing it to the next group a little later in order to compare outcomes while still providing treatment for all participants.
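
In practical terms, a wait-list controlled comparison contrasts pre-to-post change in the group that received the program with change over the same interval in the group still waiting. The sketch below runs that comparison on simulated wellbeing scores; it is not the study's data or its statistical analysis.

```python
# Illustrative wait-list comparison on simulated wellbeing scores.
# Not the study's data or analysis; only the logic of the design.
import numpy as np

rng = np.random.default_rng(1)
n = 36  # children per group (hypothetical)

pre_treat = rng.normal(4.0, 1.0, n)
post_treat = pre_treat + rng.normal(2.5, 1.0, n)   # simulated improvement after the program
pre_wait = rng.normal(4.0, 1.0, n)
post_wait = pre_wait + rng.normal(0.3, 1.0, n)     # little change while still waiting

change_treat = post_treat - pre_treat
change_wait = post_wait - pre_wait
pooled_sd = np.sqrt((change_treat.var(ddof=1) + change_wait.var(ddof=1)) / 2)
cohens_d = (change_treat.mean() - change_wait.mean()) / pooled_sd

print(f"Mean change, intervention group: {change_treat.mean():.2f}")
print(f"Mean change, wait-list group:    {change_wait.mean():.2f}")
print(f"Cohen's d for the difference in change: {cohens_d:.2f}")
```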

The researchers improved on most previous work in three ways: by running a controlled trial, by focusing on children living in refugee camps, and by looking at positive outcomes rather than just psychopathology.

Dr Kristin Hadfield, the corresponding author of the study from Queen Mary University of London, added: "In addition to the intervention being different, our evaluation of it is also different. Very few interventions for use with refugees are rigorously evaluated. NGOs and governments spend a lot of money attempting to improve refugee children's outcomes through various programmes, but many do not actually check whether these are effective. We have compared changes in the kids who took part in Strengths for the Journey with changes in kids living in the camps who did not take part to see whether those who took part in Strengths for the Journey did better."

Credit: 
Queen Mary University of London

Climate may play a bigger role than deforestation in rainforest biodiversity

image: By measuring characteristics like ear, foot, and tail size in species like Euryoryzomys russatus, researchers can quantify functional diversity in large rainforests.

Image: 
Photo courtesy of Ricardo S. Bovendorp

"Save the rainforests" is a snappy slogan, but it doesn't tell the full story of how complicated it is to do just that. Before conservationists can even begin restoring habitats and advocating for laws that protect land from poachers and loggers, scientists need to figure out what's living, what's dying, and which patterns explain why. Tackling these questions--in other words, finding out what drives a region's biodiversity--is no small task.

The better we measure what's in these rainforests, the more likely we are to find patterns that inform conservation efforts. A new study in Biotropica, for instance, crunched numbers on a behemoth dataset on small mammals in South America and found something surprising in the process: that climate may affect biodiversity in rainforests even more than deforestation does.

Noé de la Sancha, a scientist at the Field Museum in Chicago, professor at Chicago State University, and the paper's lead author, stresses that changing how we measure biodiversity can uncover patterns like these.

"When we think about biodiversity, we usually think about the number of species in a particular place--what we call taxonomic diversity," says de la Sancha. "This paper aims to incorporate better measures of biodiversity that include functional and phylogenetic diversity."

Functional diversity looks at biodiversity based on the roles organisms play in their respective ecosystems. Rather than simply counting the species in an area, scientists can use categories--"Do these mammals primarily eat insects, or do they primarily eat seeds?" and "Do they only live on the forest floor, or do they live in trees?" as well as quantitative characters like weight and ear, foot, and tail size, for instance--to determine and quantify how many different ecological roles a habitat can sustain.

Meanwhile, phylogenetic diversity looks at how many branches of the animal family tree are represented in a given area. By this measure, a patch of land consisting almost entirely of closely-related rodents would be considered far less diverse than another that was home to a wide genetic range of rodents, marsupials, and more--even if the two patches of land had the same number of species.

By applying these approaches to data on all known small mammal species and those species' characteristics, scientists are able to see the bigger picture, uncovering patterns they wouldn't have found using any single dimension of diversity alone.
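
To make these dimensions concrete, the sketch below computes a simple functional-diversity score (mean pairwise distance between species' traits) and Faith's phylogenetic diversity (total branch length spanned) on toy data. The paper's actual metrics and dataset are richer; this only illustrates the general idea.

```python
# Toy illustration of two biodiversity dimensions beyond species counts.
# Trait values and branch lengths are invented; the study used real trait and
# phylogenetic data for Atlantic Forest small mammals.
import numpy as np

# Functional diversity: mean pairwise distance between species' standardized
# traits (e.g., body mass, ear/foot/tail size, diet category).
traits = np.array([
    [0.2, 0.5, 0.1],   # species A
    [0.8, 0.4, 0.9],   # species B
    [0.3, 0.6, 0.2],   # species C
])
pairs = [(i, j) for i in range(len(traits)) for j in range(i + 1, len(traits))]
functional_diversity = np.mean([np.linalg.norm(traits[i] - traits[j]) for i, j in pairs])

# Phylogenetic diversity (Faith's PD): total branch length of the subtree that
# connects the species present at a site (toy tree: A and C share an ancestor).
branch_lengths = {"A": 2.0, "B": 5.0, "C": 2.5, "AC_ancestor": 3.0}
subtree_branches = ["A", "C", "AC_ancestor"]   # branches kept after pruning to A and C
pd_site = sum(branch_lengths[b] for b in subtree_branches)

print(f"Functional diversity (mean pairwise trait distance): {functional_diversity:.2f}")
print(f"Faith's PD for a site holding species A and C: {pd_site:.1f}")
```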

This is how de la Sancha and his co-authors found, based on functional and phylogenetic measures, that while deforestation causes local extinctions, climate-related variables had more of an effect on small mammal biodiversity patterns across the entire forest system.

In other words, if a section of rainforest was cut down, some of the animals living there might disappear from that area, while the same species living in intact patches of rainforest could survive. And, the researchers found, even if a species disappears from one area, different species that play a similar role in the ecosystem tend to replace them in other forest patches and other parts of the forest system. Meanwhile, changes to the climate may have big, sweeping effects on a whole rainforest system. This study found that BIO9, a bioclimatic variable measuring mean temperature of the driest quarter--more simply put, how hot the forest is in its least rainy season--affects biodiversity across the whole forest system.

Knowing these climate variables play a role in rainforest health can be concerning. This study and others provide strong evidence of climate change's effects on large ecosystems, underlining the urgency of studying and protecting habitats like the Atlantic Forest, the South American forest system at the center of the study.

"We still have so much that we don't know about so many of these species, which underlines the necessity for more fieldwork," de la Sancha says. "Once we have more specimens, we can improve how we quantify functional diversity and our understanding of why these small mammals evolved the way they did. From there, we can keep better track of biodiversity in these areas, leading to improved models and conservation strategies down the line."

Still, with only 9-16 percent of the Atlantic Forest's original habitat space remaining, this study lends a silver lining to an otherwise grim narrative about the effects of human activity on rainforests.

"I think this gives us a little bit of hope. As long as we have forest--and we need to have forest still--we can maintain biodiversity on a large scale," de la Sancha says. "As long as we don't wipe it all out, there's good evidence to show that we can maintain biodiversity, at least for small mammals, and the ecosystem services these critters provide."

Credit: 
Field Museum

Observational study explores fish oil supplements, testicular function in healthy young men

What The Study Did: An observational study of nearly 1,700 young healthy Danish men looked at how fish oil supplements were associated with testicular function as measured by semen quality and reproductive hormone levels. Limitations of this study include a lack of information on the actual concentration of omega-3 fatty acids in the fish oil supplements self-reported by the men. Researchers suggest randomized clinical trials are needed.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

Author: Tina Kold Jensen, Ph.D., University of Southern Denmark, Odense, and coauthors.

(10.1001/jamanetworkopen.2019.19462)

Editor's Note: The article includes funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Study traces evolution of acoustic communication

image: Frogs, like mammals, originated as predominantly nocturnal animals, but maintained the ability to communicate acoustically after switching to being active during the day.

Image: 
Peter Trimming/Creative Commons

Imagine taking a hike through a forest or a stroll through a zoo in which not a sound fills the air, other than the occasional chirp from a cricket. No birds singing, no tigers roaring, no monkeys chattering, and no human voices, either. Acoustic communication among vertebrate animals is such a familiar experience that it seems impossible to imagine a world shrouded in silence.

But why did the ability to shout, bark, bellow or moo evolve in the first place? In what is likely the first study to trace the evolution of acoustic communication across terrestrial vertebrates, John J. Wiens of the University of Arizona and Zhuo Chen, a visiting scientist from Henan Normal University in Xinxiang, China, followed its origins back 350 million years.

The authors assembled an evolutionary tree for 1,800 species showing the evolutionary relationships of mammals, birds, lizards and snakes, turtles, crocodilians, and amphibians going back 350 million years. They obtained data from the scientific literature on the absence and presence of acoustic communication within each sampled species and mapped it onto the tree. Applying statistical analytical tools, they tested whether acoustic communication arose independently in different groups and when; whether it is associated with nocturnal activity; and whether it tends to be preserved in a lineage.
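
To give a flavor of the mapping step, the sketch below runs a Fitch-parsimony pass for a presence/absence trait over a toy tree. The tree and tip states are invented, and the published study used model-based statistical methods on 1,800 real species; this is only a simplified illustration of inferring ancestral states from data at the tips.

```python
# Simplified illustration of mapping a presence/absence trait onto a phylogeny:
# a Fitch-parsimony bottom-up pass over a toy tree. Topology and tip states are
# invented; the study used 1,800 species and model-based statistical tools.

# Tip states: 1 = acoustic communication present, 0 = absent (toy data).
tip_states = {"sp1": {1}, "sp2": {1}, "sp3": {0}, "sp4": {0}, "sp5": {1}, "sp6": {0}}

# Tree as nested tuples: (name, child, child) for internal nodes, (name,) for tips.
tree = ("root",
        ("cladeA", ("sp1",), ("sp2",)),
        ("cladeB",
         ("sp3",),
         ("cladeC", ("sp4",), ("cladeD", ("sp5",), ("sp6",)))))

def fitch(node, states):
    """Fill `states` with the parsimony state set of every node; return this node's set."""
    name, *children = node
    if not children:
        states[name] = tip_states[name]
    else:
        child_sets = [fitch(child, states) for child in children]
        common = set.intersection(*child_sets)
        states[name] = common if common else set.union(*child_sets)
    return states[name]

states = {}
fitch(tree, states)
for name in ("cladeA", "cladeB", "root"):
    print(f"{name}: parsimony state set {states[name]}")  # {0, 1} means ambiguous
```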

The study, published in the open-access journal Nature Communications, revealed that the common ancestor of land-living vertebrates, or tetrapods, did not have the ability to communicate through vocalization - in other words, using their respiratory system to generate sound as opposed to making noise in other ways, such as clapping hands or banging objects together. Instead, acoustic communication evolved separately in mammals, birds, frogs and crocodilians in the last 100-200 million years, depending on the group. The study also found that the origins of communication by sound are strongly associated with a nocturnal lifestyle.

This makes intuitive sense because once light is no longer available to show off visual cues such as color patterns to intimidate a competitor or attract a mate, transmitting signals by sound becomes an advantage.

Extrapolating from the species in the sample, the authors estimate that acoustic communication is present in more than two-thirds of terrestrial vertebrates. While some of the animal groups readily come to mind for their vocal talents - think birds, frogs and mammals - crocodilians as well as a few turtles and tortoises have the ability to vocalize.

Interestingly, the researchers found that even in lineages that switched over to a diurnal (active by day) lifestyle, the ability to communicate via sound tends to be retained.

"There appears to be an advantage to evolving acoustic communication when you're active at night, but no disadvantage when you switch to being active during the day," Wiens said. "We have examples of acoustic communication being retained in groups of frogs and mammals that have become diurnal, even though both frogs and mammals started out being active by night hundreds of millions of years ago."

According to Wiens, birds kept on using acoustic communication even after becoming diurnal for the most part. Interestingly, many birds sing at dawn, as every birdwatcher can attest. Although speculative, it is possible that this "dawn chorus" behavior might be a remnant of the nocturnal ancestry of birds.

In addition, the research showed that acoustic communication appears to be a remarkably stable evolutionary trait. In fact, the authors raise the possibility that once a lineage has acquired the ability to communicate by sound, the tendency to retain that ability might be more stable than other types of signaling, such as conspicuous coloration or enlarged, showy structures.

In another unexpected result, the study revealed that the ability to vocalize does not appear to be the driver of diversification - the rate at which a lineage evolves into new species - that it has long been believed to be.

To illustrate this finding, Wiens pointed to birds and crocodilians: Both lineages have acoustic communication and go back roughly 100 million years, but while there are close to 10,000 bird species known, the list of crocodilians doesn't go past 25. And while there are about 10,000 known species of lizards and snakes, most go about their lives without uttering a sound, as opposed to about 6,000 mammalian species, 95% of which vocalize.

"If you look at a smaller scale, such as a few million years, and within certain groups like frogs and birds, the idea that acoustic communication drives speciation works out," Wiens said, "but here we look at 350 million years of evolution, and acoustic communication doesn't appear to explain the patterns of species diversity that we see."

The authors point out that their findings likely apply not only to acoustic communication, but also to other evolutionary traits driven by the ecological conditions known to shape the evolution of species. While it had been previously suggested that ecology was important for signal evolution, it was thought to apply mostly to subtle differences among closely related species.

"Here, we show that this idea of ecology shaping signal evolution applies over hundreds of millions of years and to fundamental types of signals, such as being able to communicate acoustically or not," Wiens said.

Credit: 
University of Arizona

Internet use reduces study skills in university students

video: Research conducted at Swansea University and the University of Milan has shown that students who use digital technology excessively are less motivated to engage with their studies, and are more anxious about tests.

Image: 
Roberto Truzoli, Caterina Vigano, Paolo Gabriele Galmozzi, Phil Reed

Research conducted at Swansea University and the University of Milan has shown that students who use digital technology excessively are less motivated to engage with their studies, and are more anxious about tests. This effect was made worse by the increased feelings of loneliness that use of digital technology produced.

Two hundred and eighty-five university students, enrolled on a range of health-related degree courses, participated in the study. They were assessed for their use of digital technology, their study skills and motivation, anxiety, and loneliness. The study found a negative relationship between internet addiction and motivation to study. Students reporting more internet addiction also found it harder to organise their learning productively, and were more anxious about their upcoming tests. The study also found that internet addiction was associated with loneliness, and that this loneliness made study harder.

Professor Phil Reed of Swansea University said: "These results suggest that students with high levels of internet addiction may be particularly at risk from lower motivations to study, and, hence, lower actual academic performance."

About 25% of the students reported that they spent over four hours a day online, with the rest indicating that they spent between one and three hours a day. The main uses of the internet for the student sample were social networking (40%) and information seeking (30%).

Professor Roberto Truzoli of the University of Milan said: "Internet addiction has been shown to impair a range of abilities such as impulse control, planning, and sensitivity to rewards. A lack of ability in these areas could well make study harder."

In addition to the links between levels of internet addiction and poor study motivation and ability, internet addiction was found to be associated with increased loneliness. The results indicated that loneliness, in turn, made studying harder for the students.
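
That chain of findings is a mediation-style claim: internet addiction relates to loneliness, which in turn relates to poorer study outcomes. A bare-bones regression sketch of the idea, on simulated scores rather than the study's data or its actual statistical models, follows.

```python
# Bare-bones mediation-style sketch: internet addiction -> loneliness -> study problems.
# Scores are simulated; the published study used validated scales and its own models.
import numpy as np

rng = np.random.default_rng(42)
n = 285  # matches the reported sample size, but the scores here are invented

addiction = rng.normal(0, 1, n)
loneliness = 0.5 * addiction + rng.normal(0, 1, n)                          # path a
study_problems = 0.4 * loneliness + 0.1 * addiction + rng.normal(0, 1, n)   # paths b, c'

def coeffs(y, predictors):
    """OLS coefficients (excluding the intercept) via least squares."""
    design = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1:]

a = coeffs(loneliness, [addiction])[0]
b, c_prime = coeffs(study_problems, [loneliness, addiction])
c_total = coeffs(study_problems, [addiction])[0]

print(f"a  (addiction -> loneliness):         {a:.2f}")
print(f"b  (loneliness -> study problems):    {b:.2f}")
print(f"c' (direct effect, loneliness held):  {c_prime:.2f}")
print(f"c  (total effect): {c_total:.2f};  indirect a*b = {a * b:.2f}")
```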

The study suggests that loneliness plays a large role in positive feelings about academic life in higher education. The poorer social interactions that are known to be associated with internet addiction make loneliness worse, and, in turn, impact on motivation to engage in a highly social educational environment such as a university.

Professor Reed added: "Before we continue down a route of increasing digitisation of our academic environments, we have to pause to consider if this is actually going to bring about the results we want. This strategy might offer some opportunities, but it also contains risks that have not yet been fully assessed."

Credit: 
Swansea University

Rich rewards: Scientists reveal ADHD medication's effect on the brain

image: The fMRI machine located in the IDOR imaging suite. fMRI is used to indirectly detect increased neuronal activity in an area of the brain by measuring changes in the ratio of oxygenated to deoxygenated blood.

Image: 
Andressa Dias Lemos, IDOR

Attention-deficit hyperactivity disorder (ADHD) is a neurobiological disorder characterized by symptoms of hyperactivity, inattention and impulsivity. People with the condition are often prescribed a stimulant drug called methylphenidate, which treats these symptoms. However, scientists do not fully understand how the drug works.

Now, researchers at the Okinawa Institute of Science and Technology Graduate University (OIST) have identified how certain areas of the human brain respond to methylphenidate. The work may help researchers understand the precise mechanism of the drug and ultimately develop more targeted medicines for the condition.

Previous research suggests that people with ADHD have different brain responses when anticipating and receiving rewards, compared to individuals without ADHD. Scientists at OIST have proposed that in those with ADHD, neurons in the brain release less dopamine - a 'feel-good' neurotransmitter involved in reward-motivated behavior - when a reward is expected, with dopamine neurons firing more when a reward is given.

"In practice, what this means is that children, or even young adults, with ADHD may have difficulty engaging in behavior that doesn't result in an immediate positive outcome. For example, children may struggle to focus on schoolwork, as it may not be rewarding at the time, even though it could ultimately lead to better grades. Instead, they get distracted by external stimuli that are novel and interesting, such as a classmate talking or traffic noises," said Dr Emi Furukawa, first author of the study and a researcher in the OIST Human Developmental Neurobiology Unit, led by Professor Gail Tripp.

Scientists believe that methylphenidate helps people with ADHD maintain focus by influencing dopamine availability in the brain. Therefore, Dr Furukawa and her colleagues set out to examine how the drug affects a brain region called the ventral striatum, which is a vital component of the reward system and where dopamine is predominantly released.

"We wanted to take a look at how methylphenidate affects the ventral striatum's responses to reward cues and delivery," said Furukawa.

The study, which was recently published in the journal Neuropharmacology, was jointly conducted with scientists at D'Or Institute for Research and Education (IDOR) in Rio de Janeiro, Brazil. The collaboration allowed the researchers to combine expertise across multiple disciplines and provided access to IDOR's functional magnetic resonance imaging (fMRI) facility.

Delving into the brain

The researchers used fMRI to measure brain activity in young adults with and without ADHD as they played a computer game that simulated a slot machine. The researchers scanned individuals in the ADHD group on two separate occasions - once when they took methylphenidate and another time when they took a placebo pill. Each time the reels of the slot machine spun, the computer also showed one of two cues, either the Japanese character み (mi) or そ (so). While familiarizing themselves with the game before being scanned, the participants quickly learned that when the slot machine showed み, they often won money, but when the slot machine showed そ, they didn't. The symbol み therefore acted as a reward-predicting cue, whereas そ acted as a non-reward-predicting cue.

The researchers found that when individuals with ADHD took the placebo, neuronal activity in the ventral striatum was similar in response to both the reward predicting and non-reward predicting cue. However, when they took methylphenidate, activity in the ventral striatum increased only in response to the reward cue, showing that they were now able to more easily discriminate between the two cues.

The researchers also explored how neuronal activity in the striatum correlated with neuronal activity in the medial prefrontal cortex - a brain region involved in decision-making that receives information from the outside world and communicates with many parts of the brain, including the striatum.

When the individuals with ADHD took placebo instead of methylphenidate, neuronal activity in the striatum correlated strongly with activity in the prefrontal cortex at the exact moment the reward was delivered and the participants received money from the slot machine game. Therefore, the researchers believe that in people with ADHD, the striatum and the prefrontal cortex communicate more actively, which may underlie their increased sensitivity to rewarding external stimuli. In participants who took methylphenidate, this correlation was low, as it was in people without ADHD.

The results implicate a second neurotransmitter, norepinephrine, in the therapeutic effects of methylphenidate. Norepinephrine is released by a subset of neurons common in the prefrontal cortex. Researchers speculate that methylphenidate might boost levels of norepinephrine in the prefrontal cortex, which in turn regulates dopamine firing in the striatum when rewards are delivered.

"It's becoming clear to us that the mechanism by which methylphenidate modulates the reward response is very complex," said Furukawa.

Tailoring New Therapies for ADHD

Despite the complexity, the scientists believe that further research could elucidate methylphenidate's mechanism of action, which could benefit millions of people worldwide.

Pinning down how methylphenidate works may help scientists develop better therapies for ADHD, said Furukawa. "Methylphenidate is effective but has some side effects, so some people are hesitant to take the medication or give it to their children," she explained. "If we can understand what part of the mechanism results in therapeutic effects, we could potentially develop drugs that are more targeted."

Furukawa also hopes that understanding how methylphenidate impacts the brain could help with behavioral interventions. For example, by keeping in mind the difference in brain responses when children with ADHD anticipate and receive rewards, parents and teachers could instead help children with ADHD stay focused by praising them frequently and reducing the amount of distracting stimuli in the environment.

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

Rethinking interactions with mental health patients

New research overturns the belief that people with severe mental illness are incapable of effective communication with their psychiatrist, showing that they are able to work together with them to achieve better outcomes for themselves.

"Interviews are a critical part of assessing people suffering from thought disorder (TD), and deciding what the best therapy is for them," says Professor Cherrie Galletly from the Adelaide Medical School, University of Adelaide.

"Clinical interactions with people suffering with severe mental illness can be challenging, especially if the patient has disordered communication."

Published in the journal Australasian Psychiatry, the study analysed 24 routine clinical interviews between psychiatrists and inpatients with a mean age of just under 30 years who were suffering from TD.

"The study, the first of its kind, examined the expertise with which psychiatrists conducted clinical interviews of people suffering from TD, and the shared goals that were accomplished," says Professor Galletly.

"When interviewing people with TD psychiatrists need to adopt a mindset that the information the patient provides in that particular moment is, for them, meaningful, truthful, relevant and clear.

"They have to piece together snippets of information in order to create and interpret meaning and build respectful relationships by inviting patients to share their perspectives no matter how disordered or delusional their responses appear."

Thought disorder is common in psychotic disorders. The thoughts and conversation of people suffering from TD appear illogical and lacking in sequence and may be delusional or bizarre in content.

In 2010, 0.3% of Australians aged 18-64 years had a psychotic illness, with men aged 25-34 experiencing the highest rate of illness (0.5%).

"Patients are positioned as active participants by psychiatrists who adopt a non-confrontational, non-judgemental approach, conveying support and safety, and ask open ended questions which allows the patient to engage, feel listened to, and work with the psychiatrist to achieve a shared understanding," says Professor Galletly.

"Findings from this study of sample interviews between psychiatrists and their patients highlight the need to rethink the notion that patients experiencing TD are incapable of communicating productively with the people trying to help them.

"Psychiatrists use transactional, relational and interactional techniques when they are talking to patients with thought disorder, which go beyond techniques normally employed in clinical interviews.

"Experienced psychiatrists undertake meaningful interviews with these patients, who in turn respond in ways that belie the notion that effective communication is not possible.

"The findings from this research can be used to develop training resources for clinicians who work with people with psychotic disorders."

Credit: 
University of Adelaide

Molecules move faster on a rough terrain

image: 3D rendering of polymer chains near the asperities of a rough substrate. Faster molecules are depicted in warmer colors.

Image: 
© ULB

Roughness, the presence of irregularities on a surface, is commonly associated with slower motion and stickiness. This holds at different length scales: at human scale (1 meter), it takes longer to walk along a path that goes up and down than to walk on a flat road. At the scale of smaller objects (1/100 - 1/1000 meter), Italians use pasta shapes with a rough surface, such as rigatoni, so that tomato sauce and cheese adhere better. Until now, however, no experiment had been able to test whether the behavior of molecules really follows the same trend observed at human scale.

Now, writing in Physical Review Letters, Cristian Rodriguez-Tinoco and a team at the Université libre de Bruxelles (ULB) Faculty of Sciences led by Simone Napolitano show that large molecules actually move faster in the proximity of rougher surfaces at the nanometric scale. Their experiments clearly demonstrate that the common belief that surface irregularities help molecules stick to a surface is actually wrong. When the surface roughness, that is, the average distance between the tiny hills and valleys present on the surface of a material, is reduced to a few nanometers (1 nm = one billionth of a meter), molecules of P4ClS, a type of polymer, start to move faster.

Detecting molecular motion is not easy: molecules move fast (up to 1 million steps per second and more) and their displacements are too small to be observed with microscopes. Performing such experiments on a rough surface is even more complicated, because of its uneven character and the difficulty of adjusting the size and distribution of the surface irregularities. The ULB team was able to form rough surfaces of aluminum by evaporating the metal in a controlled way. To measure how fast the molecules move, the researchers applied weak electric fields and recorded how quickly the molecules respond to the stimulus.
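
The measurement comes down to finding the frequency at which the molecules' response to the oscillating field peaks, which sets their relaxation time. A minimal sketch using a single Debye relaxation is shown below; real polymer spectra are broader, and the article does not specify the model actually fitted.

```python
# Minimal sketch: extract a relaxation time from a dielectric loss spectrum,
# assuming a single Debye process. Parameter values are hypothetical.
import numpy as np

tau_true = 1e-3      # seconds; hypothetical relaxation time of the molecules
delta_eps = 2.0      # dielectric relaxation strength (hypothetical)

freq = np.logspace(0, 6, 600)                   # 1 Hz to 1 MHz
omega = 2 * np.pi * freq
loss = delta_eps * omega * tau_true / (1 + (omega * tau_true) ** 2)   # Debye eps''

# The loss peaks where omega * tau = 1, so the peak frequency gives the relaxation time.
f_peak = freq[np.argmax(loss)]
tau_estimated = 1 / (2 * np.pi * f_peak)

print(f"Dielectric loss peaks at {f_peak:.1f} Hz")
print(f"Estimated relaxation time: {tau_estimated:.2e} s (true value {tau_true:.1e} s)")
# Faster molecular motion near a rougher surface shows up as a peak shifted to
# higher frequency, i.e. a shorter relaxation time.
```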

Surprisingly, the team noticed that molecules near a rough substrate behave as if they were surrounded by fewer neighbors, which explains why they speed up instead of slowing down. This trend is in clear disagreement with the predictions of computer simulations, which proposed that molecules move more slowly near a rough wall. Contrary to what is assumed in simulations, polymer molecules do not favor sitting near a rough substrate. Because of the way these molecules like to arrange themselves in space, they prefer to move away from the asperities. The few molecules present near the asperities form fewer contacts with the wall, enjoy more free volume and, consequently, move faster.

By sharing their results with a group of theoreticians at Dartmouth College (USA) led by Jane Lipson, the ULB team was able to find a strong link between the way hills and valleys are organized on a rough surface and how molecules move. The theoreticians showed that a very small change in the free volume around a molecule induces a tremendous boost in mobility, and the predictions of their calculations are in excellent agreement with the experiments.

This paper shows that the current way we think about interfaces is not valid. The newly observed molecular trend thus has a major impact on fundamental science, and the work of the ULB team could be exploited in a large number of applications. For almost a decade, several research groups have shown that the properties of many thin coatings - such as flow, the ability to retain or repel water, and the speed at which crystals form - depend on the number of contacts between the film and its supporting substrate. Until now, modifying this number required changing the type of molecules at the interface, often involving complex chemical reactions. The new findings show that it is possible to tailor the performance of nanomaterials by simply changing the roughness of the surface. This method hence allows the polymer layer to be controlled without touching it, as if by remote control.

Credit: 
Université libre de Bruxelles