Culture

SUTD wins best paper at 35th AAAI Conference on Artificial Intelligence 2021

image: Equilibrium manifolds of learning agents in two famous games, Stag Hunt and Battle of the Sexes.

Image: 
SUTD

Game theory is known to be a useful tool in the study of Machine Learning (ML) and Artificial Intelligence (AI) Multi-Agent interactions.

One basic component of these ML and AI systems is the exploration-exploitation trade-off, a fundamental dilemma between taking a risk with new actions in the quest for more information about the environment (exploration) and repeatedly selecting actions that yield the current maximum reward (exploitation).

However, the outcome of the exploration-exploitation process is often unpredictable in practice and the reasons behind its volatile performance have been a long-standing open question in the ML and AI communities.

Dr Stefanos Leonardos and Assistant Professor Georgios Piliouras, researchers from the Singapore University of Technology and Design (SUTD), applied analytical tools from the theory of dynamical systems to the study of multi-agent systems and established a deep connection between exploration-exploitation and Catastrophe Theory (Figures 1 and 2). The latter is a branch of mathematics that formally explains phase transitions in all kinds of natural systems, ranging from the transition of water to ice, to disease outbreaks, to collapses of financial markets.

This newly established connection provides a tool to predict the consequences and improve the performance of exploration-exploitation techniques in the development of multi-agent AI systems, such as robotic space missions, healthcare management or automated financial investing algorithms.

Their work, titled 'Exploration-Exploitation in Multi-Agent Learning: Catastrophe Theory Meets Game Theory', was honored with the Best Paper Award at the 35th AAAI Conference on Artificial Intelligence 2021.

"In this work, we reasoned about the rich mathematical structure in multi-agent interactions and showed how this underlying geometry shapes the performance of AI systems. We believe our new findings will support the research community in achieving its ambitious goal to push beyond the current AI boundaries," explained first author Dr Stefanos Leonardos from SUTD.

"We are deeply honored by this recognition and are excited to continue our investigation of phase transitions and their implications to AI systems," added Assistant Professor Georgios Piliouras from SUTD.

Credit: 
Singapore University of Technology and Design

Bioaccumulation of phased-out fire retardants is slowly declining in bald eagles

image: Eight-week-old bald eagle nestling in its nest overlooking Lake Superior from Apostle Islands National Lakeshore. The study targeted 5- to 8-week-old nestlings because they can thermoregulate at that age without a parent and they are unlikely to take a risky flight from the nest when the climber approaches.

Image: 
Jim Campbell-Spickler

Research published in Environmental Toxicology and Chemistry shows that the presence of polybrominated diphenyl ethers (PBDEs) in bald eagle populations is slowly declining. Bald eagles are apex predators that nest and, more importantly, feed along water bodies, making them excellent bioindicators of environmental contaminants that bioaccumulate up the aquatic food web. The findings are both good news for eagles and instructive for regulators tasked with managing surface water quality by setting protective levels for wildlife, as well as fish consumption advisories for humans.

Lead author Bill Route, from the National Park Service Great Lakes Inventory and Monitoring Network, explained, "Bald eagles are similar to humans in that both are tertiary predators in aquatic systems. The patterns we observed in nestlings may be indicative of those in humans who consume fish from the same water bodies."

Introduced in the 1970s, PBDEs were designed as flame retardants and come in three primary formulations: penta-BDE, used primarily in furniture and cars; octa-BDE, used mainly in the electronics industry; and deca-BDE, used in a variety of appliances. Penta- and octa-BDE were phased out of use in the early 2000s and banned internationally in 2009 under the Stockholm Convention. Deca-BDEs were reduced and ultimately eliminated from manufacturing by 2013. Despite the regulations, PBDEs are still present in many products and continue to find their way into the environment. These chemical compounds are hydrophobic, which means they do not dissolve in water; instead, they stick to particles and settle at the bottom of rivers and lakes, where they are ingested by animals. Since they don't easily dissolve in water, they tend to sequester and accumulate in fatty tissue and subsequently get passed up the food chain.

Route and his co-authors assessed patterns and trends in PBDE concentrations in 492 bald eagle nestlings' blood samples from 241 territories across 12 study areas in Minnesota and Wisconsin. This work built upon previous studies in the same region, which enabled the authors to reveal trends. Perhaps unsurprisingly, the highest levels of PBDEs were found in urban areas, such as Minneapolis-St. Paul, and the lowest concentrations were in sparsely populated areas. However, overall, the research shows a sustained 3.8% annual rate of decline in the concentrations of five primary PBDEs in the samples collected across the region since PBDE production was reduced.
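To put the reported rate in perspective, a quick back-of-the-envelope calculation (illustrative only; apart from the 3.8% rate, these figures are not from the study) shows what a sustained annual decline implies when compounded over a decade:

```python
# Illustrative compounding of the reported 3.8% annual decline in PBDE
# concentrations; the 10-year horizon is an assumption for illustration.
annual_decline = 0.038
years = 10
remaining = (1 - annual_decline) ** years  # fraction of original concentration
print(f"After {years} years, about {remaining:.0%} of the original concentration remains")
```

At that pace, concentrations fall by roughly a third per decade, consistent with a slow but sustained decline rather than a rapid clearance.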

Credit: 
Society of Environmental Toxicology and Chemistry

Sharing shears: Conserved protein segment activates molecular DNA scissors for DNA repair

image: CtIP/Ctp1 stimulates endonuclease activity of the MRN complex, which not only polishes "dirty" ends of DNA breaks, but promotes accurate repair of DNA breaks through the homologous recombination mechanism.

Image: 
PNAS

Scientists at Tokyo Institute of Technology (Tokyo Tech) have uncovered mechanisms underlying the activation of the MRN complex, the cell's DNA scissors. Using purified yeast proteins, they demonstrated that phosphorylation of Ctp1, a homolog of a tumor-suppressor protein, plays a key role in activating the MRN complex's DNA clipping activity. Intriguingly, a short segment of yeast Ctp1 or its human counterpart could stimulate endonuclease activity of their respective MRN complexes, suggesting its conserved function across species.

DNA functions as a roadmap that guides the identity and functions of cells. A glitch in the DNA can have serious deleterious effects, resulting in the malfunction or loss of crucial proteins and thus affecting normal cellular function and viability. These glitches often manifest as double-stranded breaks in the DNA, which may occur spontaneously or from exposure to certain chemicals. To deal with these kinks, cells have evolved a DNA repair machinery that scans, identifies, and fixes breaks in the DNA by ligating the gaps. However, DNA breaks often have "dirty ends" that cannot be directly ligated or sealed, as they are unexposed or blocked by certain proteins or irregular chemical structures. Such DNA ends thus need to first be clipped and freed so that they can be processed further. Moreover, such end-resection of DNA break ends is a prerequisite for their accurate repair by homologous recombination. Among such molecular scissors, or nuclease enzymes, Mre11 is a key player.

Mre11 teams up with proteins Rad50 and Nbs1 to collectively form the 'MRN' complex. The interaction of this complex with the tumor-suppressor protein CtIP in humans has been shown to trigger the DNA clipping function of the complex (Figure 1). However, the mechanisms underlying this interaction have hitherto remained unexplored.

Now, Assistant Professor Hideo Tsubouchi and Professor Hiroshi Iwasaki from Tokyo Institute of Technology and their team have decoded the stepwise interaction and activation of the MRN complex using Ctp1 proteins in yeast, which are homologous to the human CtIP. Discussing their findings that have recently been published in PNAS, Iwasaki says, "The MRN complex is pivotal in the homologous recombination-mediated repair of DNA double stranded breaks. To better understand how CtIP influences the activity of the MRN complex, we purified yeast proteins and quantified their interactions."

The scientists found that phosphorylation, the addition of phosphate groups to Ctp1, was the key first step in activating the MRN complex. More specifically, phosphorylation enabled the physical interaction of Ctp1 with the Nbs1 protein of the complex, which was vital for subsequent endonuclease stimulation. The DNA clipping activity was extremely poor when the MRN complex was mixed with unphosphorylated Ctp1.

Furthermore, the scientists identified a short stretch of merely 15 amino acids at the C-terminal region of Ctp1 that was indispensable for the endonuclease activity of the Ctp1-stimulated MRN. Moreover, a synthetic peptide mimicking this region of Ctp1 or CtIP was able to activate the yeast or human MRN complex, respectively, suggesting that the function of the C-terminal Ctp1 is likely conserved across species and is the ultimate determinant in MRN activation.

Excited about the prospective application of their findings, Tsubouchi remarks, "Modification of the CT15 peptide can yield a strong activator or potential inhibitor of the MRN complex. Targeting this endonuclease activity can have potentially useful applications in homologous recombination-based gene editing."

With rapid advancements in recombinant DNA and molecular medicine, these findings could empower geneticists to unravel the mysteries of the genome and identify the hidden intricacies of genetic disorders with greater ease and effectiveness in the days to come.

Credit: 
Tokyo Institute of Technology

Star employees get most of the credit and blame while collaborating with non-stars

BINGHAMTON, NY -- Star employees often get most of the credit when things go right, but also shoulder most of the blame when things go wrong, according to new research from Binghamton University, State University of New York.

The study explored the potential risks and rewards of collaborating with stars - individuals who have a reputation for exhibiting exceptional performance - and how individual performance factors into how much credit and blame is shared with collaborators.

"Stars are human, and they fail from time to time. We wanted to shift the focus away from stars and find out what happens to the people who collaborate with them. How does working with a star hurt and help you, and how can you individually affect these outcomes?" said Scott Bentley, assistant professor of strategy at Binghamton University's School of Management.

To get to their findings, Bentley and Rebecca Kehoe, from Cornell University's School of Industrial and Labor Relations, analyzed large amounts of data on the performance of hedge funds and fund managers. They also analyzed news articles and rankings from financial media outlets to determine what happened to fund managers based on their collaboration with a star.

"If you left a firm after a successful or unsuccessful collaboration with a star, we wanted to find out how this impacted where you ended up next. Was your next job at a higher-ranked firm or a lower-ranked firm?" said Bentley.

They found that stars often took the majority of the credit when things went right, but also took most of the blame when things went wrong. This meant that non-stars weren't always penalized after a failed collaboration with a star, but also weren't always rewarded with a better position after a successful collaboration with a star.

"We found that relative to failed collaborations with non-star colleagues, failed collaborations with star employees often left non-stars better off. This is because most of the blame is attributed to the star, and non-stars get the benefit from being able to work alongside a star employee and learn from them," said Bentley.

Bentley and Kehoe also explored the degree to which individual performance affects the potential risks and rewards of working with a star. They found that those with individual successes outside of their collaborations with a star often reap even greater rewards from working with stars.

"Imagine being on a sports team and your star player has to sit out a game. If you end up performing well without them, people are going to shift the attribution for some of that success to you. If you end up playing poorly, that may confirm biases that you're simply riding on the star's coattails," said Bentley.

Bentley notes that even in failure, high-performing non-stars can still benefit from working with stars.

"You now have this status signal that comes with working with a well-known star. People may think 'well, you had an opportunity to work with a star, so you must be good at what you do, even if the collaboration didn't go well'" he said.

Bentley previously published another paper that explored different types of stars, and the unique value that each could bring to a firm.

"Stars are a really interesting area to research," he said. "There are a lot of longstanding assumptions and ideas about working with stars that we are beginning to push back against, and there is a lot to learn about how this affects organizations."

Credit: 
Binghamton University

Robots can use eye contact to draw out reluctant participants in groups

video: Researchers from KTH Royal Institute of Technology demonstrate the experimental set-up they used to observe participation in a word game led by a social robot. The results indicate that, by directing its gaze to less proficient players, a robot can elicit involvement from even the most reluctant participants in a group.

Image: 
Sarah Gillet and Ronald Cumbal

Eye contact is a key to establishing a connection, and teachers use it often to encourage participation. But can a robot do this too? Can it draw a response simply by making "eye" contact, even with people who are less inclined to speak up? A recent study suggests that it can.

Researchers at KTH Royal Institute of Technology published the results of experiments in which robots led a Swedish word game with individuals whose proficiency in the language varied. They found that by redirecting its gaze to less proficient players, a robot can elicit involvement from even the most reluctant participants.

Researchers Sarah Gillet and Ronald Cumbal say the results offer evidence that robots could play a productive role in educational settings.

Calling on someone by name isn't always the best way to elicit engagement, Gillet says. "Gaze can by nature influence very dynamically how much people are participating, especially if there is this natural tendency for imbalance - due to the differences in language proficiency," she says.

"If someone is not inclined to participate for some reason, we showed that gaze is able to overcome this difference and help everyone to participate."

Cumbal says that studies have shown that robots can support group discussion, but this is the first study to examine what happens when a robot uses gaze in a group interaction that isn’t balanced – when it is dominated by one or more individuals.

The experiment involved pairs of players - one fluent in Swedish and one learning the language. The players were instructed to give the robot clues in Swedish so that it could guess the correct term. The face of the robot was an animated projection on a specially designed plastic mask.

While it would be natural for a fluent speaker to dominate such a scenario, Cumbal says, the robot was able to prompt the participation of the less fluent player by redirecting its gaze naturally toward them and silently waiting for them to hazard an attempt.

"Robot gaze can modify group dynamics - what role people take in a situation," he says. "Our work builds on that and shows further that even when there is an imbalance in skills required for the activity, the gaze of a robot can still influence how the participants contribute."

Credit: 
KTH, Royal Institute of Technology

Venom-extraction and exotic pet trade may hasten the extinction of scorpions

video: Venom-extraction trade may hasten the extinction of scorpions as vast numbers are harvested from nature for illegal scorpion farms. Research plays an important role in conservation efforts and in stopping biodiversity loss.

Image: 
University of Turku

An article published by the researchers of the Biodiversity Unit at the University of Turku, Finland, highlights how the amateur venom-extraction business is threatening scorpion species. Sustainably produced scorpion venoms are important, for example, in the pharmacological industry. However, in recent years, there has been a dramatic increase in the number of people involved in the trade, and vast numbers of scorpions are harvested from nature. This development is endangering the future of several scorpion species in a number of areas.

Scorpions have existed on Earth for over 430 million years. Currently comprising over 2,500 extant species, scorpions occur on almost all the major landmasses in a range of habitats from deserts to tropical rainforests and caves. All scorpions are predators and use their venom to subdue and paralyse prey, as well as for defence.

Scorpion venoms are very complex, and they are used in biomedical research. Despite their reputation, most scorpion species are harmless to humans; in only approximately 50 species is the venom life-threatening. Scorpion stings cause around 200 fatalities each year.

"Interest towards scorpion venom has unfortunately led to the situation where enormous amounts of scorpions are collected from nature. For example, a claim was spread in social media in Iran that scorpion venom costs ten million dollars per litre. As the situation escalated, illegal scorpion farms were established in the country and tens of thousands of scorpions were collected into these farms. Simultaneously, businesses devoted to training people in captive husbandry and rearing, marketing, and bulk distribution of live scorpions began to flourish. As a result, many species are quickly becoming endangered," says Doctoral Candidate Alireza Zamani from the Biodiversity Unit at the University of Turku, Finland.

Biodiversity loss is accelerating at an alarming rate because of population growth and the related unsustainable overexploitation of natural resources. According to the estimate of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), as many as a million different species are in danger of becoming extinct in the coming decades if this development is not slowed down.

"It is important to understand that long before a species disappears, the number of individuals in its populations decreases and the species becomes endangered. This means that the risk of extinction has increased. With scorpions, the pressure to overharvest populations for venom extraction and the exotic pet trade especially threatens species with a small range. Scorpions also breed relatively slowly compared with several other invertebrates. In addition to the increased pressure to harvest these animals, they are also threatened by habitat destruction," notes Professor of Biodiversity Research Ilari E. Sääksjärvi from the University of Turku.

Scorpion species are poorly known - research helps in conservation efforts

Research plays a very important role in stopping biodiversity loss. Our understanding of biodiversity is still inadequate, and as much as 80 percent of the living organisms on Earth are unknown to science. Protecting biodiversity requires ever more research-based information.

"Scorpion species are still poorly known. It is vital for the protection of scorpions to produce more information about the species and bring them under conservation. At the moment, only a few scorpion species are protected. At the same time, we should ensure that local communities are sufficiently informed about scorpions and their situation. With knowledge, we can help people to understand that many species are endangered and in danger of becoming extinct due to overharvesting. It is also important to make sure that people understand that there is no market for the venom produced by amateur scorpion farms," says Zamani.

The researchers of the Biodiversity Unit at the University of Turku are specialised in mapping out species in poorly documented areas. Each year, the researchers discover and describe dozens of species new to science.

"These studies help us to better understand the biodiversity loss and its factors. Many species currently suffer from the exotic animal trade that is enormous on a global scale. Our goal is to continue to shine light also on scorpions. It is important that people understand these magnificent animals better. Their importance for humans is great as well. As species become extinct, we also lose all the possibilities that their complex venoms could offer, for example, to drug development," emphasises Professor Sääksjärvi.

Credit: 
University of Turku

Daily e-cigarette use shows 'clear benefit' in helping smokers to quit

A new study from King's College London, published Tuesday 10 March (No Smoking Day), highlights the 'clear benefit' of using e-cigarettes daily in order to quit smoking, and supports their effectiveness compared to other methods of quitting, including nicotine replacement therapy or medication.

Although the number of people in England who smoke has continued to fall in recent years, tobacco smoking is still the leading preventable cause of premature death and disease - killing nearly 75,000 people in England in 2019.

While e-cigarettes have been around for more than a decade, evidence on their effectiveness for helping people to quit smoking is still limited. Recent studies have produced inconsistent findings or failed to measure important factors such as frequency of use or the effect of different types of e-cigarette on attempts to quit.

In their Cancer Research UK-funded study, the researchers analysed data from an online survey of more than 1,155 people, which included smokers, ex-smokers who had quit within one year prior to completing the survey, and e-cigarette users.

Five waves of data were collected between 2012 and 2017. The researchers analysed the effectiveness of e-cigarettes in aiding abstinence from smoking for at least one month at follow-up, and at least one month of abstinence between the first survey and subsequent follow-up waves.

Published today in the journal Addiction, the study found that people who used a refillable e-cigarette daily to quit smoking were over five times more likely to achieve abstinence from tobacco smoking for one month, compared to those using no quitting aids at all.

Similarly, people who used a disposable or cartridge e-cigarette daily were three times more likely to quit for one month, compared to those using no help.

Daily use of e-cigarettes was also more effective for quitting than other evidence-based methods of quitting - including nicotine replacement therapy, medication such as bupropion or varenicline, or any combination of these aids. None of these methods were associated with abstinence from smoking at follow-up, compared to using no help at all. However, in a secondary analysis, prescription medicine was associated with achieving at least one month of abstinence from smoking.

Dr Máirtín McDermott, Research Fellow at King's College London's National Addiction Centre and lead author of the study, said: "Our results show that when used daily, e-cigarettes help people to quit smoking, compared to no help at all. These findings are in line with previous research, showing that e-cigarettes are a more effective aid for quitting than nicotine replacement therapy and prescribed medication.

"It's important that we routinely measure how often people use e-cigarettes, as we've seen that more sporadic use at follow up - specifically of refillable types - was not associated with abstinence."

Dr Leonie Brose, Reader at King's College London's National Addiction Centre, added: "Despite the World Health Organization's (WHO) cautious stance on e-cigarettes, studies like ours show they are still one of the most effective quitting aids available.

"The WHO is especially concerned about refillable e-cigarettes, as these could allow the user to add harmful substances or higher levels of nicotine. However, we've shown that refillable types in particular are a very effective quitting aid when used daily, and this evidence should be factored into any future guidance around their use."

Credit: 
King's College London

The important role of music in neurorehabilitation: Filling in critical gaps

image: Regions of interest (ROIs) identified around the therapist and child in A) Music-Based Intervention and B) Non-Music Control Intervention. Credit: NeuroRehabilitation.

Image: 
Faculty of Music and Faculty of Medicine, University of Toronto

Amsterdam, NL, March 10, 2021 - Music-based interventions have become a core ingredient of effective neurorehabilitation over the past 20 years thanks to a growing body of knowledge. In this theme issue of NeuroRehabilitation, experts in the field highlight some of the current critical gaps in clinical applications that have been less thoroughly investigated, such as post-stroke cognition, traumatic brain injury, and autism and specific learning disabilities.

Neurologic Music Therapy is the clinical and evidence-based use of music interventions by a credentialed professional. Research in the 1990s showed for the first time how musical-rhythmic stimuli can improve mobility in stroke and Parkinson's disease patients. We now know that music-based interventions can effectively address a wide range of impairments in sensorimotor, speech/language, and cognitive functions.

"The use of music-based interventions in neurorehabilitation was virtually unknown 25 years ago," explained Guest Editor Michael Thaut, PhD, Director, Music and Health Science Research Collaboratory, Faculty of Music and Faculty of Medicine, University of Toronto. "Since then, a growing body of research has shown how musical-rhythmic stimuli can improve mobility disorders such as stroke and Parkinson's disease, and music-based interventions have now become a core ingredient of effective neurorehabilitation. For example, Rhythmic Auditory Stimulation (RAS) has now been adopted in several official stroke care guidelines in the United States and Canada."

This collection of articles includes three studies on the use of music in traumatic brain injury rehabilitation; two studies looking at music-based interventions in children with autism and learning disabilities, respectively; the little-investigated connection between motor training and cognitive outcomes in chronic stroke rehabilitation; and a theoretical paper on mechanisms of neuroplastic changes underlying successful Neurologic Music Therapy interventions that provides a theoretical understanding of how music shapes brain function in neurorehabilitation on an impairment level. Several papers in the issue review research into the treatment system of Neurologic Music Therapy that has been endorsed by the World Federation of Neurorehabilitation as evidence-based and is practiced by certified clinicians in over 50 countries.

Lead investigator Catherine M. Haire, PhD, Faculty of Music, Music and Health Science Research Collaboratory, University of Toronto, and colleagues report on results of a randomized controlled trial of therapeutic instrumental music performance (TIMP) with and without motor imagery on chronic post-stroke cognition and affect. They found that the mental flexibility aspect of executive functioning appears to be enhanced by therapeutic instrumental music training in conjunction with motor imagery, possibly due to multisensory integration and consolidation of representations through motor imagery rehearsal following active practice. "Active training using musical instruments appears to have a positive impact on affective responding," commented Dr. Haire. "However, these changes occurred independently of improvements to cognition."

The effectiveness of music-based interventions in autism has been recognized for decades, but there has been little empirical investigation of the processes involved and how they compare to other approaches. Aparna Nadig, PhD, School of Communication Sciences and Disorders, McGill University, and colleagues found that compared to a non-music control intervention, children in music-based interventions spent more time engaged in triadic engagement (between child, therapist, and activity) and produced greater movement, depending on the type of musical instrument involved. "Taken together, these findings provide helpful initial evidence of the active ingredients of music-based interventions in autism," noted Dr. Nadig.

Looking ahead, Dr. Thaut commented that "A significant trend is the move from a therapy approach to a learning/training approach that allows the patient to become a more autonomous and independent participant in therapy. Providing patients with music-based devices for more independent and more frequent training via music technology will be an important new development. Future challenges will be to develop approaches and build technology to integrate Neurologic Music Therapy into telehealth post-COVID-19 to reach more patients in need worldwide who do not have access to widely distributed neurorehabilitation services. We are at a point where we can state clinically that the brain that engages in music is changed by engaging in music."

Credit: 
IOS Press

The quest for sustainable leather alternatives

Throughout history, leather has been a popular material for clothes and many other goods. However, the tanning process and use of livestock mean that it has a large environmental footprint, leading consumers and manufacturers alike to seek out alternatives. An article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society, details how sustainable materials are giving traditional leather a run for its money.

Traditional leather goods are known for their durability, flexibility and attractive finish, with a global market worth billions, writes Associate Editor Craig Bettenhausen. Despite leather's popularity, the modern tanning process uses potentially harmful chemicals and creates a large amount of wastewater. In addition, most hides come from the meat and dairy industries, which have sustainability problems. Leather-like materials, often called vegan leather, are gaining traction among high-end manufacturers, defying the negative perceptions of synthetic "pleather." These leather alternatives are made from an array of base materials, including plants, mushrooms and even fish skin, each with a unique take on sustainable production.

Plant-based materials are currently the most advanced leather mimics because of their straightforward manufacturing process, which combines inexpensive natural fibers and polymers that are rolled into sheets. A company based in Mexico has created a leather made from prickly pear cactus, which is ground into a powder and combined with a biobased polyurethane. Mushroom leather mimics the texture of cowhide leather very well, but production needs to scale up substantially to make an impact. Although not a vegan alternative, fish skin is poised to replace exotic leathers such as snake and alligator skin. Cell-culture leather is also in early development, which could disrupt the traditional leather market even further. Experts are confident that these materials are viable alternatives, and manufacturers plan to scale up their efforts going forward.

Credit: 
American Chemical Society

Hip fracture outcomes worse during busy periods

Hip fractures are serious, especially for the elderly. The operation can be a great strain, and 13 per cent of patients over the age of 70 do not survive 60 days after the fracture.

Their chance of survival may depend on how busy the surgeons are with other emergency procedures.

"When the operating room is busy, 20 per cent more of the patients die within 60 days after the operation," says Professor Johan Håkon Bjørngaard at the Norwegian University of Science and Technology's (NTNU) Department of Public Health and Nursing.

Surgeons can get especially busy during periods when patient demand for surgery is high. In busy periods, hip fracture patients wait on average 20 per cent longer for surgery than in quiet periods. This wait can have serious consequences.

Information from more than 60,000 hip surgeries and all simultaneous emergency surgeries provided the research group from St. Olavs Hospital and NTNU with a solid numerical basis.

"We investigated how many people over the age of 70 died during the first 60 days following a hip operation when an especially large number of emergency patients were queued up for surgery at the hospitals," say researchers Andreas Asheim and Sara Marie Nilsen from the Regional Center for Health Services Development (RSHU) at St. Olavs Hospital.

During busy periods, 40 per cent of the patients waiting in the operating wards are typically people who have recently been brought in for emergency surgery. In the quietest periods, the percentage can drop to 25 per cent.

Older people often have to undergo surgery for hip fractures. The average age of hip surgery patients in the study was 85, with women making up 72 per cent. The median wait time before being operated on was about 20 hours.

Older people naturally have a greater risk of dying than the general population does. Age is one reason why mortality is high following surgery.

The capacity of the hospitals also plays a major role.

Previous results show that patients who are operated on for hip fractures have a higher risk of dying if they are discharged from the hospital early due to lack of space.

Hip fracture patients may need to be prioritized in the queue to increase their chances of survival.

Prioritizing patients "is part of a discussion about organizing emergency surgery. This could mean that we need to prioritize hip fracture patients for surgery more than is currently being done," says orthopaedist Lars Gunnar Johnsen.

Credit: 
Norwegian University of Science and Technology

Story tips from Johns Hopkins experts on Covid-19

Study of Orthodox Jews May Help Guide COVID-19 Prevention in Culturally Bonded Groups
Media Contact: Michael E. Newman, mnewma25@jhmi.edu

The holiday of Purim is a festival of life, recalling how the Jewish people escaped the genocidal plot of an evil minister under an ancient Persian king. In 2021, Purim again marked the saving of Jewish lives, but this time from a different enemy: SARS-CoV-2, the virus that causes COVID-19. Leaders of the U.S. Orthodox Jewish community -- a group devastated by COVID-19 infections and deaths following Purim social gatherings in March 2020 before preventive measures such as masking and physical distancing became commonplace -- were able before this year's holiday to promote scientifically based safety guidelines for COVID-19-free celebrations. This was possible partly because of findings from a Johns Hopkins Medicine-led study evaluating just over 9,500 Orthodox Jews in 12 states that helped define the epidemiology of the Purim 2020 COVID-19 outbreak.

The study appears online March 10 in JAMA Network Open.

"Because Purim in 2020 caused hundreds of Orthodox Jews to become ill or hospitalized with COVID-19 in the earliest stages of the pandemic, we realized that these patients -- who were convalescing when others were just coming in contact with SARS-CoV-2 for the first time -- were an important population to study to better understand why and how the virus spreads through a culturally bonded community," says study co-senior author Avi Rosenberg, M.D., Ph.D., assistant professor of pathology at the Johns Hopkins University School of Medicine.

"We felt with that insight, health care practitioners could develop strategies based on scientific evidence to limit the spread of COVID-19 while still enabling important religious and other cultural practices to go on," he explains.

Rosenberg and his collaborators created the Multi-Institutional sTudy analyZing anti-coV-2 Antibodies Cohort, or MITZVA Cohort (the acronym is taken from the Hebrew word for "commandment" and often refers to a "good deed"), to explore the epidemiology of the Purim 2020 COVID-19 spread within the large Orthodox Jewish communities of Brooklyn, New York; Lakewood, New Jersey; Los Angeles, California; Nassau and Sullivan counties, New York; New Haven, Connecticut; and Detroit, Michigan. Also included were Orthodox Jews who resided in Colorado, Florida, Maryland, North Carolina, Ohio, Pennsylvania and Washington State.

Study participants were first asked to complete a survey to define their demographic characteristics; whether they had any symptoms of SARS-CoV-2 infection before, during or shortly after the 2020 Purim holiday; the onset of any symptoms experienced; and if they had already tested positive for the virus. Out of 12,626 people given the questionnaire, 9,507 completed it and were invited to undergo SARS-CoV-2 antibody testing in the second stage of the study. Of those participants, 6,665 (70.1%) were screened for immunoglobulin G (IgG) antibodies to the nucleocapsid (outer covering) protein of SARS-CoV-2 between May 14 and 30, 2020.

The survey results defined the date range for possible COVID-19 symptom onset as from Dec. 1, 2019, to May 26, 2020. More than three-quarters, 77%, of the respondents reported their first symptoms between March 9 and April 1, with another 15% stating theirs began after April 1 -- indicating that they were likely exposed just before or during the Purim season.

Rosenberg says the Purim link to the outbreak is further supported by the fact that the median (the midpoint date when dates were listed from earliest to latest) and mode (the date that occurred most often) for symptom onsets for study participants in all the states fell within the same period, March 17-21, 2020 (with Purim occurring March 10 and 11).
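The median and mode summaries described above can be computed directly on calendar dates. A minimal sketch using Python's standard library, with hypothetical onset dates (not the study's data):

```python
from datetime import date
from statistics import median_low, mode

# Hypothetical symptom-onset dates for illustration only -- not the study's data.
onsets = [date(2020, 3, d) for d in (15, 17, 18, 18, 19, 20, 21, 24)]

# median_low returns the midpoint date when the dates are sorted from earliest
# to latest (the lower of the two middle values for an even-length list);
# it avoids averaging, which calendar dates do not support.
median_onset = median_low(onsets)

# mode returns the single most frequently reported onset date.
mode_onset = mode(onsets)

print("median onset:", median_onset)
print("mode onset:", mode_onset)
```

Both summaries land on the same date here, mirroring the study's observation that median and mode onsets clustered in the same narrow window.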

Among the study participants who tested positive for SARS-CoV-2 IgG antibodies, Rosenberg says that most (between 82% and 94% in the primary five communities examined) reported the onset of COVID-19 symptoms between March 9 and March 31, 2020.

The seroprevalence rates -- the percentage of people in a population with antibodies against, and indicating infection with, SARS-CoV-2 -- were consistently higher in the Orthodox Jewish communities than those in neighboring areas during the study time period. This is consistent, Rosenberg says, with the culturally bonded nature of these communities within a neighborhood or city, and even across state lines.

"Based on these findings from a large study population within culturally bonded communities, we identified parallel SARS-CoV-2 outbreaks occurring in multiple areas around the Jewish festival of Purim," Rosenberg says. "The risk to these communities was amplified by the fact that these outbreaks occurred in the early days of the pandemic prior to widespread adoption of mask-wearing and physical distancing procedures."

Rosenberg says that once COVID-19 prevention measures were established and promoted by public health authorities, local and national Orthodox Jewish leaders put forth mandates for their communities to comply, and developed culturally sensitive policies to address how to safely engage in prayer services, family and communal gatherings and social support systems.

"This shows that preventing the spread of COVID-19 does not have to mean giving up or limiting religious and cultural practices that are vital to the lives of so many," Rosenberg says. "We believe that our study of the Purim 2020 outbreak, and the positive actions taken in part because of those findings, can provide guidance for safely celebrating many other religious and secular holidays in the United States, including Chinese New Year, Ramadan and Christmas."

Rosenberg is available for interviews.

Dynamic Tool Accurately Predicts Risk of COVID-19 Progressing to Severe Disease or Death
Media Contact: Michael E. Newman, mnewma25@jhmi.edu

Clinicians often learn how to recognize patterns in COVID-19 cases after they treat many patients with it. Machine-learning systems promise to enhance that ability, recognizing more complex patterns in large numbers of people with COVID-19 and using that insight to predict the course of an individual patient's case. However, physicians sworn to "do no harm" may be reluctant to base treatment and care strategies for their most seriously ill patients on difficult-to-use or hard-to-interpret machine-learning algorithms.

Now, Johns Hopkins Medicine researchers have developed an advanced machine-learning system that can accurately predict how a patient's bout with COVID-19 will go, and relay its findings back to the clinician in an easily understandable form. The new prognostic tool, known as the Severe COVID-19 Adaptive Risk Predictor (SCARP), can help define the one-day and seven-day risk of a patient hospitalized with COVID-19 developing a more severe form of the disease or dying from it.

SCARP asks for a minimal amount of input to give an accurate prediction, making it fast, simple to use and reliable for basing treatment and care decisions. The new tool is described in a paper first posted online March 2 in the Annals of Internal Medicine.

"SCARP was designed to provide clinicians with a predictive tool that is interactive and adaptive, enabling real-time clinical variables to be entered at a patient's bedside," says Matthew Robinson, M.D., assistant professor of medicine at the Johns Hopkins University School of Medicine and senior author of the paper. "By yielding a personalized clinical prediction of developing severe disease or death in the next day and week, and at any point in the first two weeks of hospitalization, SCARP will enable a medical team to make more informed decisions about how best to treat each patient with COVID-19."

At the heart of SCARP is a predictive algorithm called Random Forests for Survival, Longitudinal and Multivariate Data (RF-SLAM), described in a 2019 paper by its creators, Johns Hopkins Medicine researchers Shannon Wongvibulsin, an M.D./Ph.D. student; Katherine Wu, M.D.; and Scott Zeger, Ph.D.

Unlike past clinical prediction methods that base a patient's risk score on their condition at the time they enter the hospital, RF-SLAM adapts to the latest available patient information and considers the changes in those measurements over time. To make this dynamic analysis possible, RF-SLAM divides a patient's hospital stay into six-hour windows. Data collected during those time spans are then evaluated by the algorithm's "random forests" of approximately 1,000 "decision trees" that operate as an ensemble. This enables SCARP to give a more accurate prediction of an outcome than each individual decision tree could do on its own.

"The same way that individual stocks and bonds perform better as a portfolio -- with the overall value staying strong as individual items balance each other's rises and falls in price -- the trees as a group create a flexible and adaptable forest in which the trees protect each other from individual errors," says Robinson. "So, even if some trees predict incorrectly, many others will get it right and move the group in the correct direction."
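The portfolio analogy can be made concrete with a toy simulation of majority voting. This is an illustration of the general ensemble idea, not the RF-SLAM implementation; the tree count echoes the roughly 1,000 trees mentioned above, and the per-tree accuracy is an assumed value:

```python
import random

random.seed(0)

TRUE_OUTCOME = 1      # hypothetical ground truth for one patient time window
N_TREES = 1000        # ensemble size, echoing the ~1,000 trees described above
TREE_ACCURACY = 0.60  # assumed: each tree alone is only modestly accurate

def tree_prediction():
    """One weak learner: correct with probability TREE_ACCURACY."""
    if random.random() < TREE_ACCURACY:
        return TRUE_OUTCOME
    return 1 - TRUE_OUTCOME

# Each tree casts a vote; the forest's prediction is the majority.
votes = [tree_prediction() for _ in range(N_TREES)]
majority = 1 if sum(votes) > N_TREES / 2 else 0

print("trees voting correctly:", sum(v == TRUE_OUTCOME for v in votes))
print("ensemble prediction:", majority)
```

Even though each simulated tree is wrong 40% of the time, the majority vote of 1,000 trees is almost always correct, which is the effect the quote describes.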

Robinson says that most machine-learning systems used to make clinical prediction can only consider static data at a single point in time. "RF-SLAM enables us to be nimble and predict the future at any point," he explains.

To demonstrate SCARP's ability to predict severe COVID-19 cases or deaths from the disease, Robinson and his colleagues used a clinical registry with data about patients hospitalized with COVID-19 between March and December 2020, at five centers within the Johns Hopkins Health System. The patient information available included demographics, other medical conditions and behavioral risk factors, along with more than 100 variables tracked over time, such as vital signs, blood counts, metabolic profiles, respiratory rates and the amount of supplemental oxygen needed.

Among 3,163 patients admitted with moderate COVID-19 during this time, 228 (7%) became severely ill or died within 24 hours; an additional 355 (11%) became severely ill or died within the first week. Data also were collected on the numbers who developed severe COVID-19 or died on any day within the 14 days following admission.

Overall, SCARP's one-day risk predictions for progression to severe COVID-19 or death were 89% accurate, while the seven-day risk predictions for both outcomes were 83% accurate.

Robinson says that further SCARP trials are planned to validate its performance on a large scale using national patient databases. Based on the results of the first study, Johns Hopkins Medicine has already incorporated a version of SCARP into the electronic medical record system at all five of its hospitals in the Maryland and Washington, D.C., area.

"Our successful demonstration shows that SCARP has the potential to be an easy-to-use, highly accurate and clinically meaningful risk calculator for patients hospitalized with COVID-19," says Robinson. "Having a solid grasp of a patient's real-time risk of progressing to severe disease or death within the next 24 hours and next week could help health care providers make more informed choices and treatment decisions for their patients with COVID-19 as they get sicker."

Robinson is available for interviews.

Credit: 
Johns Hopkins Medicine

Deciphering the impacts of small RNA interactions in individual bacterial cells

image: The panels show SgrS (red) and ptsG mRNA (green) labeled by Single-molecule Fluorescence in situ Hybridization for the wild-type strain before and after 20 min of αMG (non-metabolizable sugar analog) induction.

Image: 
Anustup Poddar

Bacteria employ many different strategies to regulate gene expression in response to fluctuating, often stressful, conditions in their environments. One type of regulation involves non-coding RNA molecules called small RNAs (sRNAs), which are found in all domains of life. A new study led by researchers at the University of Illinois describes, for the first time, the impacts of sRNA interactions in individual bacterial cells. Their findings are reported in the journal Nature Communications, with the paper selected as an Editors' highlight article.

Bacterial sRNAs are often involved in regulating stress responses using mechanisms that involve base-pairing interactions with a target mRNA and enhancing or repressing its stability or the amount of protein being made from the mRNA. Hfq, a hexameric RNA chaperone protein, facilitates binding between the RNAs and promotes stability of the sRNA. Although the kinetics of sRNA-mRNA interactions have been studied in vitro, the mutational impacts on base-pairing interactions in vivo remain largely unknown.

"We wanted to understand how individual base-pair interactions between the small RNA and one of its mRNA targets contributed to the overall regulatory outcome and thereby, the amount of protein being produced from the mRNA in these conditions," said Illinois professor of microbiology Cari Vanderpool, also a faculty member in the Carl R. Woese Institute for Genomic Biology (IGB), who co-led the study. "We took an approach that allowed us to visualize and count individual small RNA and mRNA molecules inside bacterial cells, giving us insight into what's happening at the molecular level."

The research team collaborated with professor of biophysics Taekjip Ha (Johns Hopkins University) and Illinois professor of chemistry and IGB faculty member Zaida Luthey-Schulten. The researchers used mathematical modeling and quantitative super-resolution imaging to examine how changing individual base-pair interactions affects kinetic parameters of regulation, such as the time needed for an sRNA to find an mRNA target.

The study focused on the sRNA SgrS, which is made in bacterial cells when sugar transport exceeds what the cell can handle through metabolism. Previous work by the Vanderpool group demonstrated that under sugar stress conditions, SgrS binds to its primary target mRNA, ptsG, which encodes part of the glucose transport machinery. Together with Hfq, SgrS inhibits translation of ptsG and thereby prevents production of new glucose transporters to slow down transport until metabolism can catch up.

To identify the key regions of SgrS important for ptsG regulation, researchers used high-throughput sequencing to analyze thousands of mutants in parallel and find those with the strongest effects on the regulatory interactions.

"What we found was that individual base-pair interactions had some effect on the regulation, but they were relatively minor effects," said Vanderpool. "The much bigger effects we saw were when we saw mutations in the sRNA that disrupted its ability to interact with Hfq. If the sRNA couldn't effectively bind to the chaperone, then it was much slower in finding its mRNA target and once found, it came unbound much more quickly."

"Our high-throughput sequencing and quantitative super-resolution imaging platform was able to measure modest differences in rates of association and dissociation arising because of single base-pair mismatches between SgrS and ptsG mRNA," said Anustup Poddar, postdoctoral researcher and first author of the paper. "The fact that these values are much smaller than the thermodynamically predicted values tells us that there is a lot more to learn about the role of chaperone proteins in base-pairing mediated target search."

Even with disrupted base-pairing, SgrS regulation of ptsG was still observed as long as Hfq was around. The study clearly demonstrated Hfq's important role in promoting and stabilizing the interactions between SgrS and ptsG. These findings were surprising since the biggest impacts were thought to come from disruption of the base-pairing interactions, said Vanderpool.

"I learned a different way of thinking from the collaborators and realized how important and powerful mathematical modeling could be," said Muhammad Azam, former graduate student who worked on the study.

The Vanderpool group has an ongoing collaboration with professor of biochemistry and molecular biology Jingyi Fei (University of Chicago) to investigate the kinetic parameters of other SgrS targets, with the overall goal of understanding the hierarchy of regulation.

"We had the questions that we wanted to ask but didn't have the high-resolution methods to look at this in a quantitative way," said Vanderpool. "The best kind of collaboration is when each person brings some kind of expertise and different ways of looking at a problem to the table that makes the outcome exciting and fun."

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

Neurological complications of COVID-19 in children: rare, but patterns emerge

Neurological complications of COVID-19 are rare in children, in contrast to adults. Still, an international expert review of positive neuroimaging findings in children with acute and post-infectious COVID-19 found that the most common abnormalities resembled immune-mediated patterns of disease involving the brain, spine, and nerves. Strokes, which are more commonly reported in adults with COVID-19, were encountered much less frequently in children. The study of 38 children, published in The Lancet, was the largest to date of central nervous system imaging manifestations of COVID-19 in children.

"Thanks to a major international collaboration, we found that neuroimaging manifestations of COVID-19 infection in children could range from mild to severe and that pre-existing conditions were usually absent," says co-senior author Susan Palasis, MD, Division Head of Neuroradiology at Ann & Robert H. Lurie Children's Hospital of Chicago and Associate Professor of Radiology at Northwestern University Feinberg School of Medicine. "Attention to the neurological effects of COVID-19 and recognition of neuroimaging manifestations that can be encountered in children can facilitate correct and timely diagnosis, mitigate the spread of disease, and prevent significant morbidity and mortality."

In order to understand the neuroimaging findings encountered in clinical context, Dr. Palasis and colleagues divided the cases into four categories of disease based on the children's symptoms and laboratory findings. In this way they were able to evaluate a large number of cases simultaneously and identify recurring neuroimaging patterns in the acute and post-infectious phases of disease.

Abnormal enhancement of spinal nerve roots on MRI was observed in many of the cases. This neuroimaging finding is typically seen with Guillain-Barré Syndrome (GBS), a post-infectious autoimmune disease. The study demonstrated that GBS associated with COVID-19 can present as an acute para-infectious process rather than the typical post-infectious neuronal injury.

Another significant observation was that cranial nerve enhancement was also frequently present.

"We noted that abnormal nerve enhancement did not always correlate with corresponding nerve symptoms," says Dr. Palasis. "This indicates that neuroradiologists must perform targeted searches for unsuspected abnormalities, as they could be the clue that COVID-19 is the underlying cause of disease."

Other frequent findings were areas of abnormality on MRI within a specific region of the brain, the splenium of the corpus callosum, as well as muscle inflammation. These were more often identified with multisystem inflammatory syndrome in children (MIS-C), a serious complication of COVID-19.

Myelitis, an infectious or post-infectious demyelinating condition of the spinal cord, was also a frequent pattern of disease. Most cases fell into the spectrum of a post-infectious process and the children were either normal on follow-up or had mild residual symptoms. One child developed a severe myelitis and ultimately became quadriplegic.

"Our observations indicate that while most children with COVID-19 related nervous system disease do well, some can be severely affected," says Dr. Palasis. "We encountered four cases of atypical infections of the central nervous system in previously healthy children diagnosed with acute COVID-19, which were uniformly fatal. The results of our study emphasize the importance of being cognizant of the atypical and less-prevalent sequelae of COVID-19 neurologic disease in children with recent or concurrent COVID-19 infections. Neuroimaging patterns we identified in our study should prompt investigation of possible COVID-19 as the underlying etiological factor for disease."

Credit: 
Ann & Robert H. Lurie Children's Hospital of Chicago

Large computer language models carry environmental, social risks

Computer engineers at the world's largest companies and universities are using machines to scan through tomes of written material. The goal? Teach these machines the gift of language. Do that, some even claim, and computers will be able to mimic the human brain.

But this impressive compute capability comes with real costs, including perpetuating racism and causing significant environmental damage, according to a new paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" The paper is being presented Wednesday, March 10 at the ACM Conference on Fairness, Accountability and Transparency (ACM FAccT).

This is the first exhaustive review of the literature surrounding the risks that come with rapid growth of language-learning technologies, said Emily M. Bender, a University of Washington professor of linguistics and a lead author of the paper along with Timnit Gebru, a well-known AI researcher.

"The question we're asking is what are the possible dangers of this approach and the answers that we're giving involve surveying literature across a broad range of fields and pulling them together," said Bender, who is the UW Howard and Frances Nostrand Endowed Professor.

The researchers found that the ever-growing computing power put into natural language models has real downsides. They discuss how the ever-increasing size of training data for language modeling exacerbates social and environmental issues. Alarmingly, such language models perpetuate hegemonic language and can deceive people into thinking they are having a "real" conversation with a person rather than a machine. The increased computational needs of these models further contribute to environmental degradation.

The authors were motivated to write the paper because of a trend within the field towards ever-larger language models and their growing spheres of influence.

The paper has already generated widespread attention due, in part, to the fact that two of its co-authors say they were recently fired from Google for reasons that remain unsettled. Margaret Mitchell and Gebru, the two now-former Google researchers, said they stand by the paper's scholarship and point to its conclusions as a clarion call to industry to take heed.

"It's very clear that putting in the concerns has to happen right now, because it's already becoming too late," said Mitchell, a researcher in AI.

It takes an enormous amount of computing power to fuel the model language programs, Bender said. That takes up energy at tremendous scale, and that, the authors argue, causes environmental degradation. And those costs aren't borne by the computer engineers, but rather by marginalized people who cannot afford the environmental costs.

"It's not just that there's big energy impacts here, but also that the carbon impacts of that will bring costs first to people who are not benefiting from this technology," Bender said. "When we do the cost-benefit analysis, it's important to think of who's getting the benefit and who's paying the cost because they're not the same people."

The large scale of this compute power also can restrict access to only the most well-resourced companies and research groups, leaving out smaller developers outside of the U.S., Canada, Europe and China. That's because it takes huge machines to run the software necessary to make computers mimic human thought and speech.

Another risk comes from the training data itself, the authors say. Because the computers read language from the Web and from other sources, they can pick up and perpetuate racist, sexist, ableist, extremist and other harmful ideologies.

"One of the fallacies that people fall into is well, the internet is big, the internet is everything. If I just scrape the whole internet then clearly I've incorporated diverse viewpoints," Bender said. "But when we did a step-by-step review of the literature, it says that's not the case right now because not everybody's on the internet, and of the people who are on the internet, not everybody is socially comfortable participating in the same way."

And, people can confuse the language models for real human interaction, believing that they're actually talking with a person or reading something that a person has spoken or written, when, in fact, the language comes from a machine. Thus, the stochastic parrots.

"It produces this seemingly coherent text, but it has no communicative intent. It has no idea what it's saying. There's no there there," Bender said.

Credit: 
University of Washington

New study: Healthcare settings do not increase risk for COVID-19 infection spread

Healthcare personnel who were infected with COVID-19 had stronger risk factors outside the workplace than in their hospital or healthcare setting. That is the finding of a new study published today in JAMA Network Open conducted by University of Maryland School of Medicine (UMSOM) researchers, colleagues at the Centers for Disease Control and Prevention (CDC) and three other universities.

The study examined survey data from nearly 25,000 healthcare providers in Baltimore, Atlanta, and Chicago, including at University of Maryland Medical System (UMMS) hospitals. The researchers found that having a known exposure to someone who tested positive for COVID-19 in the community was the strongest risk factor for testing positive for COVID-19. Living in a zip code with a high COVID-19 cumulative incidence was also a strong risk factor.

"The news is reassuring in that it shows the measures taken are working to prevent infections from spreading in healthcare facilities," said study co-author Anthony Harris, MD, MPH, Professor of Epidemiology & Public Health at UMSOM. "Vaccination for healthcare workers, however, should remain a priority because of continual exposures in the workplace. There is also an urgent need to keep healthcare providers healthy so they can care for sick patients and reduce the risk of transmitting the virus to vulnerable patients."

Researchers from Emory University School of Medicine and Rollins School of Public Health in Atlanta, Rush University Medical Center in Chicago, and Johns Hopkins University School of Medicine also participated in this study. UMSOM faculty Robert Christenson, PhD, Brent King, MD, Surbhi Leekha, MBBS, Lyndsay O'Hara, PhD, Peter Rock, MD, MBA, and Gregory Schrank, MD, were co-authors on this study. The study was funded by CDC's Prevention Epicenters Program.

"Factors presumed to contribute most to infection risk among healthcare providers, including caring for COVID-19 patients, were not associated with increased risk in this study," said study co-author Sujan Reddy MD, an infectious disease specialist at the CDC. "The highest risks to healthcare workers may be from exposures in the community."

The study did, however, have some important caveats. Since infection control practices were not standardized across the various healthcare sites, the study could not determine the level of effectiveness of personal protective equipment (N95 respirator, surgical mask, gowns, face shields). Nor could the study determine whether certain infection control practices, such as frequent disinfection of surfaces in exam rooms, were better than others in preventing infection spread.

Confirming evidence from other studies, this study found that Black Americans who were healthcare personnel were more likely to test positive for COVID-19 infections than their white counterparts. This may be due to existing disparities in community exposure rather than from healthcare-associated exposures.

"We're proud of this very important collaborative clinical work with our research colleagues," said Mohan Suntha, MD, MBA, President and CEO of UMMS. "We have made the safety of our team members a top priority throughout this pandemic, and it is incredibly gratifying to see that our efforts to prevent the spread of COVID-19 in hospitals have worked. This is also another example of the importance of the partnership between our academic-focused health care System and the groundbreaking discovery-based medicine work happening every day at the UM School of Medicine."

"As front-line and support staff at hospitals and health systems continue to tirelessly battle COVID-19, they can draw reassurance in this important research finding that the infection control measures in place protected themselves and their families," said E. Albert Reece, MD, PhD, MBA, Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor and Dean, University of Maryland School of Medicine. "We need to know that we are doing all we can to protect our healthcare heroes, from providing them with adequate protective gear to giving them early access to vaccines."

Credit: 
University of Maryland School of Medicine