
Army researchers create pioneering approach to real-time conversational AI

image: Army researchers create a novel approach that allows autonomous systems to flexibly interpret and respond to Soldier intent.

Image: 
(1st Lt. Angelo Mejia)

ADELPHI, Md. -- Spoken dialogue is the most natural way for people to interact with complex autonomous agents such as robots. Future Army operational environments will require technology that allows artificially intelligent agents to understand and carry out commands from Soldiers and interact with them as teammates.

Researchers from the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory and the University of Southern California's Institute for Creative Technologies, a Department of Defense-sponsored University Affiliated Research Center, created an approach to flexibly interpret and respond to Soldier intent derived from spoken dialogue with autonomous systems.

This technology is currently the primary component for dialogue processing for the lab's Joint Understanding and Dialogue Interface, or JUDI, system, a prototype that enables bi-directional conversational interactions between Soldiers and autonomous systems.

"We employed a statistical classification technique for enabling conversational AI using state-of-the-art natural language understanding and dialogue management technologies," said Army researcher Dr. Felix Gervits. "The statistical language classifier enables autonomous systems to interpret the intent of a Soldier by recognizing the purpose of the communication and performing actions to realize the underlying intent."

For example, he said, if a robot receives a command to "turn 45 degrees and send a picture," it could interpret the instruction and carry out the task.

To achieve this, the researchers trained their classifier on a labeled data set of human-robot dialogue generated during a collaborative search-and-rescue task. The classifier learned a mapping of verbal commands to responses and actions, allowing it to apply this knowledge to new commands and respond appropriately.
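The article doesn't give implementation details, but the core idea of a statistical intent classifier - map the words of an utterance to the most similar trained intent - can be sketched in a few lines of plain Python. The utterances, intent labels and similarity measure below are all illustrative, not the researchers' actual data or model:

```python
import math
from collections import Counter

# Hypothetical training pairs: utterances labeled with the intent they express.
TRAIN = [
    ("turn left 45 degrees", "rotate"),
    ("rotate right ninety degrees", "rotate"),
    ("send a picture", "send-image"),
    ("take a photo and send it", "send-image"),
    ("move forward three meters", "drive"),
    ("drive to the doorway", "drive"),
]

def bow(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One centroid (summed bag of words) per intent label.
centroids = {}
for text, label in TRAIN:
    centroids.setdefault(label, Counter()).update(bow(text))

def classify(utterance):
    """Return the intent whose centroid is most similar to the utterance."""
    vec = bow(utterance)
    return max(centroids, key=lambda lab: cosine(vec, centroids[lab]))

print(classify("turn 45 degrees"))    # -> rotate
print(classify("send me a picture"))  # -> send-image
```

A deployed system would use far richer features and a confidence threshold for deciding when to ask clarifying questions, but the map-command-to-nearest-trained-intent principle is the same.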

Researchers developed algorithms to incorporate the classifier into a dialogue management system that included techniques for determining when to ask for help given incomplete information, Gervits said.

In terms of Army impact, the researchers said this technology can be applied to combat vehicles and autonomous systems to enable advanced real-time conversational capability for Soldier-agent teaming.

"By creating a natural speech interface to these complex autonomous systems, researchers can support hands-free operation to improve situational awareness and give our Soldiers the decisive edge," Gervits said.

According to Gervits, this research is significant and unique in that it enables back-and-forth dialogue between Soldiers and autonomous systems.

"Interacting with such conversational agents requires limited to no training for Soldiers since speech is a natural and intuitive interface for humans and there is no requirement to change what they could say," Gervits said. "A key benefit is that the system also excels at handling noisy speech, which includes pauses, fillers and disfluencies - all features that one would expect in a normal conversation with humans."

Since the classifier is trained ahead of time, the system can operate in real-time with no processing delay in the conversation, he said.

"This supports increased naturalness and flexibility in Soldier-agent dialogue, and can improve the effectiveness of these kinds of mixed-agent teams," Gervits said.

Compared to commercial deep-learning approaches, which require large, expensive data sets for training, this approach requires orders of magnitude fewer training examples, he said. It also has the advantage of reducing deployment time and providing a cold-start capability for new environments.

Another difference is that commercial dialogue systems are typically trained in non-military domains, while his focus is on a search-and-rescue task specifically designed to mimic the style of Soldier-robot interaction that could occur in a future tactical environment.

Finally, the classification approach allows for better transparency and explainability of system performance, making it possible to analyze why the system produced a certain behavior. This is critical for military applications, where ethical concerns demand greater transparency of autonomous systems, Gervits said.

The research was performed primarily a few years ago when Gervits was an intern at ICT. The subsequent manuscript was accepted to the International Workshop on Spoken Dialogue Systems in 2019 and presented at the conference. It was published in the conference proceedings in 2021.

Dr. David Traum, from the Natural Language Dialogue group at ICT, led the dialogue research, which included the statistical classifier. Dr. Matthew Marge from ARL led the Bot Language project, a collaborative effort between ARL at the Adelphi Laboratory Center, ARL West and ICT.

The next steps for this research are threefold:

-Improve system performance by supplementing the classifier with additional linguistic representations.

-Extend the approach to enable learning of new training examples through real-time dialogue. An example of this is a robot encountering something new in the environment and asking a Soldier what it is.

-Integrate additional interaction modalities, such as gaze and gesture, in addition to speech, for more robust interaction in physical environments.

"With the tactical environment of the future likely to involve mixed Soldier-agent teams, I am optimistic that this technology will have a transformative effect on the future of the Army," Gervits said. "It is highly rewarding for me as a researcher to see such a tangible outcome for my efforts."

Credit: 
U.S. Army Research Laboratory

Ice cap study promises new prospects for accurate local climate projections

image: The antennas of the radar are situated at the rear end of the sleigh. They send pulses into the ice, which are reflected by different properties within the ice and returned to the antennas. The data can be plotted as a radio-echogram, from which the thickness of the ice, the bedrock beneath it and the different layers within the ice sheet can be read. Radar measurements from flyovers work in much the same way as the sleigh radar. Each radar can be adapted to focus on different properties, such as the transition from ice to bedrock, ice layers, melt layers, etc.

Image: 
Christian Panton

A new, detailed study of the Renland Ice Cap offers the possibility of modelling other smaller ice caps and glaciers with significantly greater accuracy than previously possible. The study combined airborne radar data on the thickness of the ice cap with on-site thickness measurements and satellite data. Researchers from the Niels Bohr Institute - University of Copenhagen gathered the data from the ice cap in 2015, and this work has now come to fruition in the form of more exact predictions of local climate conditions.

The accuracy of the study allows for the construction of models for other smaller ice caps and glaciers, affording significantly improved projections of the condition of glaciers locally, around the globe. The results have recently been published in the Journal of Glaciology.

A combination of approaches results in greater accuracy

The initial, principal aim of the study was to assess the thickness and volume of the Renland Ice Cap and, in the process, validate computer-modelled data against real data. Airborne radar measurements of the ice thickness were compared with measurements that were known in advance. In addition, the researchers made use of satellite-based measurements of the ice velocity at the surface of the ice cap, again juxtaposed with various parameters entered into the computer model, e.g. "basal slide" - in other words, the velocity of movement at the bottom of the ice cap. The combined results provided the researchers with extremely detailed base material for constructing a computer model that can be applied in other situations.
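The radar thickness measurement itself reduces to a simple travel-time calculation: the pulse travels down to the bedrock and back at the speed of light slowed by the permittivity of ice. A minimal sketch (the permittivity value is a textbook figure for glacial ice, not a number from the study):

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
EPS_ICE = 3.17      # typical relative permittivity of glacial ice

def ice_thickness(two_way_time_s, eps_r=EPS_ICE):
    """Depth of a reflector from the two-way radar travel time.

    The pulse goes down and back, so divide the path by 2; inside ice it
    propagates at c / sqrt(eps_r), roughly 1.68e8 m/s.
    """
    v = C / eps_r ** 0.5
    return v * two_way_time_s / 2.0

# A bedrock echo arriving 4 microseconds after the pulse corresponds
# to roughly 337 m of ice.
print(round(ice_thickness(4e-6)), "m")
```

Real processing must also pick the echo out of noise and correct for the radar geometry, but this conversion is what turns a radio-echogram's travel times into the thickness profile described above.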

From Renland to the rest of the world

Iben Koldtoft, PhD student at the Physics of Ice, Climate and Earth section at the Niels Bohr Institute, and first author of the scientific article, explains: "We now have the most optimal parameters for this ice flow model, the Parallel Ice Sheet Model, for the Renland Ice Cap. But despite these being specific local measurements for Renland, we can use these modelling parameters to simulate the ice cap over an entire ice age cycle, for example, and compare the results with the Renland ice core we drilled in 2015. We can examine to what extent the ice cap has changed over time, or how quickly the ice will melt if the temperature rises by a few degrees in the future. Or put more concisely: We now know how the model can be "tuned" to match different climate scenarios. This ensures greater accuracy and a method that is also transferable to other smaller ice caps and glaciers".

"In fact, we can see that our scientific article initially received many views from Japan and Argentina. At first this was a bit surprising - why there, exactly? But it makes absolute sense: these are countries with smaller local ice sheets and glaciers, and they are now excited to be able to project the future evolution of these", comments Iben Koldtoft.

Smaller scale provides greater visibility

The larger ice sheets in Greenland and Antarctica are of course the most important when assessing temperature changes and the effects of melting on the global climate. However, the smaller ice caps react faster and can be considered "mini-environments", where it is possible to follow developments across a shorter timescale. In addition, it is easier to model the smaller scenarios more precisely, points out Iben Koldtoft.

"If we look at Svalbard, an archipelago that lies very far north, they experience climate change as having a far greater local effect than one sees in Greenland, for example. Over time, of course, all these changes will eventually affect the entire climate system, but we can observe it more clearly on a smaller scale".

The Renland ice core reveals more secrets

In 2015 a core was drilled on the Renland Ice Cap. In the intervening years, scientists have extracted data from the recovered ice core in the form of water isotopes, gases and chemical measurements. These are all proxies for temperature, precipitation accumulation, altitude changes and other climate conditions of east Greenland, where the Renland Ice Cap is located. This data can now be compared with the detailed study and with data from other locations in Greenland. As a result, the study contributes to the increasingly detailed picture of how the climate is changing. Iben Koldtoft emphasises the importance of combining the observational data with computer modelling, and that climate research in general is at a stage where the use of advanced computer simulations and the ability to "tune" them correctly, is now a vital competence. Although glaciers across the globe can be monitored with incredible accuracy by satellites today, there is a need to develop strong computer-based models, combining physics and mathematics, in order to calculate how glaciers will change in the climate of the future, and their effect on future increases in sea levels.

Credit: 
University of Copenhagen - Faculty of Science

Pandemic eviction bans found to protect entire communities from COVID-19 spread

A new study led by researchers at Johns Hopkins and the University of Pennsylvania uses computer modeling to suggest that eviction bans authorized during the COVID-19 pandemic reduced the infection rate, protecting not only those who would have lost their housing but entire communities from the spread of infection.

With widespread job loss in the U.S. during the pandemic, many state and local governments temporarily halted evictions last spring, and just as these protections were about to expire in September, the Centers for Disease Control and Prevention (CDC) declared a national eviction ban.

However, the order is extended only a few months at a time and is under constant challenge in the court system, including debates about whether such measures control infection transmission.

The research team aimed to study whether eviction bans help control the spread of SARS-CoV-2, the virus that causes COVID-19, explains Alison Hill, Ph.D., an assistant professor of biomedical engineering at Johns Hopkins.

In a bid to document the potential impact, Hill and Michael Levy, Ph.D., of the University of Pennsylvania, teamed up with experts in housing policy from the University of Illinois Urbana-Champaign. Hill and Levy specialize in using mathematical models to study how infections spread.

A report on the research was published April 15 in Nature Communications.

In the new report, the investigators say they used simulations to predict the number of additional SARS-CoV-2 infections in major U.S. cities if evictions had been allowed to occur during the fall of last year.

They estimated, for example, that in a city of approximately 1 million residents with evictions occurring at a heightened rate of 1% of households per month, an additional 4% of the population could become infected with SARS-CoV-2, which corresponds to about 40,000 more cases. Even with a much lower eviction rate of 0.25% per month, which is similar to the pre-pandemic level in cities such as Atlanta, Detroit and Tucson, Arizona, estimates were for about 5,000 additional cases.

To make these predictions, the researchers first calibrated their math model to re-create the most common epidemic patterns seen in major U.S. cities in 2020. The model took into account changes in infection rates over time due to public health measures, and it was tailored to match reported COVID-19 cases and deaths. The researchers used the model to track the spread of infection in and out of households. Then, they ran another version of the model in which eviction bans were lifted, to estimate how the bans have affected transmission of the virus.
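The authors' full model is considerably more detailed, but the counterfactual logic - run the same epidemic twice, once with evictions raising average household contact rates and once without, then compare cumulative infections - can be illustrated with a toy SIR simulation. Every parameter value below is invented for illustration and does not come from the study:

```python
def sir_final_size(beta, gamma=0.1, i0=1e-3, days=1000):
    """Discrete-time SIR model on population fractions.

    beta: daily transmission rate, gamma: daily recovery rate.
    Returns the cumulative fraction ever infected.
    """
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(days):
        new_inf = beta * s * i   # new infections this day
        new_rec = gamma * i      # recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r

base = sir_final_size(beta=0.12)
# Evictions merge households, modestly raising the average contact rate
# (here a hypothetical 5% increase in beta).
evict = sir_final_size(beta=0.12 * 1.05)
print(f"extra fraction infected due to evictions: {evict - base:.3f}")
```

Even this crude sketch reproduces the qualitative finding: a small increase in transmission driven by crowded households raises the final attack rate for the whole population, not just the merged households.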

Hill and her colleagues found that without eviction bans, people who are evicted, or who live in a household that hosts evictees, have 1.5 to 2.5 times the risk of being infected with SARS-CoV-2 compared with the bans in place.

"People who experience eviction often move in with other households, increasing the density of people living together," says Hill. "Households are known to be an important setting for SARS-CoV-2 infection, so this can increase transmission rates."

The researchers' computer simulations also found that without eviction bans, the risk of SARS-CoV-2 infection would rise for all residents of a city, not just those who are evicted.

Even when the researchers evaluated a different version of the model, in which a city is divided into neighborhoods of different socioeconomic status and evictions are restricted to certain districts, evictions still could cause increases in infections with the virus.

"Some opponents of eviction bans say that evictions only affect a narrow part of the population, but our simulations indicate that evictions not only put disadvantaged households at risk of infection, but entire communities as well," says Hill. "When it comes to a transmissible disease like COVID-19, no neighborhood is entirely isolated."

When the researchers used this data to examine how evictions would specifically impact Philadelphia residents, they found that people in all neighborhoods of the city would experience increased COVID-19 levels due to evictions.

Teaming up with researchers from Northeastern University who used de-identified information about how city residents travel throughout neighborhoods, the researchers estimated that, without eviction bans, there could have been approximately 5,000 more COVID-19 cases in Philadelphia if evictions occurred at pre-pandemic levels, and up to 50,000 additional cases if evictions were five times more frequent.

An early version of this research was cited in court cases challenging eviction bans in Philadelphia, and in the CDC's national eviction order.

To alleviate infection risk and reduce economic burdens, the researchers say, governments should consider not only extended eviction bans but also financial assistance for both tenants and landlords, as well as resources for households to reduce transmission of the virus within the home.

Credit: 
Johns Hopkins Medicine

MIPT and Harvard researchers grow stem cells to cure glaucoma

image: Stem cells on the retinal surface by two weeks post-transplantation.

Image: 
Molecular Therapy - Methods and Clinical Development

Joint research carried out by MIPT scientists and Harvard researchers has produced retinal cells that can integrate into the retina. This is the first successful attempt to transplant ganglion cells (retinal neurons that are destroyed by glaucoma) derived from stem cells in a lab setting. The scientists tested the technology in mice and established that the cells successfully integrated and survived for a year. In the future, the researchers plan to create specialized cell banks that will permit individual, tailored therapy for each patient.

The world's first successful attempt to grow and transplant retinal ganglion cells developed from stem cells was made by scientists from MIPT's genomic engineering laboratory in collaboration with researchers from Harvard Medical School. Retinal ganglion cells, commonly damaged in glaucoma, are responsible for the transmission of visual information. The scientists managed to not only grow neurons (retinal ganglion cells are considered specialized neurons), but also transplant them into the eyes of mice, achieving the correct ingrowth of artificial retinal tissue. Without treatment, glaucoma can lead to irreversible damage to the optic nerve and, as a result, the loss of part of the visual field. Progression of this disease can lead to complete blindness.

Retinal cells were grown using special organoids, with the tissue formed in a petri dish, according to Evgenii Kegeles, a junior researcher from MIPT's genomic engineering laboratory. These cells were subsequently transplanted into several groups of mice. The MIPT scientists were responsible for re-isolating and analyzing the transplanted cells.

"Our studies in mice have shed light on some of the basic questions surrounding retina cell replacement, i.e. can donor RGCs survive within diseased host retinas? Or are transplants only possible within young hosts?", noted Julia Oswald, the first author of the paper and a research fellow from the Schepens Eye Research Institute, Harvard Medical School affiliate. "Using mice in which we used microbeads to artificially elevate intraocular pressure and a model of chemically induced neurotoxicity, we could show that transplanted donor cells survive in disease-like microenvironments. In addition, we could demonstrate that cells survived independent of the donor's age and the location to which the cells were delivered within the retina."

According to the authors, these cells have successfully existed inside mouse retinas for 12 months, a significant period for the species. The scientists confirmed that the cells were able to receive signals from other neurons in the retina; however, their ability to transmit signals to the brain has yet to be assessed with absolute certainty.

"We are confident that the grown cells are embedded where necessary and have extended axons into the brain, but their full functionality is currently impossible to assess, due to the relatively low number of cells surviving the procedure. However, our study shows a first proof-of-concept for the re-isolation of donor cells post-transplant, to observe on a molecular level that cells did, indeed, form synapses, grow axons, and integrate into the retina. This technique will enable countless future studies into the cross talk between transplanted cells and the host microenvironment. This will allow us to find and employ molecular mechanisms which will help transplanted cells to function properly and, as a result, improve visual function when transplanted in the right quantity," explained Evgenii Kegeles.

Mouse retinal cells can be grown from stem cells in around 21 days. However, according to the scientists at MIPT, it will take longer for human cells -- from 50 to 100 days.

Even so, a person with glaucoma preparing for a transplant will most likely not require retinal tissue grown from their own autologous stem cells. Since the eye is an immune-privileged organ where rejection is rare, it is possible to create a cell bank for these patients; grown retinal cells from a universal donor or induced pluripotent stem cells would be stored there. This would mean that it would be possible to grow cells in advance and freeze them. When a patient with glaucoma requires help, the most suitable cells would be selected for transplantation.

"The Nobel Prize for induced pluripotent stem cells was awarded almost 10 years ago, in 2012," said Pavel Volchkov, head of the laboratory of genomic engineering. "The so-called hype, when literally all of the research teams involved in the process considered it their duty to explore the topic, has long faded away. Now is the time not just for words, but for real technologies based on iPS (induced pluripotent stem cells). And it is precisely this technology that this research on the transplantation of retinal ganglion cells is based on. This is an opportunity to demonstrate that stem cells can really be applied in practice, that, with their help, something can be corrected. Although this work has not yet been brought to clinical practice, it is only a few steps away from a real transplant for the purpose of treating glaucoma."

"It was indeed an enabling study in which we demonstrated that it is possible to make diverse retinal ganglion cell neurons in quantities sufficient for transplantation. Moreover, the donor neurons' ability to integrate into the diseased retina and survive for over a year brings hope and excitement for cell therapy development," added Petr Baranov, the principal investigator from the Schepens Eye Research Institute, Harvard Medical School.

According to scientists, this technology is around 10 years from being ready for use in clinical practice.

Credit: 
Moscow Institute of Physics and Technology

Patients who are obese or overweight are at risk for a more severe course of COVID-19

COVID-19 patients who are overweight or obese are more likely to develop a severe infection than patients of healthy weight, and they require oxygen and invasive mechanical ventilation more often; however, they are at no increased risk of death. These conclusions come from an international study of more than 7,000 patients in eleven countries, including the Netherlands (Radboud university medical center).

The study, led by Australian researchers, examined over 7,000 patients admitted to 18 hospitals in eleven different countries. Of this group, over a third (34.8%) were overweight and almost a third (30.8%) were obese. COVID-19 patients with obesity required oxygen more frequently and were 73% more likely to require invasive mechanical ventilation. Remarkably, no greater mortality was observed in these groups than in patients of healthy weight. The international results have now been published in the scientific journal Diabetes Care.

Immunologist Siroon Bekkering of Radboud university medical center, principal investigator of the Dutch part of the study, explains that never before have so many different data on obesity been combined in one large study. "Several national and international observations have already shown the important role of overweight and obesity in a more severe COVID-19 course. This study adds to those observations by combining data from several countries, with the possibility of looking at the risk factors separately. Regardless of other risk factors (such as heart disease or diabetes), we now see that too high a BMI can actually lead to a more severe course of corona infection."

One explanation for this is that overweight and obesity are characterized by chronic inflammation, which can perhaps lead to increased susceptibility to viruses. This is also the case with the flu virus. Also, obese people are more likely to suffer from shortness of breath, which may lead to an increased need for ventilation.

Different risk factors for severe COVID-19 infection

More risk factors emerge from the study. For example, this study, similar to other international studies, confirms that men are more likely to have a more severe course of COVID-19 infection. In addition, this study also shows that people older than 65 years of age needed supplemental oxygen more often and are at greater risk of death.

Cardiovascular disease and pre-existing respiratory disease may be associated with an increased risk of in-hospital death, but not with an increased risk of using oxygen and mechanical ventilation. For patients with diabetes, there was an increased risk of needing invasive respiratory support, but no additional increase in risk in those with both obesity and diabetes. There was no increased risk of death.

Credit: 
Radboud University Medical Center

Ocean currents modulate oxygen content at the equator

Due to global warming, not only are temperatures in the atmosphere and the ocean rising, but winds, ocean currents and the distribution of oxygen in the ocean are changing as well. The oxygen content of the ocean has decreased globally by about 2% over the last 60 years, particularly strongly in the tropical oceans. These regions, however, are characterized by a complex system of ocean currents. At the equator, one of the strongest currents, the Equatorial Undercurrent (EUC), transports water masses eastwards across the Atlantic; its water transport is more than 60 times larger than that of the Amazon river.

For many years, scientists at GEOMAR, in cooperation with the international PIRATA programme, have been investigating fluctuations of this current with fixed observation platforms, so-called moorings. Based on the data obtained from these moorings, they were able to show that the EUC strengthened by more than 20% between 2008 and 2018. The intensification of this major ocean current is associated with increasing oxygen concentrations in the equatorial Atlantic and a thickening of the oxygen-rich layer near the surface. Such a thickening of the surface oxygenated layer represents a habitat expansion for tropical pelagic fish. The results of the study have now been published in the international journal Nature Geoscience.

"At first, this statement sounds encouraging, but it does not describe the entire complexity of the system", says project leader and first author Prof. Dr. Peter Brandt from GEOMAR. "We found that the strengthening of the Equatorial Undercurrent is mainly caused by a strengthening of the trade winds in the western tropical North Atlantic", Peter Brandt explains further. The analysis of a 60-year data set has shown that the recent oxygen increase in the upper equatorial Atlantic is associated with a multidecadal variability characterised by low oxygen concentrations in the 1990s and early 2000s and high concentrations in the 1960s and 1970s. "In this respect, our results do not contradict the global trend, but indicate that the observed current intensification likely will switch back into a phase of weaker currents associated with enhanced oxygen reduction. It shows the need for long-term observations in order to be able to separate natural fluctuations of the climate system from trends such as oxygen depletion caused by climate warming", says Brandt.

The changes in oxygen supply in the tropics due to circulation fluctuations have an impact on marine ecosystems and ultimately on fisheries in these regions. "Habitat compression or expansion for tropical pelagic fish can lead to altered predator-prey relationships, but also make it particularly difficult to assess overfishing of economically relevant fish species, such as tuna", says Dr Rainer Kiko, co-author from the Laboratoire d'Océanographie de Villefranche at Sorbonne University, Paris.

The investigations are based partly on a ship expedition carried out along the equator at the end of 2019 with the German research vessel METEOR. This expedition included a physical, chemical, biogeochemical and biological measurement programme that supports the development of climate-based predictions for marine ecosystems as part of the EU-funded TRIATLAS project. While another expedition with RV METEOR along the equator had to be cancelled due to the COVID-19 pandemic, several long-term moorings in the tropical Atlantic - including the one at the equator - will now be recovered and redeployed during an additional expedition with RV SONNE in June-August 2021, of course under strict quarantine conditions.

Credit: 
Helmholtz Centre for Ocean Research Kiel (GEOMAR)

Researchers use laser paintbrush to create miniature masterpieces

image: The researchers used their new laser painting method to make a miniature version of Van Gogh's painting "The Starry Night."

Image: 
Yaroslava Andreeva

WASHINGTON -- Researchers are blurring the lines between science and art by showing how a laser can be used to create artistic masterpieces in a way that mirrors classical paints and brushes. The new technique not only creates paint-like strokes of color on metal but also offers a way to change or erase colors.

"We developed a way to use a laser to create localized color on a metallic canvas using a technique that heats the metal to the point where it evaporates," said research team leader Vadim Veiko from ITMO University in Russia. "With this approach, an artist can create miniature art that conveys complex meaning not only through shape and color but also through various laser-induced microstructures on the surface."

In Optica, The Optical Society's (OSA) journal for high-impact research, Veiko and colleagues show that their new laser tools can be used to create unique colorful paintings, including a miniature version of Van Gogh's "The Starry Night."

"We hope that laser painting will attract the attention of modern artists and lead to the creation of a completely new type of art," said research team member Yaroslava Andreeva. "The approach can also be used for modern design and to create color markings on various products."

Painting with light

The new study builds on previous work in which the researchers investigated how to use lasers to create color on titanium and stainless steel. "We wanted to do more than offer a wide palette of stable colors," said Galina Odintsova, a member of the research team. "Thus, we worked to create a convenient tool for applying them more like an artist's brush."

For the new technique, the researchers heat the metal to a point where it starts to evaporate -- much higher than the melting temperatures used in previously developed approaches. When the metal cools, an extremely thin film of metal oxide forms. Light reflected from the metallic surface and the top of the oxide film interfere in a way that produces different colors depending on the thickness of the film.
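The color selection follows from standard thin-film interference: reflections from the top of the oxide film and from the metal beneath reinforce each other only at wavelengths matched to the film thickness. A rough illustration, using an assumed refractive index of 2.4 (typical of titanium dioxide) and neglecting the metal's own reflection phase - neither figure comes from the paper:

```python
def reflected_maxima(d_nm, n=2.4):
    """Visible wavelengths (nm) reflected constructively by an oxide
    film of thickness d_nm and refractive index n.

    Two-beam model: optical path difference 2*n*d plus a half-wave
    shift at one interface gives maxima where 2*n*d = (m + 1/2)*lambda.
    """
    maxima = []
    for m in range(10):
        lam = 2 * n * d_nm / (m + 0.5)
        if 380 <= lam <= 750:  # keep only visible wavelengths
            maxima.append(round(lam))
    return maxima

for d in (40, 60, 120):
    print(d, "nm film ->", reflected_maxima(d), "nm")
```

In this simplified picture, a 40 nm film reflects violet while a 60 nm film reflects yellow-green, which is why sub-100-nanometer changes in oxide thickness are enough to step through a whole palette.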

"Increasing the laser heating range enough to create the evaporation process makes our color strokes reversible, rewritable, erasable and much more efficient," said Odintsova. "Our marking speed is more than 10 times faster than reported before."

Erasable color

The researchers used a nanosecond ytterbium fiber system equipped with a galvanometric scanner to create strokes that combine surface relief with optical effects, creating nine basic colors. A second pass of the laser at a faster scan rate can erase or change the color of an area. They showed that the surface color of erased areas was indistinguishable from a non-treated surface and that colors could be erased and rewritten several times without affecting the resulting color.

They demonstrated the new laser paintbrush by using it to create a 3 x 2-inch version of "Starry Night" in just 4 minutes. They also made original artwork to demonstrate color mixing and erasing. The researchers point out that pictures made using this laser painting approach are extremely resistant to harsh environments and chemicals and don't require any type of special storage.

The researchers would like to incorporate their new laser painting capabilities into a handheld tool that could be used much like a pen or paintbrush to create colorful pictures or drawings on metals or metallic foils. The approach could also be used to add nanostructured and hybrid materials or periodic surface gratings to achieve a variety of optical effects.

Credit: 
Optica

Attacking aortic aneurysms before they grow

A new study investigates a genetic culprit behind abdominal aortic aneurysm (AAA), a serious condition that puts people at risk of their aorta rupturing - a potentially deadly event.

Finding a viable genetic target for AAA could change the game, says senior author Katherine Gallagher, M.D., a vascular surgeon and an associate professor of surgery and microbiology and immunology at Michigan Medicine, the academic medical center of the University of Michigan.

That's because there are no medications to directly treat the condition and prevent an aneurysm from growing. Current options include things like addressing blood pressure to lower the stress on the arteries and veins running through the body, and making lifestyle changes like quitting smoking. Most people monitor their aneurysm to see if it grows enough to eventually require endovascular or open surgical repair.

For this study, a team of Michigan Medicine researchers investigated the role of an epigenetic enzyme called JMJD3 in the development of AAAs. They found the gene was turned on in both people and mice who had an AAA and that the gene promoted inflammation in monocyte/macrophages. When they blocked the enzyme, it prevented an aneurysm from forming.

"Targeting the JMJD3 pathway in a cell specific-manner offers the opportunity to limit AAA progression and rupture," says lead author Frank Davis, M.D., a vascular surgery resident at the Frankel Cardiovascular Center.

"We are the first to perform an extensive single-cell RNA sequencing and gene expression analysis on human AAAs and non-aneurysmal aortic control samples," Gallagher adds.

Credit: 
Michigan Medicine - University of Michigan

National report highlights benefit of collaborative care models for people with dementia

image: A new National Academies report on benefits of collaborative care models for dementia cites research and implementation by Regenstrief Institute research scientists. Collaborative care models integrate medical and psychosocial care, delivered by a team of providers. There are between 3.7 million and 5.8 million people living with dementia in the United States, and that number is likely to grow as the population ages.

Image: 
Regenstrief Institute

INDIANAPOLIS -- A new report from the National Academies of Sciences, Engineering, and Medicine (National Academies) details the state of dementia care and research in America and provides guidance on future research to make sure both patients and their families are having their needs met by the care they receive. Sections of the report highlight the effectiveness of the collaborative care model as well as successful implementation, citing research from Regenstrief Institute, Eskenazi Health and the Indiana University School of Medicine.

There are between 3.7 million and 5.8 million people living with dementia in the United States, and that number is likely to grow as the population ages.

The report, Meeting the Challenge of Caring for Persons Living with Dementia and Their Care Partners and Caregivers: A Way Forward, sponsored by the National Institutes of Health's National Institute on Aging, looked at the various needs of people with dementia, including help with medication use, paying bills and managing day-to-day activities. It notes several limitations of current care interventions. However, based on an independent systematic review by the Agency for Healthcare Research and Quality, the committee found two types of interventions that are supported by low-strength evidence of benefit: collaborative care models, which integrate medical and psychosocial care; and Resources for Enhancing Alzheimer's Caregiver Health (REACH) II, an intervention aimed at supporting family caregivers.

Regenstrief Research Scientist and IU School of Medicine Professor of Medicine Christopher Callahan, M.D., was a member of the committee that compiled the report. He has worked extensively with colleagues at Regenstrief, IU and Eskenazi Health on developing and implementing collaborative care models for Alzheimer's disease. Improving care for persons living with dementia and their care partners is an important research focus for a large team of scientists in the Center for Aging Research at the Regenstrief Institute.

"This report incorporated feedback from people with dementia and their families, and they indicated that outcomes measured by past research have been too narrow and there are other areas of wellbeing that are important to them that have not been taken into consideration," said Dr. Callahan. "Collaborative care works to provide holistic care to the patients and their families, and the model can be adapted to address needs that are not currently being met."

Collaborative care models integrate medical and psychosocial care, delivered by a team of providers including physicians, nurses, psychologists, care coordinators and social workers. This model has proven to significantly improve patient and caregiver outcomes and reduce costs and has been successfully implemented at the Sandra Eskenazi Center for Brain Care Innovation.

The National Academies report highlighted the urgent need to put into practice evidence-based interventions, referencing the work of Regenstrief Research Scientist and IU School of Medicine Professor Malaz Boustani, M.D., MPH. Dr. Boustani is creating strategies for more trials that take place in real-world situations, while still being rigorous in testing important outcomes.

"Patients and families indicated that they understand that interventions will not be perfect, but they want them implemented now, in the real world, while research and refinement continues," said Dr. Callahan.

The report proposes a blueprint for future research, which includes methodological improvements and approaches that can complement randomized control trials. A major focus should be assessing real-world effectiveness and prioritizing inclusive research.

Credit: 
Regenstrief Institute

Mayo researchers, collaborators identify 'instigator' gene associated with Alzheimer's disease

JACKSONVILLE, Fla. -- In a new paper published in Nature Communications, Mayo Clinic researchers and collaborators report the protein-coding gene SERPINA5 may worsen tau protein tangles, which are characteristic of Alzheimer's disease, and advance disease. By combining clinical expertise, brain tissue samples, pathology expertise and artificial intelligence, the team clarified and validated the relevance of the gene to Alzheimer's disease.

The researchers used tissue samples from 385 brains donated to the Mayo Clinic Brain Bank, which houses more than 9,000 brain tissue specimens for the study of neurodegenerative disorders. The samples were from people who were diagnosed with Alzheimer's disease and lacked co-existing diseases found in the brain. This ensured a spotlight on Alzheimer's disease, which enabled the team to focus on targets relevant to the disease.

These samples were used to classify the pattern of protein tangles associated with Alzheimer's. Then the team used digital pathology and RNA sequencing to identify gene expression in the samples, which effectively measures gene changes responsible for instructing proteins.

"We were able to look at an entire disease spectrum and find gene changes that may really influence the hippocampus, the brain's memory center," says Melissa Murray, Ph.D., a Mayo Clinic translational neuropathologist and lead author on the paper. "That means we may have targets that indicate why some people have relative preservation and some people have relative exacerbation of memory loss symptoms."

Using a machine learning algorithm, the authors narrowed the genes of interest from about 50,000 to five. The top candidate, SERPINA5, was found to be strongly associated with tau tangle progression in the hippocampus and cortex of the samples. The researchers plan to investigate how SERPINA5 interacts with tau protein to develop an inhibitor.
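The article does not describe the authors' algorithm, so purely as a hypothetical illustration of how tens of thousands of genes can be narrowed to a handful of candidates, one simple approach is to rank genes by the strength of their association with an outcome. Everything below (the data, the planted predictive gene, the correlation ranking) is synthetic and assumed for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes = 100, 50_000

# Synthetic stand-in data: an expression matrix and a tangle-burden
# score; gene 0 is made artificially predictive for this demo.
X = rng.normal(size=(n_samples, n_genes))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=n_samples)

# Rank every gene by absolute Pearson correlation with the outcome.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
corr = (Xc * yc[:, None]).sum(axis=0) / (
    np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
)
top5 = np.argsort(-np.abs(corr))[:5]   # the five strongest candidates
```

Real analyses use far more sophisticated models, but the shape of the problem is the same: score all ~50,000 features, then carry only the top few forward for biological follow-up.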

"Much of the focus of therapeutics is on abnormal proteins ? amyloid and tau ? used to biologically define Alzheimer's disease," says Dr. Murray. "But we hope to take a step back to look at a new interacting partner that may be actually accelerating tau or pushing tau accumulation past the tipping point."

Based on a growing understanding of how Alzheimer's disease may affect people at different ages, or differences observed between women and men, the study team did not limit their investigation by adjusting for age and sex.

The team was recently awarded a grant by the state of Florida to ensure the findings are broadly applicable across an ethno-racially diverse Alzheimer's disease cohort, since clarifying the relevance of gene expression changes to hippocampal vulnerability may be critical for preventing memory loss in Alzheimer's disease. But, Dr. Murray says, by starting with the human brain and ending with it, the researchers hope their findings provide a deeper level of understanding that helps advance findings to clinical trials faster.

"While we have direct evidence of SERPINA5 in the context of Alzheimer's disease, SERPINA3 in this same family of genes has also been looked at in Alzheimer's, and SERPINA1 in ALS. So I think it's about collective awareness and paying attention to this group of proteins."

Credit: 
Mayo Clinic

Green hydrogen: "Rust" as a photoanode and its limits

image: Rust would be an extremely cheap and stable photoelectrode material to produce green hydrogen with light. But the efficiency is limited. The TEM image shows a photoanode containing a thin photoactive layer of rust.

Image: 
Technion

Hydrogen will be needed in large quantities as an energy carrier and raw material in the energy system of the future. To achieve this, however, hydrogen must be produced in a climate-neutral way, for example through so-called photoelectrolysis, by using sunlight to split water into hydrogen and oxygen. As photoelectrodes, semiconducting materials are needed that convert sunlight into electricity and remain stable in water. Metal oxides are among the best candidates for stable and inexpensive photoelectrodes. Some of these metal oxides also have catalytically active surfaces that accelerate the formation of hydrogen at the cathode or oxygen at the anode.

Why is rust not much better?

Research has long focused on haematite (α-Fe2O3), which is widely known as rust. Haematite is stable in water, extremely inexpensive and well suited as a photoanode with a demonstrated catalytic activity for oxygen evolution. Although research on haematite photoanodes has been going on for about 50 years, the photocurrent conversion efficiency is less than 50% of the theoretical maximum value. By comparison, the photocurrent efficiency of the semiconductor material silicon, which now accounts for almost 90% of the photovoltaic market, is about 90% of the theoretical maximum value.

Scientists have puzzled over this for a long time. What exactly has been overlooked? What is the reason that only modest increases in efficiency have been achieved?

Israeli-German team solves the puzzle

In a recent study published in Nature Materials, however, a team led by Dr. Daniel Grave (Ben Gurion University), Dr. Dennis Friedrich (HZB) and Prof. Dr. Avner Rothschild (Technion) has provided an explanation as to why haematite falls so far short of the calculated maximum value. The group at Technion investigated how the wavelength of absorbed light in haematite thin films affects the photoelectrochemical properties, while the HZB team determined the wavelength-dependent charge carrier properties in thin films of rust with time-resolved microwave measurements.

Fundamental physical property extracted

By combining their results, the researchers succeeded in extracting a fundamental physical property of the material that had generally been neglected when considering inorganic solar absorbers: The photogeneration yield spectrum. "Roughly speaking, this means that only part of the energy of the light absorbed by haematite generates mobile charge carriers, the rest generates rather localised excited states and is thus lost," Grave explains.

Rust will not get much better

"This new approach provides experimental insight into light-matter interaction in haematite and allows distinguishing its optical absorption spectrum into productive absorption and non-productive absorption," Rothschild explains. "We could show that the effective upper limit for the conversion efficiency of haematite photoanodes is significantly lower than that expected based on above band-gap absorption," says Grave. According to the new calculation, today's "champion" haematite photoanodes have already come quite close to the theoretically possible maximum. So it doesn't get much better than that.

Assessing new photoelectrode materials

The approach has also been successfully applied to TiO2, a model material, and BiVO4, which is currently the best-performing metal oxide photoanode material. "With this new approach, we have added a powerful tool to our arsenal that enables us to identify the realizable potential of photoelectrode materials. Applying this to novel materials will hopefully expedite the discovery and development of the ideal photoelectrode for solar water splitting. It would also allow us to 'fail quickly,' which is arguably just as important when developing new absorber materials," says Friedrich.

Credit: 
Helmholtz-Zentrum Berlin für Materialien und Energie

In-ambulance consults cut down on critical treatment time for stroke patients

image: Physicians at MUSC Health have partnered with Georgetown Hospital to significantly shorten the time between a patient's stroke and their treatment using telehealth.

Image: 
MUSC Health

Eighteen minutes might be all it takes to ensure a full recovery for stroke patients in rural South Carolina.

By changing EMS workflows and incorporating telemedicine techniques, physicians at MUSC Health have partnered with Georgetown Memorial Hospital and Hampton Regional Medical Center to significantly shorten the time between a patient's stroke symptom onset and their treatment, as recently reported in the Journal of Stroke and Cerebrovascular Diseases.

Through MUSC Health's Telestroke Network, emergency medical technicians (EMTs) can video chat with stroke specialists to begin a patient's consult before they even arrive at the hospital.

"We realized that if we could start seeing these stroke patients before they came into the emergency room, we could reduce the time it took for us to treat them," said Christine Holmstedt, D.O., the medical director of MUSC Health's Comprehensive Stroke Center.

A stroke occurs when blood flow to the brain is interrupted. In an ischemic stroke, blood flow is blocked by a clot in an artery leading to the brain. In a hemorrhagic stroke, there is bleeding into the brain tissue from a burst blood vessel. In both cases, time is of the utmost importance.

Stroke treatments are extremely time sensitive and need to be started as soon as possible after patients begin experiencing stroke symptoms in order to improve clinical outcomes and reduce their chances of disability or death.

Acute stroke treatments include the intravenous clot-busting agent alteplase (tPA) and/or a mechanical thrombectomy, in which a device is threaded through the blood vessel to break up the clot. A quick response gives physicians the greatest chance of securing a recovery, and every minute cut from the time to treatment further improves a patient's chances. The average human brain contains 22 billion neurons, according to an article in Stroke, and during an acute ischemic stroke, 1.9 million are lost every minute.
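The figures above make for a quick back-of-the-envelope check; the 18-minute reduction is the study's improvement from 38 to 20 minutes reported later in this article.

```python
NEURONS_TOTAL = 22_000_000_000   # average human brain, per the Stroke article
LOSS_PER_MINUTE = 1_900_000      # neurons lost per minute of ischemic stroke

def neurons_saved(minutes_earlier):
    """Neurons preserved by starting treatment this many minutes sooner."""
    return minutes_earlier * LOSS_PER_MINUTE

saved = neurons_saved(38 - 20)   # the study's 18-minute reduction
share = saved / NEURONS_TOTAL
print(f"{saved:,} neurons (~{share:.2%} of the brain)")
# 34,200,000 neurons (~0.16% of the brain)
```

On these numbers, an 18-minute head start preserves tens of millions of neurons per patient.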

The new telestroke workflow in the study involved communication among three parties: the stroke specialist; the EMT and patient; and the receiving hospital's nurse and emergency medicine physician. EMTs could even start the consult while still at the patient's home and ask family members for a more accurate history of the patient. Performing the consult and examination on the way to the hospital allowed emergency room doctors and nurses to be better prepared for their incoming stroke patient. Holmstedt pointed out a few patients who were rerouted to a comprehensive stroke center while on the way to the closest hospital because the stroke was too severe for the local hospitals. A few patients were even flown to MUSC from their homes when the examination revealed they needed more specialized treatment and care.

Before the telestroke program, stroke patients would be brought directly to the closest hospital, where they would begin their examination soon after their arrival. Their treatment would continue there, or they could be transferred to another hospital. With the new workflow, that examination happens en route, cutting down on critical treatment time.

"A 15-minute reduction in door-to-treatment time leads to patients with reduced complications from tPA and significant reduction in disability or death," said Holmstedt. "They are more likely to be discharged to an acute rehab rather than long-term care, and they have much better functional outcomes." These new protocols influenced that 15-minute reduction even further by bringing average treatment times down from 38 minutes to 20.

This program is especially important in rural areas where patients are spread out geographically. Other programs in the U.S. have been incorporating mobile stroke units, which are armed with vital stroke equipment like CT scanners, but these stroke units can cost upwards of $2,000,000 and are not feasible for South Carolina. By comparison, the telestroke console costs about $2,000 per ambulance and lets patients in rural areas see stroke experts before they even get to the hospital, according to Holmstedt.

The telestroke program began in 2015 with a partnership between Holmstedt, Georgetown EMS Director Dale Hewitt and Georgetown Hospital Stroke Coordinator Jessica Hewitt. They started with 5 ambulances in 1 county to test the concept and feasibility of the program and have since grown to 26 additional trucks in 5 different counties throughout the state.

Holmstedt is currently working with the MUSC College of Health Professions and Clemson University to assess the economic impact of the telestroke program and the potential for further expansion.

"These improved outcomes reduce disability and even death for patients seen with acute stroke," said Holmstedt. "And they don't negatively impact the EMT workflow, so we can bring more efficient treatment options to the state's rural population. And that's significant."

Credit: 
Medical University of South Carolina

Without major changes, gender parity in orthopaedic surgery will take two centuries

April 19, 2021 - At the current rate of change, it will take more than 200 years for the proportion of women in orthopaedic surgery to reach parity with the overall medical profession, according to a study in Clinical Orthopaedics and Related Research® (CORR®), a publication of The Association of Bone and Joint Surgeons®. The journal is published in the Lippincott portfolio by Wolters Kluwer.

"Substantive changes must be made across all levels of orthopaedic education and leadership to steepen the current curve," concludes the report by Atul F. Kamath, MD, of the Cleveland Clinic Foundation and colleagues. "Our findings support the need for changes in medical schools, orthopaedic residency programs, as well as at the level of professional specialty and subspecialty societies."

Orthopaedic surgery remains 'dead last in gender diversity'

The researchers queried the National Provider Identifier Registry of the Centers for Medicare and Medicaid Services, which requires clinicians to identify themselves as male or female. As of April 2020, the registry included data on 31,296 practicing orthopaedic surgeons, of whom 8 percent were women. That's far lower than the proportion of women in the medical profession overall: 36 percent in 2019.

Between 2010 and 2019, the compound annual growth rate of women orthopaedic surgeons was 2 percent (20 percent over the decade). Assuming this rate were sustained after 2019, it would take 217 years - until 2236 - to achieve gender parity with the rest of the medical profession. Time to achieve parity with the US population - currently 51 percent female - would be 326 years, or until 2345.

The researchers also analyzed trends by orthopaedic subspecialty and by region. In 2019, women accounted for 26 percent of surgeons in pediatric orthopaedics and 14 percent in foot and ankle surgery, but only 3 percent in adult reconstructive surgery and 3 percent in spinal surgery. After 2019, the gains in subspecialty representation are projected to be just 1 or 2 percent, with no growth in adult reconstructive surgery.

The Midwest had the greatest growth in the proportion of women orthopaedic surgeons, 27 percent, followed by the Northeast at 20 percent. Rates in the West and South were 17 percent and 19 percent, respectively - both below the national growth rate.

Dr. Kamath and colleagues call for changes throughout orthopaedics to hasten the rate of change. Medical schools should offer an orthopaedic surgery rotation to foster interest among women students and to help curb concerns related to work-life balance and a culture dominated by men. The authors also suggest benchmarks to increase the proportion of women trainees and faculty in orthopaedic surgery training programs, particularly in the South and West; and more women among the leadership of orthopaedic subspecialty societies.

In an accompanying 'Take 5' interview with Dr. Kamath, CORR® Editor-in-Chief Seth S. Leopold, MD, writes: "Literally every other medical and surgical specialty has overcome this problem to a greater degree than has orthopaedic surgery; we're dead last in gender diversity." He calls on his specialty to stop paying "lip service" to "the substantial absence of women from our specialty and the lack of progress towards remedying the disparity over time."

Dr. Kamath believes the orthopaedic community should seek help from other fields that have been more successful in increasing representation of women: "It is not impossible to achieve gender parity; we just have to actually acknowledge that it is a problem, and then do something about it."

"We applaud these authors for highlighting this critical issue," comments Julie Samora, MD. Dr. Samora is President of the Ruth Jackson Orthopaedic Society, a support and networking group for the growing number of women orthopaedic surgeons. "The projection of over 300 years to achieve gender parity with the US population is alarming."

"We are missing out on outstanding talent and doing a disservice to our patients," Dr. Samora adds. "The time is now to commit to action, to have intentional efforts to increase the representation of women in orthopedics. Improving gender diversity will not only make our programs and profession better, but will also improve the overall care of our patients."

Credit: 
Wolters Kluwer Health

Research inside hill slopes could help wildfire and drought prediction

video: In 2018, researchers from the Jackson School of Geosciences and other institutions traveled to Northern California to conduct a first-of-its-kind field study to sample the interiors of a sequence of hillslopes.

The research revealed that rock weathering and water storage appear to follow a similar pattern across undulating landscapes where hills rise and fall for miles. Since weathering and water storage influence how water and nutrients flow throughout landscapes, the research could help improve predictions of wildfire and landslide risk and how droughts will affect the landscape.

Image: 
Michelle Pedrazas/UT Jackson School of Geosciences

A first-of-its-kind study led by The University of Texas at Austin has found that rock weathering and water storage appear to follow a similar pattern across undulating landscapes where hills rise and fall for miles.

The findings are important because they suggest that these patterns could improve predictions of wildfire and landslide risk and how droughts will affect the landscape, since weathering and water storage influence how water and nutrients flow throughout landscapes.

"There's a lot of momentum to do this work right now," said study co-author Daniella Rempe, an assistant professor at the UT Jackson School of Geosciences Department of Geological Sciences. "This kind of data, across large scales, is what is needed to inform next-generation models of land-surface processes."

The research was led by Michelle Pedrazas, who conducted the work while earning a master's degree at the Jackson School. It was published in the Journal of Geophysical Research: Earth Surface.

Despite the importance of what's happening inside hills, most computer models for simulating landscape behavior don't go deeper than the soil due to a lack of data that can scale to large areas, Rempe said.

This study helps fill that knowledge gap, being the first to methodically sample the interiors of a sequence of hill slopes. The research focused on investigating the "critical zone," the near surface layer that includes trees, soils, weathered rock and fractures.

"This study helps to unravel a mystery in the critical zone research community, the linkage between bedrock weathering, topography and storage of water in mountainous watersheds," said Eric Pierce, the director of the Environmental Sciences Division at Oak Ridge National Laboratory who was not involved with the study.

The research site is in Northern California and is part of a national network of Critical Zone Observatories. The scientists drilled 35 boreholes across a series of hill slopes and their valleys to collect subsurface samples and other data. They also collected a core sample at the peak of each hill slope that captured the entire height of the hill - a distance that varied from 34 to 57 feet (10.5 to 17.5 meters).

The samples revealed deeper weathering and fracturing in hilltops and thinner weathering in valleys, in addition to weathering that penetrates deeper into shorter hill slopes than taller ones.

This finding is important because it suggests that computer models could use this scaling trend to model the extent of weathering in similar undulating terrain.

Where water is stored in the weathered rocks of hill slopes is an important question, especially during the arid summers experienced in the field area. Research led by Rempe in 2018 revealed that trees tap into water stored as "rock moisture" in the fractures and pores of critical zone rocks during droughts.

This study also revealed rock moisture in the critical zone - but only within the first 20 feet (about 6 meters) of weathered rock.

Learning more about how hill slopes store their water can help researchers determine what areas are most at risk of becoming wildfire hazards. Pedrazas said that the wildfire connection was clear when they collected the field data in 2018. Wildfires blazing in other parts of California turned the sun red and filled the sky with smoke. The setting underscored the fact that knowing what's happening at the surface is closely connected to what's happening within the hills.

"We were really seeing the potential impact of our research, [the importance of] where is the water, and when are trees really going to dry up, and what risk that is for society," Pedrazas said.

Credit: 
University of Texas at Austin

New algorithm uses online learning for massive cell data sets

The fact that the human body is made up of cells is a basic, well-understood concept. Yet amazingly, scientists are still trying to determine the various types of cells that make up our organs and contribute to our health.

A relatively recent technique called single-cell sequencing is enabling researchers to recognize and categorize cell types by characteristics such as which genes they express. But this type of research generates enormous amounts of data, with datasets of hundreds of thousands to millions of cells.

A new algorithm developed by Joshua Welch, Ph.D., of the Department of Computational Medicine and Bioinformatics, Ph.D. candidate Chao Gao and their team uses online learning, greatly speeding up this process and providing a way for researchers world-wide to analyze large data sets using the amount of memory found on a standard laptop computer. The findings are described in the journal Nature Biotechnology.

"Our technique allows anyone with a computer to perform analyses at the scale of an entire organism," says Welch. "That's really what the field is moving towards."

The team demonstrated their proof of principle using data sets from the National Institutes of Health's Brain Initiative, a project aimed at understanding the human brain by mapping every cell, with investigative teams throughout the country, including Welch's lab.

Typically, explains Welch, for projects like this one, each single-cell data set that is submitted must be re-analyzed along with the previous data sets in the order they arrive. The new approach allows new datasets to be added to existing ones without reprocessing the older datasets. It also enables researchers to break up datasets into so-called mini-batches to reduce the amount of memory needed to process them.
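The authors' actual algorithm (an online matrix factorization) is more involved than can be shown here, but the mini-batch principle it relies on can be sketched with a toy example: keep only a small running summary, update it one batch at a time, and discard each batch afterward, so memory stays constant no matter how many cells arrive.

```python
import numpy as np

def stream_gene_means(batches):
    """Per-gene mean expression computed one mini-batch at a time.

    Only a running sum and a cell count are kept in memory, so new
    data sets can extend the result without reprocessing old ones.
    (A toy illustration of the online idea, not the paper's method.)
    """
    totals, n_cells = None, 0
    for batch in batches:                      # batch: (cells, genes) array
        batch = np.asarray(batch, dtype=float)
        sums = batch.sum(axis=0)
        totals = sums if totals is None else totals + sums
        n_cells += batch.shape[0]
    return totals / n_cells

# Streaming over two mini-batches matches the all-at-once answer.
b1 = np.array([[1.0, 2.0], [3.0, 4.0]])
b2 = np.array([[5.0, 6.0]])
assert np.allclose(stream_gene_means([b1, b2]),
                   np.vstack([b1, b2]).mean(axis=0))
```

Because only the summary grows with the number of genes, not the number of cells, a laptop can keep pace with data sets far larger than its memory.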

"This is crucial for the sets increasingly generated with millions of cells," Welch says. "This year, there have been five to six papers with two million cells or more and the amount of memory you need just to store the raw data is significantly more than anyone has on their computer."

Welch likens the online technique to the continuous data processing done by social media platforms like Facebook and Twitter, which must process continuously-generated data from users and serve up relevant posts to people's feeds. "Here, instead of people writing tweets, we have labs around the world performing experiments and releasing their data."

The finding has the potential to greatly improve efficiency for other ambitious projects like the Human Body Map and Human Cell Atlas. Says Welch, "Understanding the normal complement of cells in the body is the first step towards understanding how they go wrong in disease."

Credit: 
Michigan Medicine - University of Michigan