
Improving access to mental health services in low-income communities

image: Quenette L. Walton, lead author and principal investigator, is assistant professor at the University of Houston Graduate College of Social Work.

Image: 
University of Houston

When it comes to improving access to mental health services for children and families in low-income communities, a University of Houston researcher found that a "warm handoff" (a guided transfer of care between a primary care physician and a mental health provider) helps build trust with the patient and leads to successful outcomes.

"Underserved populations face certain obstacles such as shortage of providers, family beliefs that cause stigma around mental health care, language barriers, lack of transportation and lack of insurance. A warm handoff, someone who serves as a go-between for experts and patients, can ensure connections are made," said Quenette L. Walton, assistant professor at the UH Graduate College of Social Work.

In the United States, as many as 14% of children experience emotional problems between birth and five years of age, and 75% of children with diagnosed mental health disorders are seen by pediatric primary care physicians. But in many under-resourced communities, integrated behavioral health interventions are not readily available.

Walton, principal investigator and lead author of a study published in Child & Youth Care Forum, identified and evaluated strategies used by pediatricians, psychiatrists, psychologists, licensed mental health counselors, program managers and coordinators to improve the referral system and access to pediatric mental health care for low-income, minority families in Los Angeles County, where the research was conducted.

"This group developed a patient-centered, telehealth-based intervention to streamline the process from referral to actual treatment," Walton explained. "That included updating their systems to give specialty mental health providers access to information they need - basically closing the loophole so services can be consistent."

Three major themes emerged from the research to inform how pediatric primary care physicians and mental health providers assist their low-income children and families with accessing mental health care: communication, coordination and collaboration.

Effective communication, including phone calls, emails or written reports, improved access to mental health services for this population.

Coordination of services required knowing how to make the referral process more efficient and effective so providers, working together, could more quickly discuss a shared treatment plan and implementation.

Collaboration of services entailed a warm handoff between pediatric primary care physicians and mental health providers. This person helped with navigating the system and worked with providers to develop a shared and agreed-upon plan of care.

"It takes several times for people to really buy into the need for mental health care. So, if we can be more intentional in our efforts to get people access to resources they need, despite their challenges, then they will feel valued and more likely to come in for services," Walton added. "Just an additional five or 10 minutes makes a difference for a patient."

Co-authors of the study include Elizabeth Bromley, University of California, Los Angeles Geffen School of Medicine; Lorena Porras-Javier, UCLA Children's Discovery & Innovation Institute; and Tumaini R. Coker, University of Washington School of Medicine and Seattle Children's Hospital.

Credit: 
University of Houston

UCI-led study finds unleashing Treg cells may lead to treatments for multiple sclerosis

image: Genetic deletion of Piezo1 in T cells leads to protection in autoimmunity: In the absence of Piezo1, Tregs expand more and, due to their increased numbers, are more effective in containing the damage inflicted by the effector T cells during an autoimmune neuroinflammation. Effector T cell function is not affected in the absence of Piezo1.

Image: 
UCI School of Medicine

Irvine, CA - July 20, 2021 - In a new University of California, Irvine-led study, researchers found that a certain protein prevented regulatory T cells (Tregs) from effectively doing their job in controlling the damaging effects of inflammation in a model of multiple sclerosis (MS), a devastating autoimmune disease of the nervous system.

Published this month in Science Advances, the new study illuminates the important role of Piezo1, a specialized protein called an ion channel, in immunity and T cell function related to autoimmune neuroinflammatory disorders.

"We found that Piezo1 selectively restrains Treg cells, limiting their potential to mitigate autoimmune neuroinflammation," said Michael D. Cahalan, PhD, distinguished professor and chair in the Department of Physiology & Biophysics at the UCI School of Medicine. "Genetically deleting Piezo1 in transgenic mice, resulted in an expanded pool of Treg cells, which were more capable of effectively reducing neuroinflammation and with it the severity of the disease."

T cells rely on specialized proteins, like Piezo1, to detect and respond to various diseases and conditions including bacterial infections, wound healing, and even cancer. Uncontrolled T cell activity, however, can give rise to autoimmune disorders in which the immune system attacks normal cells in the body. Tregs constantly curate immune responses and play a critical role in preventing autoimmunity.

"Given the demonstrated ability of Piezo1 to restrain Treg cells, we believe that inhibiting Piezo1 could lead to new treatments for neuroinflammatory disorders, like MS," explained Amit Jairaman, PhD, and Shivashankar Othy, PhD, lead authors of the study, both project scientists in the Department of Physiology & Biophysics.

Piezo1 conducts ions when cells are subjected to mechanical forces. Research over the last decade has shed light on the role of Piezo1 in regulating vital physiological functions including red blood cell (RBC) volume, blood pressure, vascular development, bone formation, and differentiation of neural stem cells. However, its role in modulating immune responses had not been appreciated before. And while it was known that calcium-conducting ion channels, like Piezo1, direct various aspects of T cell function, researchers were surprised to find that Piezo1 was not essential for a whole host of calcium-dependent T cell functions, such as lymph node homing, interstitial motility, activation, proliferation, or differentiation into effector T cells.

"We found the role of Piezo1 appears to be quite specific to Tregs. Therefore, targeting Piezo1 might be a new and ideal strategy to cure MS while preserving the immune system's ability to fight new infections," added Othy, whose research over last 12 years has focused on finding ways to harness the therapeutic potential of Treg cells.

Further investigation of the function of Piezo1 is needed to understand its therapeutic potential, and to more fully understand the processes through which cells sense and respond to mechanical stimuli during immune responses.

Credit: 
University of California - Irvine

Using snakes to monitor Fukushima radiation

image: A Japanese rat snake crossing a rural road in the Fukushima Evacuation Zone in Japan.

Image: 
Hannah Gerke

Ten years after one of the largest nuclear accidents in history spewed radioactive contamination over the landscape in Fukushima, Japan, a University of Georgia study has shown that radioactive contamination in the Fukushima Exclusion Zone can be measured through its resident snakes.

The team's findings, published recently in the journal Ichthyology & Herpetology, report that rat snakes are an effective bioindicator of residual radioactivity. Like canaries in a coal mine, bioindicators are organisms that can signal an ecosystem's health.

An abundant species in Japan, rat snakes travel short distances and can accumulate high levels of radionuclides. According to the researchers, the snakes' limited movement and close contact with contaminated soil are key factors in their ability to reflect the varying levels of contamination in the zone.

Hannah Gerke, an alumna of UGA's Savannah River Ecology Laboratory and the Warnell School of Forestry and Natural Resources, said tracked snakes moved an average of just 65 meters (approximately 213 feet) per day.

"Our results indicate that animal behavior has a large impact on radiation exposure and contaminant accumulation," Gerke said. "Studying how specific animals use contaminated landscapes helps increase our understanding of the environmental impacts of huge nuclear accidents such as Fukushima and Chernobyl."

Why are snakes a good indicator of radioactive contamination?
James C. Beasley, Gerke's advisor during the study, said snakes can serve as better indicators of local contamination in the zone than more mobile species like East Asian raccoon dogs, wild boar and songbirds.

"Snakes are good indicators of environmental contamination because they spend a lot of time in and on soil," said Beasley, associate professor at SREL and Warnell. "They have small home ranges and are major predators in most ecosystems, and they're often relatively long-lived species."

The team identified 1,718 locations of the snakes while tracking them for over a month in the Abukuma Highlands, approximately 15 miles northwest of the Fukushima Daiichi Nuclear Power Plant. The paper's findings reinforce the team's previous study, published in 2020, which indicated that radiocesium levels in the snakes correlated strongly with radiation levels in the soil where the snakes were captured.

How to track snakes
To determine where the snakes were spending their time and how far they were moving, the team tracked nine rat snakes using a combination of GPS transmitters and manual very-high-frequency (VHF) tracking. Beasley said the VHF transmitters allowed the team to physically locate a snake every few days to identify whether it was underground or in arboreal habitat.

The researchers attached the transmitters toward the rear of each snake's back, first wrapping tape around the snake and then supergluing the transmitter to the tape. This allowed the transmitters to be easily removed from the animals at the conclusion of the study.

Working in the hilly, rugged terrain of abandoned villages and farms, the team located snakes in trees, grasslands and along roadside streams. Gerke said the snakes avoided the interior of conifer forests but were often found in deciduous forests, along forest edges and inside abandoned buildings. More than half of the tracked snakes, she said, spent time in abandoned barns and sheds, which can help shield them from contamination in the surrounding soil.

During winter months, their risk of exposure likely increases when they seek shelter underground, close to the more heavily contaminated soils. Future work to clarify the link between the micro-habitat use of species like snakes and their contaminant exposure, as well as the potential health risks to snakes and other wildlife due to increased radiation exposure, will be critical to understanding the effects of the Fukushima Daiichi accident on local wildlife populations.

Credit: 
University of Georgia

Elite runners spend more time in air, less on ground, than highly trained but nonelite peers

A recent study led by Geoff Burns, an elite runner and postdoctoral researcher at the University of Michigan Exercise & Sport Science Initiative, compared the "bouncing behavior"--the underlying spring-like physics of running--in elite-level male runners (sub-four-minute milers) vs. highly trained but not elite runners.

Subjects ran on a treadmill instrumented with a pressure plate beneath the belt, so Burns and colleagues could see how much time they spent in the air and in contact with the ground. When running, muscles and limbs coordinate to act like a giant pogo stick, and those muscles, tendons and ligaments interact to recycle energy from step to step, Burns says.

The researchers looked at the basic physics of the runners as pogo sticks--called a "spring-mass" system in biomechanics--to see how those giant springs differed between elite and highly trained runners, and found some interesting and surprising differences.

What did you find?

We often think of running as pushing off the ground, but it's actually a beautifully coordinated bounce. All animals that run behave like this--even the ones with multiple pairs of legs coordinating to "bounce" along the ground.

In general, the elite runners were "stiffer" spring-mass systems with steeper impact angles--think of stiffer, more upright pogo sticks. Across various speeds, the elites had similar stride lengths and stride frequencies (similar cadences, or steps-per-minute) as highly trained runners, but elites spent more time in the air and less on the ground, especially at the lower speeds. With their "stiffer" spring behavior on each step, they may be better recycling that gravitational energy from the time in the air to quickly and efficiently bounce along, step-to-step.
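These spring-mass quantities can be estimated from nothing more than body mass, leg length, running speed, and the contact and flight times. The sketch below uses the sine-wave force approximation popularized by Morin and colleagues; it is not the study's own method, and the masses, times, and speeds are invented purely for illustration.

```python
import math

def spring_mass_estimates(mass, leg_length, speed, t_contact, t_flight, g=9.81):
    """Estimate spring-mass parameters from contact and flight times,
    modeling the vertical ground-reaction force as a half sine wave."""
    # Peak vertical ground-reaction force over the contact phase
    f_max = mass * g * (math.pi / 2) * (t_flight / t_contact + 1)
    # Downward displacement of the center of mass during contact
    dz = f_max * t_contact**2 / (mass * math.pi**2) - g * t_contact**2 / 8
    # Vertical stiffness: peak force over vertical compression
    k_vert = f_max / dz
    # Leg compression: vertical drop plus geometric shortening as the
    # leg sweeps forward under the body during contact
    half_sweep = speed * t_contact / 2
    dl = leg_length - math.sqrt(leg_length**2 - half_sweep**2) + dz
    k_leg = f_max / dl
    # Angle of the leg "spring" from vertical at touchdown:
    # smaller angle = a more upright pogo stick
    theta = math.degrees(math.asin(half_sweep / leg_length))
    return {"F_max_N": f_max,
            "k_vert_kN_per_m": k_vert / 1000,
            "k_leg_kN_per_m": k_leg / 1000,
            "impact_angle_deg": theta}

# Illustrative numbers only: a 65 kg runner with a 0.95 m leg at 5.5 m/s,
# comparing a shorter-contact/longer-flight stride with a longer-contact one.
elite_like = spring_mass_estimates(65, 0.95, 5.5, t_contact=0.19, t_flight=0.13)
trained_like = spring_mass_estimates(65, 0.95, 5.5, t_contact=0.22, t_flight=0.10)
print(elite_like)
print(trained_like)
```

With the same speed and cadence, the shorter-contact stride produces higher peak force, higher vertical and leg stiffness, and a more upright impact angle, which is the qualitative pattern the study attributes to the elite group.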

One of the key findings was how these differences varied with speed. Across all speeds, the elite runners were in the air longer, but time on the ground and time in the air each changed differently with speed in the two groups. In both measures, the highly trained group approached the elites at faster speeds, but at lower speeds--where both groups spend the bulk of their training time--the times were very divergent, with the elites more similar to their patterns at faster speeds.

In your study, you write that the interaction of nature and nurture--not one or the other--may give rise to their emergent, elite ability. Can you explain?

I suspect for each runner it operates on a spectrum. There are aspects of nature, or at least aspects that are developed very early on, such as tendon properties or neuromuscular recruitment patterns. But those things, and other contributing factors to these "system" characteristics of a runner's bouncing patterns, can be developed to some extent. It's probable that an interaction of nature and nurture allowed certain people to take to the training and racing that further developed the characteristics of an elite runner.

How did your own talent emerge? Was there a time when it became clear that you had special abilities?

I think my intrinsic physical qualities that allowed me to excel were my more malleable physiology and capacity to absorb training, meaning an ability to get better (perhaps, a nature that is readily nurtured). That ability to adapt and improve coupled with an intense desire and drive to do so is a good combo. So, with respect to mechanics, that may have manifested as the ability to adopt the characteristics that my competition-specific training and preparation demanded. I suspect I'm fortunate to have this capacity more than most, but I also think it should hopefully inspire everyone who runs, that these things aren't likely entirely predetermined. We, as humans, have the capacity to change ourselves with the right stimulus to adopt characteristics, be it mechanical or physiological, that allow us to achieve better performances.

Is there a practical application for runners here? Can this be taught?

Yes, these aspects are--to some extent--trainable. Things like resistance training for the lower limbs are known to increase leg stiffness, and even incorporating plyometric drills can help with this. Even something as simple as running on different surfaces (pavement vs. grass vs. dirt) will force you to change your body's bouncing stiffness. Simply put, challenging your body to interact with the ground differently will likely promote some sort of beneficial adaptation, if dosed responsibly.

But, I would certainly caution runners from trying to change this consciously. By this I mean, don't go out for a run and think, "I'm going to run with a stiffer leg or body" or "I'm going to try and spend as little time on the ground and as much time in the air as possible for this whole run." Broadly speaking, when we run, our bodies choose the movement patterns that tend to be most efficient and safest for us at that time. Moreover, because these are "system" characteristics, trying to exert conscious control over a continuously changing coordination of components is a recipe for trouble.

Putting those two things together, I'd say runners shouldn't try to consciously change their system or emulate the elite characteristics, but rather incorporate elements in their training that demand their system to take on those characteristics. What are those things? I think we can look right at the common ingredients of the middle distance runners' training menu: hills, sprints and fast intervals, plyometric drills, and lots of easy, slower running. These are things that are easy to incorporate into training that will challenge the whole "system" of a runner--be it the musculoskeletal strength, those neuromuscular coordination patterns, or tendinous structures--all to interact with the ground more efficiently across a range of speeds. While most of both the trained and elite runners in our study used these ingredients to some extent, I would add that the elites were certainly more regimented in them, and most incorporated resistance training as well.

Did anything about the findings surprise you?

The different relations with speed between the two groups in the contact time and flight time were interesting. I expected those variables to be linked to speed in both groups, and maybe some general difference between the two, but the interaction was interesting. Especially in that the elite group was less influenced by speed, meaning those patterns persisted at slower speeds. This could suggest a robustness of the musculoskeletal and physiological patterns that give rise to the overall spring-like characteristics of their stride, or that they're continually training these patterns at lower speeds when they're not incurring the physiological and mechanical stresses of running at faster speeds. That's just my own speculation, but it was very interesting.

It was also interesting that despite these different patterns in the speed dependency between the two groups, the runners still coordinated their global mechanics to maintain consistent leg stiffness across all speeds. This was expected, as all running animals tend to maintain leg stiffness across speeds (which was cool, nonetheless, to see that play out here in all the runners). But that there were different relationships with the other variables across speeds between the two groups that ultimately produced this consistent bouncing behavior in each group was pretty neat. And that it was consistently higher in the elites was further cool.

It was also interesting that the two groups had similar vertical compressions during stance across speeds, meaning that their bodies moved up and down similar amounts, yet the net result was still higher leg stiffnesses and vertical stiffnesses. This stems from the interaction of the steeper impact angles in the elites and the higher vertical forces. So, they were more upright in their force delivery to the ground, and loaded their bodies slightly more. Ultimately, they moved up and down the same amount, but because the forces were greater and their mechanical system was stiffer, they could recycle more energy through the stance cycle.

Did any one or two elements of movement stand out as being more significant than others?

In addition to the ones discussed above (contact and flight relations, vertical compression), the impact angle patterns were also interesting. These are essentially a synthesis of the runner's time on the ground, the runner's speed and the body's geometry. That the runners converged on the contact time at faster speeds but diverged on the impact angle (meaning the elites and subelites had more similar contact times at faster speeds, but less similar impact angles) would suggest that the two groups were changing their contact times in relation to their center-of-mass height and leg length differently, with the elites doing so to keep their system more "upright" or vertical at the faster speeds.

Do you plan to study female runners?

I do hope to publish something very similar in the coming year on groups of elite and highly trained female runners. I'm very curious to see how the trends between groups compare, but also how the characteristics at common speeds are similar or different.

Credit: 
University of Michigan

Digital health technologies hold key to new Parkinson's treatments

TUCSON, Ariz., July 20, 2021 — The use of digital health technologies across health care and drug development has accelerated. A new paper titled “Digital Progression Biomarkers as Novel Endpoints in Clinical Trials: A Multistakeholder Perspective,” co-authored by experts across diverse disciplines, highlights how new remote monitoring technologies present a tremendous opportunity to advance digital medicine in health care even further, specifically in Parkinson’s disease. This perspective paper is co-authored by the academic leader of the largest funded project for digital technologies in Europe, Professor Lynn Rochester, University of Newcastle; European Medicines Agency (EMA) scientific leader, Dr. Maria Tome; young investigator and Ph.D. candidate Reham Badawy; physician and Parkinson’s patient, Dr. Soania Mathur; and Dr. Diane Stephenson, Executive Director of the Critical Path for Parkinson’s (CPP) Consortium.

Global collaborative efforts are underway with the goal of advancing the use of digital health technologies for use in Parkinson’s clinical research and therapeutic trials — yet several gaps and barriers stand in the way of success. These include data security issues, the rapidly evolving nature of the technology, lack of consensus on data standards, vast diversity of distinct studies carried out on different devices and the need for open science.

CPP’s Digital Drug Development Tool team at Critical Path Institute consists of industry members, scientific academic advisors, patient research organizations and people living with Parkinson’s all collaborating across the globe to seek advice early and often from regulatory agencies. Companies advancing innovative therapies for the treatment of Parkinson’s see the promise of digital technologies, yet they also recognize that there are gaps that are too challenging to overcome on their own. CPP’s focus on the voice of people living with Parkinson’s aligns with the U.S. Food and Drug Administration (FDA) and EMA’s vision for patient-focused drug development. Sharing costs, risks and knowledge will streamline a more efficient runway for regulatory endorsement in the future.

“We felt it was imperative to come together on this paper, at this moment, to bring attention to how existing digital health technologies can complement traditional modalities and transform and accelerate clinical research and therapeutic development,” said Rochester.

Dr. Mathur, who has lived with Parkinson’s for 22 years, inspired the team of five women leaders to work on this paper across different countries during the pandemic. “It is vital to include the patient voice to drive the sense of urgency when it comes to Parkinson’s research. As patients, we fully experience the unrelenting progression of this disease, the ongoing daily challenges that we live with. From the direction of research to identifying the tools that can estimate relevant outcome measures in the search for new therapeutics that are directed towards disease modification or improved quality of life, patient input is absolutely integral to its success. This collaboration kept that sense of urgency at the forefront.”

“EMA works with the FDA to assure that digital technologies are aligned with what is important to patients,” said Dr. Tome. “The pace of progress is going to be accelerated by applying principles of what it took the world to tackle the COVID-19 pandemic,” Stephenson added. “True collaborations amongst all stakeholders are urgently needed to make efficient progress, avoid duplication of effort, share costs and risks and advance with warp speed.”

Professor Bas Bloem, editor-in-chief of the Journal of Parkinson’s Disease and author of the book “Ending Parkinson’s Disease” said, “We are very excited to publish this very important paper in our journal, as it provides a clear and visionary glimpse into the future of better care and innovative research approaches in the field of Parkinson’s disease.”

Credit: 
Critical Path Institute (C-Path)

New discoveries reveal how acute myeloid leukemia walks line between growth and cell death

Researchers revealed new insights into how acute myeloid leukemia (AML) develops and progresses, according to a study published in Molecular Cell on July 20, 2021. They describe a mechanism by which AML cells regulate a cancer-related protein, mutant IDH2, to increase the buildup of blood cancer cells--a distinguishing characteristic of the disease. This improved understanding of the IDH2-related mechanism in AML will allow physicians to better understand how current IDH2-targeting medications work and, ultimately, to improve treatments for AML patients.

AML is a cancer of the blood cells that can occur when immature white blood cells, cells that normally fight infection, acquire certain genetic mutations that cause them to multiply rapidly and build up in the bone marrow and blood. It is the most common acute leukemia in adults and usually gets worse quickly if it is not treated.

This cancer can be driven by genetic mutations leading to the production of cancer-related mutant proteins, such as mutant IDH2 and IDH1. Normal IDH proteins are important in cell metabolism and play a role in the production of energy from the breakdown of molecules from food. The mutant forms of IDH proteins, found in AML cells, take on an extra function of making a cancer-causing molecule called 2-HG. 2-HG blocks the maturation of white blood cells, driving the development of leukemia.

Although 2-HG can drive the development of cancer, at high concentrations it becomes toxic, killing cancer cells. Researchers at the University of Chicago Medicine Comprehensive Cancer Center, with collaborators, were interested in learning how mutant IDH2 drives the development of AML and how the leukemia cells are able to regulate the production of 2-HG to promote spread and avoid cell death.

The research team, led by Jing Chen, PhD, professor of medicine, discovered that AML cells are able to modify mutant IDH2 and regulate its activity, thus controlling the amount of 2-HG that it can produce. They determined the threshold of 2-HG concentration that allows it to switch from a cancer-causing to a cancer-killing agent.

Using human AML cells, they found that mutant IDH2 was controlled by a master regulator, called FLT3, that can activate and deactivate proteins through a process of modification. The team defined the FLT3-initiated series of events leading to a chemical modification, called acetylation, of mutant IDH2, and found that this type of modification blocks its activity and decreases the amount of 2-HG in the cell, allowing AML to avoid cell death.

"Our studies demonstrate that different intracellular concentrations of 2-HG correlate with critical cellular functions that can mean life or death for cancer cells," said Chen. "We also elucidated the distinct regulatory mechanisms for the protein variants, mutant IDH1 and mutant IDH2, in AML. This thorough understanding of AML driven by IDH mutant proteins allows us to better understand the mechanisms of action of AML-targeted therapies."

Treatments for AML include chemotherapy, radiation and medications that specifically target the protein drivers of AML. In fact, inhibitors that target mutant IDH2 and IDH1 have been approved by the FDA to treat AML that has relapsed or been resistant to other medications. For example, enasidenib is a small molecule that binds to and inhibits the IDH2 mutant. It has been shown to increase maturation of leukemia cells and reduce the number of leukemic cells in animal models. This latest discovery describing how AML cells regulate mutant IDH proteins and the production of 2-HG provides new insights into how mutant IDH-targeting medications work against AML. Studies such as these may allow for the development of better treatment options for patients with AML.

Credit: 
University of Chicago

Researchers develop novel method for glucagon delivery

image: Sihan Yu and Matthew Webber

Image: 
University of Notre Dame

For children with Type 1 diabetes, the risk of experiencing a severe hypoglycemic episode is especially high -- and for parents, the threat of one happening in the middle of the night is especially frightening. Sudden and critical drops in blood sugar can go undetected overnight when the child is asleep, resulting in coma and death -- an event known as "dead in bed syndrome."

"A parent can check their child's glucose levels right before they go to bed and everything looks fine, then around 2 a.m. their blood sugar is dangerously low -- near comatose level," said Matthew Webber, associate professor of chemical and biomolecular engineering at the University of Notre Dame.

Webber has listened to parents of diabetic children describe the fear of such an episode -- waking up several times a night to check glucose levels and the panic of emergency situations and rushing children to the hospital in the middle of the night.

In severe situations, glucagon injections can stabilize blood glucose levels long enough for parents to get their child medical attention. But in a new study, published in the Journal of the American Chemical Society, Webber is rethinking the traditional use of glucagon as an emergency response by administering it as a preventive measure.

In the research, Webber and his team describe how they developed hydrogels that remain intact in the presence of glucose but slowly destabilize as levels drop, releasing glucagon into the system and raising glucose levels.

"In the field of glucose-responsive materials, the focus has typically been on managing insulin delivery to control spikes in blood sugar," Webber said. "There are two elements to blood glucose control. You don't want your blood sugar to be too high and you don't want it to be too low. We've essentially engineered a control cycle using a hydrogel that breaks down when glucose levels drop to release glucagon as needed."

The gels are water-based with a three-dimensional structure. Webber describes them as having a mesh-like architecture resembling a pile of spaghetti noodles with glucagon "sprinkled" throughout. According to the study, in animal models the gels dissolved as glucose levels dropped, eventually breaking down to release their glucagon contents.
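The feedback loop Webber describes--gel intact while glucose is normal, breakdown and glucagon release when it drops--can be caricatured in a few lines of code. This toy model, and every rate constant in it, is invented for illustration; it is not the chemistry or data of the study.

```python
def simulate_gel(hours=8.0, dt=0.1, glucose=85.0, threshold=70.0):
    """Toy glucose-responsive hydrogel: stable above the glucose
    threshold, destabilizing (and releasing glucagon) below it.
    All units and rate constants are arbitrary, for illustration only."""
    gel = 1.0        # fraction of hydrogel remaining
    released = 0.0   # cumulative glucagon released (fraction of payload)
    trace = []
    for _ in range(int(hours / dt)):
        if glucose < threshold:
            dissolve = 0.5 * gel * dt    # rapid first-order breakdown
        else:
            dissolve = 0.01 * gel * dt   # slow baseline erosion
        gel -= dissolve
        released += dissolve             # glucagon leaves with the gel
        # Released glucagon raises glucose; an overnight drift lowers it
        glucose += 40.0 * dissolve - 5.0 * dt
        trace.append((glucose, gel))
    return glucose, gel, released, trace

final_glucose, gel_left, released, trace = simulate_gel()
print(f"final glucose: {final_glucose:.1f}, gel remaining: {gel_left:.2f}")
```

In this caricature, glucose drifts downward overnight; once it crosses the threshold, the gel alternates between dissolving and holding, metering out glucagon and keeping glucose pinned near the threshold rather than letting it keep falling. That is the qualitative behavior of the control cycle described above, not a quantitative claim about the study's gels.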

Ideally in future applications, the gels would be administered each night before bed, Webber explained. "If a hypoglycemic episode arose later on, three or five hours later while the child is sleeping, then the technology would be there ready to deploy the therapeutic, correct the glucose imbalance and prevent a severe episode."

Webber emphasized that the research is in extremely early stages and parents and individuals living with Type 1 diabetes should not expect to see such a therapeutic available in the near term.

"One of the big challenges was engineering the hydrogel to be stable enough in the presence of glucose and responsive enough in the absence of it," he said. Another challenge was preventing the glucagon from leaking out of the hydrogel's mesh-like structure. While the team was ultimately successful, Webber said he hopes to improve stability and responsiveness with further study.

Credit: 
University of Notre Dame

Abelacimab effective blood clot treatment, McMaster-led study shows

image: Dr. Jeffrey Weitz, professor of medicine, Michael G. DeGroote School of Medicine, McMaster University

Image: 
McMaster University

Hamilton, ON (July 19, 2021) - A potentially game-changing treatment for people with, or at risk of, blood clots has been found effective by an international team of researchers led by McMaster University's Jeffrey Weitz.

Weitz's team compared abelacimab with enoxaparin as a control drug in 412 patients undergoing knee replacement surgery. Results showed that a single abelacimab injection prevented blood clots for up to a month after surgery, reducing the risk by about 80% compared with enoxaparin, without increasing the risk of bleeding.
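For context, a relative risk reduction of "about 80%" is derived from the event rates in the two study arms. The sketch below shows only the arithmetic; the event counts are hypothetical, not figures from the trial:

```python
# Illustrative relative-risk-reduction calculation.
# NOTE: these event counts are hypothetical, chosen only to
# demonstrate the arithmetic; they are not data from the study.

def relative_risk_reduction(events_treat, n_treat, events_ctrl, n_ctrl):
    """RRR = 1 - (treatment event rate / control event rate)."""
    rate_treat = events_treat / n_treat
    rate_ctrl = events_ctrl / n_ctrl
    return 1 - rate_treat / rate_ctrl

# Hypothetical: 5 clots in 100 abelacimab patients vs. 25 in 100 controls
rrr = relative_risk_reduction(5, 100, 25, 100)
print(f"Relative risk reduction: {rrr:.0%}")  # Relative risk reduction: 80%
```

The actual figure depends on the absolute event rates reported in the paper itself.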

Their findings were published in the New England Journal of Medicine today, coinciding with Weitz's presentation of the research at the International Society on Thrombosis and Hemostasis 2021 Congress.

Weitz, a hematologist, is a professor of medicine and of biochemistry and biomedical sciences at McMaster's Michael G. DeGroote School of Medicine and executive director of the Thrombosis and Atherosclerosis Research Institute.

"Patients who undergo knee replacement routinely receive anti-clotting treatment with enoxaparin or other anticoagulant medications that require daily administration," he said.

"With a single injection of abelacimab after surgery, we found much better protection against clots in the veins in the leg compared with enoxaparin, one of the current standards of care."

Patients enrolled in the study were closely monitored for symptoms or signs of clotting or bleeding and underwent an x-ray of the veins of the operated leg to detect any possible clot formation.

Abelacimab's potential to treat other cardiovascular conditions has Weitz and his fellow scientists excited.

"The success of abelacimab in this study provides the foundation for its use for prevention of stroke in patients with atrial fibrillation and for treatment of deep-vein thrombosis and pulmonary embolism, clots in the veins of the leg and clots in the lung, in patients with cancer," said Weitz.

Abelacimab is an antibody that binds to both the inactive and activated forms of factor XI, one of the clotting factors, and prevents its activation and activity, thereby halting clot formation.

The study showed that factor XI is a key driver of clot formation after surgery, Weitz said, adding that the fact that abelacimab was more effective than enoxaparin, which inhibits clotting factors downstream of factor XI, highlights the importance of factor XI in clot formation.

"We expect factor XI to be a safer target for new anticoagulants than the targets of currently available anticoagulants because patients with congenital factor XI deficiency are at reduced risk for clots but rarely have spontaneous bleeding," he said.

Weitz said new anti-clotting medications are often tested first on patients undergoing orthopedic surgery, such as knee replacement, because such patients are at risk for clots in the veins of their operated leg. Therefore, different doses of the drug can be evaluated to identify those that are both effective and safe compared with the standard of care such as enoxaparin.

These doses can then be carried forward into studies of patients with clots or at risk of them, such as those with cancer-associated clots, or for stroke prevention in patients with atrial fibrillation.

Credit: 
McMaster University

Disparities in outpatient visit rates

What The Study Did: Researchers examined racial/ethnic disparities in outpatient visit rates to 29 physician specialties in the United States.

Authors: Christopher Cai, M.D., of the Internal Medicine Residency Program at Brigham and Women's Hospital/Harvard Medical School in Boston, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamainternmed.2021.3771)

Editor's Note: The article includes conflicts of interest disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Scientists uncover drivers of phenotypic innovation and diversification in gymnosperms

image: Figure 1. Transcriptome-based species tree of gymnosperms showing major genomic events.

Image: 
KIB

Determining the major drivers of species diversification and phenotypic innovation across the Tree of Life is one of the grand challenges in evolutionary biology.

Facilitated by the Germplasm Bank of Wild Species of the Kunming Institute of Botany (KIB) of the Chinese Academy of Sciences (CAS), Prof. YI Tingshuang and Prof. LI Dezhu of KIB led a novel study on gymnosperm diversification with a team of international researchers.

This study provides critical insight on the processes underlying diversification and phenotypic evolution in gymnosperms, with important broader implications for the major drivers of both micro- and macroevolution in plants.

The results were published today online in Nature Plants in an article entitled "Gene duplications and phylogenomic conflict underlie major pulses of phenotypic evolution in gymnosperms."

In green plants, it is well understood that whole-genome duplication (WGD, or polyploidization) is an important evolutionary force. However, scientists have not clearly understood the role of WGD in shaping broad-scale evolutionary patterns in plants, especially when WGD is combined with adaptive radiation and other processes arising from climate change or new ecological opportunities.

Likewise, extant gymnosperm lineages clearly exhibit a complex history of ancient radiations, major extinctions, extraordinary stasis, and recent diversification. However, the correlates and causes of major phases of gymnosperm evolution have also not been well understood.

Using a novel transcriptome dataset along with a diversity of comparative datasets, the researchers examined how various facets of genomic evolution, including gene and genome duplication, genome size, and chromosome number, relate to macroevolutionary patterns of phenotypic innovation, species diversification, and climatic occupancy in gymnosperms.

The scientists showed that spikes of gene duplication typically coincide with major spikes of phenotypic innovation, representing one of the first demonstrations of a direct relationship between gene duplication and phenotypic innovation on a macroevolutionary scale.

They also found that most shifts in gymnosperm diversification since the rise of angiosperms were decoupled from WGD events and are instead associated with increased rates of climatic occupancy evolution in cooler and/or more arid climatic conditions.

This suggests that ecological opportunity, especially in the late Cenozoic, along with environmental heterogeneity, has driven a resurgence of gymnosperm diversification.

Credit: 
Chinese Academy of Sciences Headquarters

Scientists adopt deep learning for multi-object tracking

image: A novel framework achieves state-of-the-art performance without sacrificing efficiency in public surveillance tasks

Image: 
Gwangju Institute of Science and Technology

Computer vision has progressed much over the past decade and made its way into all sorts of relevant applications, both in academia and in our daily lives. There are, however, some tasks in this field that are still extremely difficult for computers to perform with acceptable accuracy and speed. One example is object tracking, which involves recognizing persistent objects in video footage and tracking their movements. While computers can simultaneously track more objects than humans can, they usually fail to discriminate between the appearances of different objects. This, in turn, can lead the algorithm to mix up objects in a scene and ultimately produce incorrect tracking results.

At the Gwangju Institute of Science and Technology in Korea, a team of researchers led by Professor Moongu Jeon seeks to solve these issues by incorporating deep learning techniques into a multi-object tracking framework. In a recent study published in Information Sciences, they present a new tracking model based on a technique they call 'deep temporal appearance matching association (Deep-TAMA)' which promises innovative solutions to some of the most prevalent problems in multi-object tracking. This paper was made available online in October 2020 and was published in volume 561 of the journal in June 2021.

Conventional tracking approaches determine object trajectories by associating a bounding box with each detected object and establishing geometric constraints. The inherent difficulty in this approach lies in accurately matching previously tracked objects with objects detected in the current frame. Differentiating detected objects based on hand-crafted features like color usually fails because of changes in lighting conditions and occlusions. Thus, the researchers focused on enabling the tracking model to accurately extract the known features of detected objects and compare them not only with those of other objects in the frame but also with a recorded history of known features. To this end, they combined joint-inference neural networks (JI-Nets) with long short-term memory networks (LSTMs).

LSTMs help to associate stored appearances with those in the current frame, whereas JI-Nets allow for comparing the appearances of two detected objects simultaneously from scratch--one of the most distinctive aspects of this new approach. Using historical appearances in this way allowed the algorithm to overcome short-term occlusions of the tracked objects. "Compared to conventional methods that pre-extract features from each object independently, the proposed joint-inference method exhibited better accuracy in public surveillance tasks, namely pedestrian tracking," highlights Dr. Jeon. Moreover, the researchers also offset a major drawback of deep learning--low speed--by adopting indexing-based GPU parallelization to reduce computing times. Tests on public surveillance datasets confirmed that the proposed tracking framework offers state-of-the-art accuracy and is therefore ready for deployment.
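The association step at the heart of such trackers can be illustrated with a minimal sketch. This is a generic appearance-matching scheme, not the authors' Deep-TAMA implementation: the feature vectors here stand in for the learned embeddings a JI-Net or LSTM would produce, and a simple greedy matcher replaces their joint-inference scoring:

```python
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity of two appearance feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def associate(track_feats, det_feats, max_cost=0.5):
    """Greedily match stored track appearances to current-frame
    detections, cheapest (most similar) pairs first. Pairs costlier
    than max_cost are left unmatched."""
    pairs = sorted(
        (cosine_distance(t, d), ti, di)
        for ti, t in enumerate(track_feats)
        for di, d in enumerate(det_feats)
    )
    used_t, used_d, matches = set(), set(), []
    for cost, ti, di in pairs:
        if cost <= max_cost and ti not in used_t and di not in used_d:
            used_t.add(ti)
            used_d.add(di)
            matches.append((ti, di))
    return matches

# Two stored track appearances vs. two new detections:
# track 0 best matches detection 1, and track 1 matches detection 0.
tracks = [[1.0, 0.0], [0.0, 1.0]]
dets = [[0.1, 0.9], [0.9, 0.1]]
print(associate(tracks, dets))  # [(0, 1), (1, 0)]
```

A production tracker would also apply geometric gating (bounding-box overlap) and solve the assignment globally rather than greedily, but the core idea of matching by appearance distance is the same.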

Multi-object tracking unlocks a plethora of applications ranging from autonomous driving to public surveillance, which can help combat crime and reduce the frequency of accidents. "We believe our methods can inspire other researchers to develop novel deep-learning-based approaches to ultimately improve public safety," concludes Dr. Jeon. For everyone's sake, let us hope their vision soon becomes a reality!

Credit: 
GIST (Gwangju Institute of Science and Technology)

Bats in Tel Aviv enjoy the rich variety and abundance of food the city has to offer

image: A Fruit Bat in Tel Aviv

Image: 
S. Greif.

A new Tel Aviv University study found that, like humans, bats living in Tel Aviv enjoy the wide variety and abundance of food that the city has to offer, in contrast to rural bats living in Beit Guvrin, which are content to eat only one type of food. The study was led by research student Katya Egert-Berg under the guidance of Prof. Yossi Yovel, head of Tel Aviv University's Sagol School of Neuroscience, a faculty member of the School of Zoology in the George S. Wise Faculty of Life Sciences and the Steinhardt Museum of Natural History, and a recipient of the 2021 Kadar Family Award for Outstanding Research. The study was published in the journal BMC Biology.

The researchers explain that despite the intensification of urbanization processes, which tend to lead animals to leave the city, there are animals that are able to thrive in an urban domain. One such example is the fruit bat. These bats, like humans, live in a variety of environments, including the city and the countryside; there are even some that forage in the city and then go home to roost in the country.

The urban environment is fundamentally different from the rural environment in terms of the diversity and accessibility of food. Although the city has a larger variety of trees per area, there are many challenges that bats have to face, such as buildings and humans. In rural areas, on the other hand, most of the trees are concentrated in orchards without barriers, but there is less diversity - the trees are mostly of one type. Because of these fundamental environmental differences between the city and the country with regard to the distribution and variety of fruit trees, the nature of the bats' movement when foraging in these areas differs as well.

In this new study, the researchers compared the nature of the movement of rural bats and city bats as they foraged for food. They used tiny GPS devices to track the bats, to see if the way they moved while searching for food was affected by their living environment, or the environment in which they were foraging.

They found that fruit bats foraging in the city are much more exploratory: they enjoy the abundance of the urban environment, visit a variety of fruit trees every night, and feed from a wide variety of trees. In contrast, the rural bats focus on only one or two fruit trees each night. Moreover, the researchers found that among the rural bats that roost in the countryside, many left their rural homes every night in search of food in the city and then flew back to the country after their meal. During their stay in the city, such bats share the same flight patterns as the bats that live in the city around the clock.
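One simple way to quantify how exploratory a bat's foraging is from GPS data is to count how many distinct known fruit trees its nightly fixes approach. The sketch below is a hypothetical illustration, not the study's actual analysis pipeline, and the coordinates are invented:

```python
import math

def trees_visited(fixes, trees, radius_m=25.0):
    """Count distinct trees a bat approached within `radius_m` metres.

    fixes -- list of (x, y) GPS positions for one night, in metres
    trees -- list of (x, y) positions of known fruit trees
    """
    visited = set()
    for fx, fy in fixes:
        for i, (tx, ty) in enumerate(trees):
            if math.hypot(fx - tx, fy - ty) <= radius_m:
                visited.add(i)
    return len(visited)

# Hypothetical example: an "urban" bat's fixes pass near three trees,
# while a "rural" bat's fixes stay near a single tree.
trees = [(0, 0), (100, 0), (200, 0)]
urban = [(5, 5), (95, -3), (198, 10)]
rural = [(2, 1), (4, -2), (1, 0)]
print(trees_visited(urban, trees), trees_visited(rural, trees))  # 3 1
```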

The study's findings led the researchers to conclude that even bats that live in rural environments their entire lives will be able to orient themselves in an urban, industrialized environment. They explain that there are animal species that are flexible - for them, the ability to adapt to a new and unfamiliar environment such as an urban settlement is an acquired skill. Such species, of which the fruit bats are an example, will in many cases be able to adapt to life in urban areas.

Prof. Yovel: "How animals cope with urbanization is one of the most central and important questions in ecological research today. Understanding the ways in which animals adapt to urban areas can help us in our conservation efforts. The urban environment is characterized by much fragmentation, and we currently have little understanding of how animals, especially small animals, like the bats, move and fly in such areas."

Credit: 
Tel-Aviv University

Remote sensing techniques help treat and manage hollow forests

image: A diseased holm oak tree with a canopy with many dead branches.

Image: 
Swansea University.

Using advanced remote sensing techniques can help the early detection of oak tree decline and control many other forest diseases worldwide, says new research from Swansea University.

The research, published in Remote Sensing of Environment, examined holm oak decline, which in its early stages causes changes to the tree's physiological condition that are not readily visible. It is only later, when the tree's decline is more advanced, that outward changes to its leaf pigment and canopy structure become apparent.

The researchers used an integrated approach of hyperspectral and thermal imaging, a 3-D radiative transfer model, and field surveys of more than 1,100 trees with varying severity of disease, which enabled them to successfully predict holm oak decline at an early stage.

The research, which was in collaboration with the University of Cordoba, the Spanish National Research Council and the University of Melbourne, concluded that this integrated approach is vital to the large-scale monitoring of forest decline.

Lead author of the research, Alberto Hornero of the Department of Geography said: "It is essential to monitor trees for harmful forest diseases before the symptoms become visible. When the trees start to dry out or lose their leaves, it is simply too late to start treating and managing these hollow forests. Our research has shown that having a range of tools like advanced airborne imagery and satellite data observations will help us understand and monitor the physiological state of our oak trees and could potentially apply to many other forest diseases worldwide."

Credit: 
Swansea University

Making clean hydrogen is hard, but researchers just solved a major hurdle

image: The team's experimental water-splitting apparatus.

Image: 
Cockrell School of Engineering, The University of Texas at Austin

For decades, researchers around the world have searched for ways to use solar power to generate the key reaction for producing hydrogen as a clean energy source -- splitting water molecules to form hydrogen and oxygen. However, such efforts have mostly failed because doing it well was too costly, and trying to do it at a low cost led to poor performance.

Now, researchers from The University of Texas at Austin have found a low-cost way to solve one half of the equation, using sunlight to efficiently split off oxygen molecules from water. The finding, published recently in Nature Communications, represents a step forward toward greater adoption of hydrogen as a key part of our energy infrastructure.

As early as the 1970s, researchers were investigating the possibility of using solar energy to generate hydrogen. But the inability to find materials with the combination of properties needed for a device that can perform the key chemical reactions efficiently has kept it from becoming a mainstream method.

"You need materials that are good at absorbing sunlight and, at the same time, don't degrade while the water-splitting reactions take place," said Edward Yu, a professor in the Cockrell School's Department of Electrical and Computer Engineering. "It turns out materials that are good at absorbing sunlight tend to be unstable under the conditions required for the water-splitting reaction, while the materials that are stable tend to be poor absorbers of sunlight. These conflicting requirements drive you toward a seemingly inevitable tradeoff, but by combining multiple materials -- one that efficiently absorbs sunlight, such as silicon, and another that provides good stability, such as silicon dioxide -- into a single device, this conflict can be resolved."

However, this creates another challenge -- the electrons and holes created by absorption of sunlight in silicon must be able to move easily across the silicon dioxide layer. This usually requires the silicon dioxide layer to be no more than a few nanometers thick, which reduces its effectiveness in protecting the silicon absorber from degradation.

The key to this breakthrough came through a method of creating electrically conductive paths through a thick silicon dioxide layer that can be performed at low cost and scaled to high manufacturing volumes. To get there, Yu and his team used a technique first deployed in the manufacturing of semiconductor electronic chips. By coating the silicon dioxide layer with a thin film of aluminum and then heating the entire structure, arrays of nanoscale "spikes" of aluminum that completely bridge the silicon dioxide layer are formed. These can then easily be replaced by nickel or other materials that help catalyze the water-splitting reactions.

When illuminated by sunlight, the devices can efficiently oxidize water to form oxygen molecules while also generating hydrogen at a separate electrode and exhibit outstanding stability under extended operation. Because the techniques employed to create these devices are commonly used in manufacturing of semiconductor electronics, they should be easy to scale for mass production.

The team has filed a provisional patent application to commercialize the technology.

Improving the way hydrogen is generated is key to its emergence as a viable fuel source. Most hydrogen production today occurs by reacting steam with methane, a process that relies heavily on fossil fuels and produces carbon emissions.

There is a push toward "green hydrogen," which uses more environmentally friendly methods to generate hydrogen. And simplifying the water-splitting reaction is a key part of that effort.

Hydrogen has the potential to become an important renewable resource with some unique qualities. It already plays a major role in significant industrial processes, and it is starting to show up in the automotive industry. Fuel cells look promising for long-haul trucking, and hydrogen technology could be a boon to energy storage, with the ability to store excess wind and solar energy produced when conditions are ripe for them.

Going forward, the team will work to improve the efficiency of the oxygen portion of water-splitting by increasing the reaction rate. The researchers' next major challenge is then to move on to the other half of the equation.

"We were able to address the oxygen side of the reaction first, which is the more challenging part," Yu said, "but you need to perform both the hydrogen and oxygen evolution reactions to completely split the water molecules, so that's why our next step is to look at applying these ideas to make devices for the hydrogen portion of the reaction."

Credit: 
University of Texas at Austin

Robotic neck brace can help analyze cancer treatment impacts

New York, NY--July 19, 2021-- A new robotic neck brace from researchers at Columbia Engineering and their colleagues at Columbia's Department of Otolaryngology may help doctors analyze the impact of cancer treatments on the neck mobility of patients and guide their recovery.

Head and neck cancer was the seventh most common cancer worldwide in 2018, with 890,000 new cases and 450,000 deaths, accounting for 3% of all cancers and more than 1.5% of all cancer deaths in the United States. Such cancer can spread to lymph nodes in the neck, as well as other organs in the body. Surgically removing lymph nodes in the neck can help doctors investigate the risk of spread, but may result in pain and stiffness in the shoulders and neck for years afterward.

Identifying which patients may have issues with neck movement "can be difficult, as the findings are often subtle and challenging to quantify," said Scott Troob, assistant professor of otolaryngology - head and neck surgery and division chief of facial plastic and reconstructive surgery at Columbia University Irving Medical Center. However, successfully targeting what difficulties they might have with mobility can help patients benefit from targeted physical therapy interventions, he explained.

The current techniques and tools that doctors have to judge the range of motion a patient may have lost in their neck and shoulders are somewhat crude, explained Sunil K. Agrawal, a professor of mechanical engineering and rehabilitative and regenerative medicine and director of the ROAR (Robotics and Rehabilitation) Laboratory at Columbia Engineering. They usually either provide unreliable measurements or require too much time and space to set up for use in routine clinical visits.

To develop a more reliable and portable tool to analyze neck mobility, Agrawal and his colleagues drew inspiration from a robotic neck brace they previously developed to analyze head and neck motions in patients with amyotrophic lateral sclerosis (ALS). In partnership with Troob's group, they have now designed a new wearable robotic neck brace. Their study appears July 12 in the journal Wearable Technologies.

The new brace was made using 3D-printed materials and inexpensive sensors. The design of the easy-to-wear device was based on the head and neck movements of 10 healthy individuals.

"This is the first study of this kind where a wearable robotic neck brace has been designed to characterize the full head and neck range of motion," Agrawal said.

In the new study, the researchers used the prototype brace, along with electrical measurements of muscle activity, to compare the neck mobility of five cancer patients before and one month after surgical removal of neck lymph nodes. They found their device could precisely detect changes in patient neck movements during routine clinical visits.
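The kind of before-and-after mobility comparison described here can be sketched as a range-of-motion calculation. This is an illustration only; the axis names and angle traces below are hypothetical, not measurements from the study:

```python
def range_of_motion(angles_deg):
    """Range of motion along one axis: max minus min recorded angle."""
    return max(angles_deg) - min(angles_deg)

def rom_change(pre, post):
    """Per-axis change in range of motion (negative = lost mobility).

    pre, post -- dicts mapping axis name -> list of recorded angles (degrees)
    """
    return {axis: range_of_motion(post[axis]) - range_of_motion(pre[axis])
            for axis in pre}

# Hypothetical angle traces (degrees) before and one month after surgery
pre = {"flexion-extension": [-45, 0, 50], "rotation": [-70, 0, 70]}
post = {"flexion-extension": [-30, 0, 35], "rotation": [-50, 0, 55]}
print(rom_change(pre, post))  # {'flexion-extension': -30, 'rotation': -35}
```

A quantified per-axis deficit like this is what would let clinicians screen patients for physical therapy and track their improvement over time.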

"Use of the sensing neck brace allows a surgeon to screen patients postoperatively for movement difficulty, quantify their degree of impairment, and select patients for physical therapy and rehabilitation," Troob said.

"Patients consistently identify need for rehabilitation and guided exercises after surgery as an unmet need in their medical care," Troob said. "This work will lay the foundation for the appropriate identification of patients for intervention. We additionally hope that through using the neck brace, we will be able to objectively quantify their improvement and develop evidence-based rehabilitative programs."

In the future, the researchers hope to investigate larger groups of patients and use the neck brace to follow patients through physical therapy to develop evidence-based protocols for rehabilitation, Troob said. They also would like to develop similar braces for other surgical sites, such as the forearm, ankle, or knee, he added.

Credit: 
Columbia University School of Engineering and Applied Science