Culture

NUI Galway mathematician publishes article in world's top mathematics journal

An Irish mathematician, Dr Martin Kerin, from the School of Mathematics, Statistics and Applied Mathematics at NUI Galway, has had a research article published in the Annals of Mathematics, widely regarded as the top journal for pure mathematics in the world. The article, written in collaboration with Professor Sebastian Goette of the University of Freiburg and Professor Krishnan Shankar of the University of Oklahoma, resolves a question first asked around 60 years ago on the geometrical properties of seven-dimensional objects which very closely resemble spheres.

The Annals of Mathematics was founded in 1884 and is published by the Department of Mathematics at Princeton University, in cooperation with the Institute for Advanced Study. Only around thirty articles are accepted each year, and Dr Kerin is only the second Irish-based mathematician ever to have an article appear in the journal.

The article deals with the geometry of seven-dimensional exotic spheres. A standard sphere can be thought of as the set of all points at a fixed distance from a given point and is the result of gluing two discs (the hemispheres) together along their boundaries. If the boundaries of the two discs were instead glued together in a more interesting way, one would obtain an exotic sphere: to the casual observer it appears like the standard sphere, but it is a very different object.
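In standard notation (not taken from the article itself), the gluing construction described above can be written as follows: the ordinary sphere glues two discs along the identity map of their common boundary, while a "twisted" sphere glues them along a diffeomorphism.

```latex
% Twisted-sphere construction, in standard topological notation.
\[
  S^n \;=\; D^n \cup_{\mathrm{id}} D^n,
  \qquad
  \Sigma^n_f \;=\; D^n \cup_{f} D^n,
  \qquad f : S^{n-1} \to S^{n-1}\ \text{a diffeomorphism}.
\]
% When f is not isotopic to the identity, \Sigma^n_f can be homeomorphic
% to S^n without being diffeomorphic to it -- an exotic sphere. In
% dimension n = 7 there are 28 such spheres up to orientation-preserving
% diffeomorphism.
```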

The discovery of exotic spheres by John Milnor in the late 1950s resulted in his being awarded the Fields Medal, the highest honour in mathematics. The subsequent quest to understand these spaces led to the development of much of modern topology and geometry. In the 1960s, mathematicians began to wonder how much the geometry, that is the shape, of exotic spheres resembles that of the standard sphere. A common measurement of shape is curvature, the same quantity used in Einstein's general theory of relativity to describe gravity and the shape of the universe. The standard sphere is the basic example of a positively curved space, and previous work had shown that some of the seven-dimensional exotic spheres admit non-negative curvature. In this article, a new construction of the seven-dimensional exotic spheres was discovered, which allows one to conclude that, in fact, all of these spaces admit non-negative curvature.

Dr Kerin said: "It is a tremendous honour, and a dream come true, to have our article appear in the Annals and to see our names listed among many of the greatest mathematicians in history. I am fortunate to have two fantastic collaborators in this project, each of us bringing different strengths to the table. Some of the basic ideas in the paper had been floating around in the back of my mind for around a decade, and we were able to successfully apply these basic ideas to a long-standing open problem. We are very proud of our achievement, but it is possibly even more pleasing that this project has thrown up many other interesting questions. We will likely be busy with this line of research for many years to come."

The article can be found at https://annals.math.princeton.edu/2020/191-3/p03.

Credit: 
University of Galway

New study: Stroke patients are significantly delaying treatment amid COVID-19

FAIRFAX, Va. -- New research published today in the Journal of NeuroInterventional Surgery (JNIS) shows ischemic stroke patients are arriving at hospitals and treatment centers an average of 160 minutes later during the COVID-19 pandemic, as compared with a similar timeframe in 2019. These delays, say stroke surgeons from the Society of NeuroInterventional Surgery (SNIS), are impacting both survival and recovery.

The first study to confirm suspected stroke patient avoidance assessed 710 patients presenting with acute ischemic strokes at 12 stroke centers across six states. It compared the period of February and March 2019 (the baseline period) to February 2020 (the "pre-COVID-19" period) and March 2020 (the "COVID-19" period). In addition to the delay in treatment, the study also found a marked decrease in overall reported stroke patients, from 223 to 167, in these same treatment centers from February to March 2020 with the onset of the COVID-19 pandemic.

Neurointerventionalists with the Get Ahead of Stroke campaign say that in the case of the most serious strokes -- known as emergent large vessel occlusions (ELVOs) -- up to two million brain cells die each minute. The longer patients wait before treatment, the greater the impact the stroke will have -- potentially paralyzing them for life, or worse. Additionally, for every minute lost before receiving appropriate care, there is an associated medical cost of $1,000 for short- and long-term care. A 160-minute delay amounts to the loss of 320 million brain cells and $160,000 in additional medical costs.
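The arithmetic behind those figures can be spelled out directly, using only the per-minute numbers quoted in the article:

```python
# Illustrative arithmetic only, based on the figures quoted above.
CELLS_LOST_PER_MINUTE = 2_000_000   # brain cells lost per minute in an ELVO
COST_PER_MINUTE = 1_000             # USD of short- and long-term care per minute


def delay_impact(delay_minutes):
    """Return (brain cells lost, additional cost in USD) for a treatment delay."""
    return (delay_minutes * CELLS_LOST_PER_MINUTE,
            delay_minutes * COST_PER_MINUTE)


cells, cost = delay_impact(160)  # the average delay reported in the study
print(cells, cost)  # 320000000 160000
```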

"When it comes to stroke treatment, every minute counts. My colleagues and I have been devastated to see patients arriving at the hospital too late for us to help them," said the study's lead author Dr. Clemens Schirmer who is based at Geisinger Medical Center in Danville, Pennsylvania. "Our findings indicate a dire need for public education to address COVID-19 related fears to ensure people with stroke symptoms seek the lifesaving care they need without delay."

"Stroke care teams across the country have implemented protocols to safeguard patients from COVID-19," said Dr. Richard P. Klucznik, president of SNIS. "A stroke will not go away if you ignore it, and delaying treatment could eliminate your chance for recovery. It's critical to pay attention to any symptoms of stroke and call 911 right away."

Get Ahead of Stroke is a national public education and advocacy campaign designed to improve systems of care for stroke patients. Founded in 2016 by the Society of NeuroInterventional Surgery (SNIS), the campaign is currently working with its partners to ensure that the COVID-19 pandemic does not deter stroke patients from getting the lifesaving care they deserve.

Credit: 
Society of NeuroInterventional Surgery

Stem cell treatments 'go deep' to regenerate sun-damaged skin

May 28, 2020 - For a while now, some plastic surgeons have been using stem cells to treat aging, sun-damaged skin. But while they've been getting good results, it's been unclear exactly how these treatments - using adult stem cells harvested from the patient's own body - work to rejuvenate "photoaged" facial skin.

A new microscopic-level study provides the answer: within a few weeks, stem cell treatment eliminates the sun-damaged elastin network and replaces it with normal, undamaged tissue and structures - even in the deeper layers of skin.

Injection of the patient's own mesenchymal stem cells (MSCs) is "appropriate, competent and sufficient to elicit the full structural regeneration of the sun-aged skin," according to the report by Luis Charles-de-Sá, MD, of Universidade Federal do Rio de Janeiro, Brazil, Natale Gontijo-Amorim, MD, and Gino Rigotti, MD, of Verona, Italy, and colleagues. Their study appears in the June issue of Plastic and Reconstructive Surgery®, the official medical journal of the American Society of Plastic Surgeons (ASPS).

The researchers assessed the cellular- and molecular-level effects of MSC treatment on sun-damaged (photoaged) facial skin. All 20 patients in the study, average age 56 years, were scheduled for facelift surgery. The patients lived in northeast Brazil, a region where intense sun exposure is expected.

For each patient, a small sample of fat cells from the abdomen was processed to create patient-specific MSCs. The cultured stem cells were injected under the skin of the face, in front of the ear. When the patients underwent facelift surgery three to four months later, skin samples from the stem cell-treated area were compared to untreated areas.

Histologic and structural analysis under the microscope demonstrated that MSC treatment led to improvement in overall skin structure. Treated areas showed "partial or extensive reversal" of sun-related damage to the skin's stretchy elastin network - the main skin structure affected by photoaging. In the layer immediately beneath the skin surface, the stem cell-treated areas showed regeneration of a new, fully organized network of fiber bundles and remodeling of the dermal extracellular matrix (ECM).

In the deeper skin layer, "tangled, degraded, and dysfunctional" deposits of sun-damaged elastin were replaced by a normal elastin fiber network. These changes were accompanied by molecular markers of processes involved in absorbing the abnormal elastin and development of new elastin.

The findings suggested that stem cells triggered each of the many cellular- and molecular-level pathways involved in skin repair and regeneration. Use of the patient's own fat-derived MSCs "may be a relevant proposal for the anti-ageing action in regeneration of photodamaged human skin," Dr. Charles-de-Sá and colleagues write.

"The researchers conclude that stem-cells can lead to regeneration of sun-aged skin," according to a video commentary by Plastic and Reconstructive Surgery Editor-in-Chief Rod J. Rohrich, MD. In his video, Dr. Rohrich walks viewers through the dramatic changes in the microscopic appearance of skin samples obtained before and after MSC treatment.

"The re-building of structures below the surface translates to true improvements to the strength and appearance of the facial dermis," Dr. Rohrich adds. He emphasizes that patients interested in stem-cell treatment for aging, sun-damaged skin should discuss their options with a Board-certified plastic surgeon.

Credit: 
Wolters Kluwer Health

Tel Aviv University and IDC Herzliya researchers thwart large-scale cyberattack threat

In October 2016, a cyberattack temporarily took down Amazon, Reddit, Spotify and Slack for users along the U.S.'s East Coast. "Mirai," a botnet of hacked security cameras and Internet routers, aimed a flood of junk traffic at the servers of Dyn, a company that provides the global directory (or phonebook) for the web known as the Domain Name System or DNS.

Now researchers at Tel Aviv University and the Interdisciplinary Center (IDC) of Herzliya say that a weakness in the DNS could have brought about an attack of a much larger scale.

In their new study, which will be presented at the USENIX Security Symposium in August 2020, the research group provides new details of a technique that could have allowed a relatively small number of computers to carry out DDoS (distributed denial of service) attacks on a massive scale, overwhelming targets with false requests for information until they were thrown offline. The group was co-led by Prof. Yehuda Afek of TAU's Blavatnik School of Computer Science, the Blavatnik Interdisciplinary Cyber Research Center and the Checkpoint Institute, and Prof. Anat Bremler-Barr, Vice Dean of IDC's Efi Arazi School of Computer Science, together with TAU doctoral student Lior Shafir.

As early as February, the researchers alerted a broad collection of companies responsible for the Internet's infrastructure to their findings. The researchers say those firms, including Google, Microsoft, Cloudflare, Amazon, Dyn (now owned by Oracle), Verisign, and Quad9, have all updated their software to address the problem, as have several makers of the DNS software those companies use.

Through joint research projects, Prof. Afek and Prof. Bremler-Barr have already stopped hundreds of thousands of DDoS cyberattacks over the last two decades, starting with the design of the first DDoS attacks scrubber server at Riverhead Networks, a company they co-founded with Dr. Dan Touitou in 2001.

"The DNS is the essential Internet directory," explains Prof. Bremler-Barr. "In fact, without the DNS, the Internet cannot function. As part of a study of various aspects of the DNS, we discovered to our surprise a very serious breach that could attack the DNS and disable large portions of the network."

The new DDoS technique, which the researchers dubbed "NXNSAttack" (Non-Existent Name Server Attack), takes advantage of vulnerabilities in common DNS software. DNS converts the domain names you click or type into the address bar of your browser into IP addresses. But the NXNSAttack can cause an unwitting DNS server to perform hundreds of thousands of requests in response to just one hacker's request.

"The attack in 2016 used over 1M IoT devices, whereas here we see the same impact with only a few hundred," adds Prof. Afek. "We are talking about a major amplification, a major cyberattack that could disable critical parts of the internet."

The way it works is that when a client machine tries to reach a certain resource on the Internet, it issues a request with the name of the resource to a resolver type DNS server, which is in charge of translating the requested name into an IP address. In order to find the required IP address, the resolver goes into an exchange of messages with several DNS servers of another type, called "authoritative." The authoritative servers redirect the resolver from one to the other, essentially telling it to "go and ask that one" until the resolver reaches an authoritative server that knows the final answer -- the requested IP address.

"To mount the NXNSattack," continues Prof. Afek, "an attacker either acquires for a negligible price or simply penetrates an authoritative server, which would redirect the resolver to send an enormous number of requests to the authoritative servers. This happens while the resolver is trying to answer the particular request that the attacker has crafted.

"The attacker sends such a request multiple times over a long period of time, which generates a tsunami of requests between the DNS servers, which are subsequently overwhelmed and unable to respond to the legitimate requests of actual legitimate users."
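The amplification described above can be modeled with a back-of-the-envelope sketch. The numbers are illustrative (they are not taken from the study): a single crafted query draws a referral listing many bogus name-server names, and the resolver then issues address lookups (commonly one IPv4 and one IPv6 query) for each of them.

```python
# Illustrative model of NXNS-style amplification -- not the researchers' code.
def nxns_amplification(n_fake_ns, lookups_per_ns=2):
    """Queries a resolver emits for ONE attacker request: the original
    delegation exchange plus address lookups for each bogus NS name."""
    return 1 + n_fake_ns * lookups_per_ns


# A referral listing 300 non-existent name servers turns one request
# into hundreds of resolver queries aimed at the victim's servers.
print(nxns_amplification(300))  # 601
```

The design point is that the attacker spends one packet while the resolver does all the follow-up work, which is why a few hundred machines can match the impact of a million-device botnet.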

Mr. Shafir explains further: "A hacker that discovered this vulnerability would have used it to generate an attack targeting either a resolver or an authoritative DNS server in particular locations in the DNS system. In either case, the attack server would be incapacitated and its services blocked, unable to function due to the overwhelming number of requests it got. It would prevent legitimate users from reaching the resources on the Internet they sought."

The research for the study formed part of Mr. Shafir's PhD work; he built a test setup with an authoritative server, on which he simulated an attack on the servers, generating a tsunami of requests between the servers and incapacitating them as a result.

"Our discovery has prevented major potential damage to web services used by millions of users worldwide," concludes Prof. Yehuda Afek. "The 2016 cyberattack, which is considered the greatest in history, knocked down much of the Internet in the U.S. But an attack like the one we now prevented could have been more than 800 times more powerful."

Credit: 
American Friends of Tel Aviv University

Autism severity can change substantially during early childhood

image: UC Davis MIND Institute logo

Image: 
UC Davis Health

During early childhood, girls with autism tend to show greater reduction and less rise in their autism symptom severity than boys with autism, a UC Davis MIND Institute study has found.

Early childhood is a period of substantial brain growth with critical ability for learning and development. It also is the typical time for an initial diagnosis of autism and the best time for early intervention. In the U.S., about 1 in 54 children has been identified with autism spectrum disorder (ASD), with four times as many boys with ASD as girls.

Previous studies reported inconsistent results about changes in autism severity during childhood. The general sense was that the severity of autism at diagnosis would last a lifetime.

The MIND Institute's study, published May 14 in the Journal of Autism and Developmental Disorders, evaluated changes in symptom severity in early childhood and the potential factors associated with those changes. It included 125 children (89 boys and 36 girls) with ASD from the Autism Phenome Project (APP), a longitudinal project in its 14th year at the MIND Institute. The children received substantial community-based autism intervention throughout their childhood.

The researchers used a 10-point severity measure called the ADOS Calibrated Severity Score (CSS) derived from the Autism Diagnostic Observation Schedule (ADOS), the gold standard assessment tool in autism research. They computed a severity change score for participants as the difference between their ADOS CSS scores at age 6 and at age 3. A change of two points or more was considered a significant change in symptom severity.
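The scoring rule described above can be sketched as follows. The scores in the example are hypothetical, not data from the study; only the two-point threshold and the age-6-minus-age-3 difference come from the article.

```python
# Sketch of the severity-change classification described in the article.
# Example scores are hypothetical.
def classify_change(css_age3, css_age6, threshold=2):
    """Classify symptom-severity change between ages 3 and 6 on the ADOS CSS."""
    change = css_age6 - css_age3
    if change <= -threshold:
        return "Decreased Severity"
    if change >= threshold:
        return "Increased Severity"
    return "Stable Severity"


print(classify_change(7, 4))  # Decreased Severity
print(classify_change(5, 6))  # Stable Severity
print(classify_change(3, 6))  # Increased Severity
```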

Change in severity of autism symptoms and optimal outcome

The study classified participants based on their severity change score into a Decreased Severity Group (28.8%), a Stable Severity Group (54.4%) and an Increased Severity Group (16.8%). One key finding was that children's symptom severity can change with age. In fact, children can improve and get better.

"We found that nearly 30% of young children have less severe autism symptoms at age 6 than they did at age 3. In some cases, children lost their autism diagnoses entirely," said David Amaral, a distinguished professor of psychiatry and behavioral sciences, faculty member at the UC Davis MIND Institute and senior author on the study.

"It is also true that some children appear to get worse," Amaral said. "Unfortunately, it is not currently possible to predict who will do well and who will develop more severe autism symptoms and need different interventions."

Optimal outcome is a standard achieved when someone previously diagnosed with ASD no longer meets autism diagnostic criteria due to loss of autism symptoms. In this study, seven participants (four girls and three boys) had an ADOS CSS below the ASD cutoff at age 6, potentially indicating optimal outcome. Children showing decreasing symptom severity had better adaptive skills in multiple domains compared to those in the stable or increased severity groups.

Girls with autism and camouflaging as a coping strategy

Girls and boys can present with different manifestations of autism symptoms. Girls might show better developmental results than boys in cognition, sociability and practical communication skills.

"We found that girls with autism decrease in severity more than boys and increase in severity less than boys during early childhood," said Einat Waizbard-Bartov, a graduate researcher at the MIND Institute and the first author of the paper.

One possible explanation for this difference is the girls' ability to camouflage or hide their symptoms, according to Waizbard-Bartov. Camouflaging the characteristics of autism includes masking one's symptoms in social situations. This coping strategy is a social compensatory behavior more prevalent in females diagnosed with ASD compared to males with ASD across different age ranges, including adulthood.

"The fact that more of the girls appear to have decreased in autism severity may be due to an increasing number of girls compared to boys who, with age, have learned how to mask their symptoms," Waizbard-Bartov said. "We will explore this possibility in future studies."

IQ, initial severity and change in autism severity

The study also found that IQ had a significant relationship with change in symptom severity. Children with higher IQs were more likely to show a reduction in ASD symptoms.

"IQ is considered to be the strongest predictor of symptom severity for children with autism," Waizbard-Bartov said. "As IQ scores increased from age 3 to age 6, symptom severity levels decreased."

The researchers could not identify a relationship between early severity levels and future symptom change. Surprisingly, the group of children with increased symptom severity at age 6 showed significantly lower severity levels at age 3, and their severity scores were less variable than the other groups.

The study raises several issues for further investigation, such as the relationships between IQ, initial severity level, and type and intensity of intervention received, in relation to symptom change over time.

Credit: 
University of California - Davis Health

In planet formation, it's location, location, location

image: The brilliant tapestry of young stars flaring to life resembles a glittering fireworks display in this Hubble Space Telescope image. The sparkling centerpiece of this fireworks show is a giant cluster of thousands of stars called Westerlund 2. The cluster resides in a raucous stellar breeding ground known as Gum 29, located 20,000 light-years away from Earth in the constellation Carina. Hubble's Wide Field Camera 3 pierced through the dusty veil shrouding the stellar nursery in near-infrared light, giving astronomers a clear view of the nebula and the dense concentration of stars in the central cluster. The cluster measures between six light-years and 13 light-years across.

Image: 
Credits: NASA, ESA, the Hubble Heritage Team (STScI/AURA), A. Nota (ESA/STScI) and the Westerlund 2 Science Team

Astronomers using NASA's Hubble Space Telescope are finding that planets have a tough time forming in the rough-and-tumble central region of the massive, crowded star cluster Westerlund 2. Located 20,000 light-years away, Westerlund 2 is a unique laboratory to study stellar evolutionary processes because it's relatively nearby, quite young, and contains a large stellar population.

A three-year Hubble study of stars in Westerlund 2 revealed that the precursors to planet-forming disks encircling stars near the cluster's center are mysteriously devoid of large, dense clouds of dust that in a few million years could become planets.

However, the observations show that stars on the cluster's periphery do have the immense planet-forming dust clouds embedded in their disks. Researchers think our solar system followed this recipe when it formed 4.6 billion years ago.

So why do some stars in Westerlund 2 have a difficult time forming planets while others do not? It seems that planet formation depends on location, location, location. The most massive and brightest stars in the cluster congregate in the core, which is verified by observations of other star-forming regions. The cluster's center contains at least 30 extremely massive stars, some weighing up to 80 times the mass of the Sun. Their blistering ultraviolet radiation and hurricane-like stellar winds of charged particles blowtorch disks around neighboring lower-mass stars, dispersing the giant dust clouds.

"Basically, if you have monster stars, their energy is going to alter the properties of the disks around nearby, less massive stars," explained Elena Sabbi, of the Space Telescope Science Institute in Baltimore and lead researcher of the Hubble study. "You may still have a disk, but the stars change the composition of the dust in the disks, so it's harder to create stable structures that will eventually lead to planets. We think the dust either evaporates away in 1 million years, or it changes in composition and size so dramatically that planets don't have the building blocks to form."

The Hubble observations represent the first time that astronomers analyzed an extremely dense star cluster to study which environments are favorable to planet formation. Scientists, however, are still debating whether bulky stars are born in the center or whether they migrate there. Westerlund 2 already has massive stars in its core, even though it is a comparatively young, 2-million-year-old system.

Using Hubble's Wide Field Camera 3, the researchers found that of the nearly 5,000 stars in Westerlund 2 with masses between 0.1 to 5 times the Sun's mass, 1,500 of them show fluctuations in their light as the stars accrete material from their disks. Orbiting material clumped within the disk would temporarily block some of the starlight, causing brightness fluctuations.

However, Hubble detected the signature of such orbiting material only around stars outside the cluster's packed central region. The telescope witnessed large drops in brightness for as much as 10 to 20 days around 5% of the stars before they returned to normal brightness. They did not detect these dips in brightness in stars residing within four light-years of the center. These fluctuations could be caused by large clumps of dust passing in front of the star. The clumps would be in a disk tilted nearly edge-on to the view from Earth. "We think they are planetesimals or structures in formation," Sabbi explained. "These could be the seeds that eventually lead to planets in more evolved systems. These are the systems we don't see close to very massive stars. We see them only in systems outside the center."

Thanks to Hubble, astronomers can now see how stars are accreting in environments that are like the early universe, where clusters were dominated by monster stars. So far, the best known nearby stellar environment that contains massive stars is the starbirth region in the Orion Nebula. However, Westerlund 2 is a richer target because of its larger stellar population.

"Hubble's observations of Westerlund 2 give us a much better sense of how stars of different masses change over time, and how powerful winds and radiation from very massive stars affect nearby lower-mass stars and their disks," Sabbi said. "We see, for example, that lower-mass stars, like our Sun, that are near extremely massive stars in the cluster still have disks and still can accrete material as they grow. But the structure of their disks (and thus their planet-forming capability) seems to be very different from that of disks around stars forming in a calmer environment farther away from the cluster core. This information is important for building models of planet formation and stellar evolution."

This cluster will be an excellent laboratory for follow-up observations with NASA's upcoming James Webb Space Telescope, an infrared observatory. Hubble has helped astronomers identify the stars that have possible planetary structures. With Webb, researchers can study which disks around stars are not accreting material and which disks still have material that could build up into planets. This information on 1,500 stars will allow astronomers to map a path on how star systems grow and evolve. Webb also can study the chemistry of the disks in different evolutionary phases and watch how they change, and help astronomers determine what influence environment plays in their evolution.

NASA's Nancy Grace Roman Space Telescope, another planned infrared observatory, will be able to perform Sabbi's study on a much larger area. Westerlund 2 is just a small slice of an immense star-formation region. These vast regions contain clusters of stars with different ages and different densities. Astronomers could use Roman Space Telescope observations to start to build up statistics on how a star's characteristics, like its mass or outflows, affect its own evolution or the nature of stars that form nearby. The observations could also provide more information on how planets form in tough environments.

Credit: 
NASA/Goddard Space Flight Center

American Indians and Alaska Natives have disproportionately higher rates of CVD

DALLAS, May 28, 2020 -- Type 2 diabetes (T2D) affects American Indians and Alaska Natives at approximately three times the rate of white Americans and is closely linked to the disproportionately high rates of cardiovascular diseases such as heart attacks and strokes, according to the American Heart Association Scientific Statement "Cardiovascular Health in American Indians and Alaska Natives," published today in the Association's flagship journal Circulation.

The statement provides an overview for the general public, health care providers and policy makers about the major cardiovascular challenges faced by this population group who have one of the highest rates of cardiovascular disease in the United States.

American Indians and Alaska Natives develop cardiovascular diseases at earlier ages than white Americans. Heart disease rates are approximately 50% higher among the 5.2 million Americans who self-identify as American Indian and Alaska Native, compared to white Americans. And, more than one-third of deaths attributed to cardiovascular disease occur before the age of 65.

Obesity, a major contributor to T2D, is an epidemic among American Indians and Alaska Natives - it is estimated that 30-40% of American Indians have obesity. The Strong Heart Study, cited in the statement, found that some of the risk for obesity and diabetes is genetically inherited. Given this propensity, it is important for health care professionals to assess coronary heart disease risk and continually monitor for coronary heart disease in American Indians and Alaska Natives.

"There are urgent cardiovascular health risks for American Indians and Alaska Natives that health care professionals and policy makers should not ignore. We strongly encourage patients, health care professionals and most importantly, community leaders to take steps to prevent and fight cardiovascular disease," said Khadijah Breathett, M.D., M.S., FAHA, chair of the writing committee for the Scientific Statement, assistant professor of medicine in the division of cardiology at the University of Arizona, and advanced heart failure and transplant cardiologist at Banner - University Medical Center in Tucson, Arizona.

The social determinants of health facing American Indians and Alaska Natives are longstanding and complex, and access to health care is limited. In 2017, 19% did not have health insurance. The federal health program, the Indian Health Service, provides health care for 1.6 million American Indians and Alaska Natives. However, that accounts for less than one-third of the total American Indian and Alaska Native population in the U.S.

Historical events such as displacement, war, infectious disease, unfulfilled agreements and decimation of tribal lands by the U.S. government in the 1800s created a cultural mistrust among many American Indians and Alaska Natives. Being forced to move from their native lands and living in rural areas without access to proper health care contribute to the issue. Currently, 21% of American Indians live below the federal poverty line.

"Racial and ethnic minority groups in the U.S. have suffered from inequitable policies for hundreds of years. These policies have contributed to mistrust in the traditional health care system. The most effective way to create change is through restructuring of inequitable policies and empowerment of communities," said Breathett.

The statement notes that the most effective interventions by health care professionals start by seeking the support of community leaders, building a relationship with the community, assessing barriers and resources for individuals, creating a method of public communication and developing an action plan for progress.

Nearly 32% of American Indians and Alaska Natives use tobacco, a rate almost twice as high as other ethnic populations in the U.S. Anti-tobacco intervention methods, such as media campaigns and modified smoking policies, are less effective than culturally adapted methods of community reinforcement.

In addition to individual risk factors, there are others that need to be addressed at the policy level, including exposure to toxic metals due to groundwater contamination, which is particularly high in the Midwest and Southwest. Exposure to toxic metals such as arsenic and cadmium is associated with increased development of atherosclerosis as well as increased total cholesterol levels in multiple American Indian populations. Atherosclerosis is the slow narrowing of arteries that underlies most heart attacks and strokes.

Physician bias influences healthcare delivery among racial and ethnic minorities. Experiences of discrimination and microaggressions in the healthcare setting have correlated with worse physical and mental health among American Indians with chronic diseases. Implicit bias training may help reduce the impact of bias in decision-making between healthcare professionals and American Indian and Alaska Native patients.

Shared decision-making between a patient and their health care professional has been a longstanding tradition among some American Indians and Alaska Natives. Many participate in community-talking circles in which everyone in the group has the right to provide uninterrupted perspectives. Talking circles have been instrumental in providing education and empowering the American Indian and Alaska Native community to manage T2D.

The greatest risk factor for cardiovascular disease among American Indians and Alaska Natives is T2D, which is exacerbated by the social determinants of health experienced by this group. Community-based, culturally appropriate interventions are necessary to reduce T2D risk by encouraging physical activity and weight loss; controlling cardiovascular disease risk factors, such as high cholesterol and high blood pressure; and promoting tobacco cessation.

"Health care providers must individualize care by identifying the individual patient's needs and matching them to the appropriate resources such as community-based interventions. We encourage patients, healthcare professionals and community stakeholders to learn the risk factors for cardiovascular disease and take steps to fight cardiovascular disease before it starts. We are all in this together," said Breathett.

Credit: 
American Heart Association

The death marker protein cleans up your muscles after exercise

image: A single, intense 10-minute bicycle ride prompts a clean-up of muscles as the protein Ubiquitin tags onto worn-out proteins, causing them to be degraded. This prevents the accumulation of damaged proteins and helps keep muscles healthy.

Image: 
Photo: Chris and Simon Branford.

Researchers at the University of Copenhagen's Department of Nutrition, Exercise and Sports have demonstrated that physical activity prompts a clean-up of muscles as the protein Ubiquitin tags onto worn-out proteins, causing them to be degraded. This prevents the accumulation of damaged proteins and helps keep muscles healthy.

Physical activity benefits health in many ways, including the building and maintenance of healthy muscles, which are important for our ability to move about normally as well as for the vital role of regulating metabolism. Because most of the carbohydrate that we eat is stored in muscle, our muscles are extremely important for regulating metabolism.

An intense bike ride boosts Ubiquitin activity

Maintaining muscular function is essential. Part of our ability to do so depends upon proteins - the building blocks of muscles - being degraded when worn-out and eliminated in a kind of clean up process that allows them to be replaced by freshly synthesized proteins.

Now, Danish researchers - in collaboration with research colleagues at the University of Sydney, Australia - have demonstrated that a single, intense, roughly 10-minute bicycle ride results in a significant increase in the activity of Ubiquitin, the 'death marker protein', and a subsequent intensification of the targeting and removal of worn-out proteins in muscles. This paves the way for an eventual build-up of new proteins:

"Muscles eliminate worn-out proteins in several ways," explains Professor Erik Richter of the Section for Molecular Physiology at UCPH's Department of Nutrition, Exercise and Sports. He continues:

"One of these methods is when Ubiquitin, "the death-marker", tags a protein in question. Ubiquitin itself is a small protein. It attaches itself to the amino acid Lysine on worn-out proteins, after which the protein is transported to a Proteasome, which is a structure that gobbles up proteins and spits them out as amino acids. These amino acids can then be reused in the synthesis of new proteins. As such, Ubiquitin contributes to a very sustainable circulation of the body's proteins."

Why physical activity is healthy

While extensive knowledge has been accumulated about how muscles regulate the build-up of new proteins during physical training, much less is known about how muscle contractions and exercise serve to significantly clean-up worn-out proteins. According to Professor Bente Kiens, another project participant: "The important role of Ubiquitin for 'cleaning-up' worn-out proteins in connection with muscular activity was not fully appreciated. Now we know that physical activity increases Ubiquitin tagging on worn-out proteins."

Professor Jørgen Wojtaszewski, a third Danish project participant, explains that their findings serve to strengthen the entire foundation for the effect of physical activity: "Basically, it explains part of the reason why physical activity is healthy. The beauty is that muscle use, in and of itself, is what initiates the processes that keep muscles 'up to date', healthy and functional."

There remains a great amount of knowledge that would be interesting to delve deeper into, as very little is known about how different training regimens, gender, diet and genetic background impact the process and thus, the possibility of influencing optimal muscle function.

Credit: 
University of Copenhagen - Faculty of Science

AMP releases preliminary results of nationwide SARS-CoV-2 molecular testing survey

ROCKVILLE, Md. - May 28, 2020 - The Association for Molecular Pathology (AMP), the premier global, molecular diagnostic professional society, today released the preliminary results of its April 2020 SARS-CoV-2 Testing Survey for clinical laboratories. The anonymous survey was created and administered to document clinical laboratory efforts and experiences. The results will be used to help inform future advocacy and clinical practice programs related to pandemic responses.

AMP's 67-question survey assessed many important aspects of SARS-CoV-2 molecular diagnostic testing, including methodology, performance, capacity, supply chain, regulatory, and reporting requirements. The preliminary results released today include feedback from 118 representatives of US-based academic medical centers, commercial reference laboratories and community hospitals. 85% of these respondents are currently offering SARS-CoV-2 testing to patients, while another 10% are currently in the test validation phase. 90% of the laboratories recognize the need to increase diagnostic testing capacity further, and they are working hard to make this happen in the next few months. However, more than 70% of these laboratories have experienced supply chain interruptions that have resulted in significant delays, in many cases forcing them to validate at least three different diagnostic testing methods simultaneously in case the supply of reagents or materials runs out. These supply shortages have included everything from RNA extraction kits, primers, probes, and enzymes to physical sample collection materials, such as the swabs and containers for storage and transportation.

"Clinical laboratories across the country are working hard and being extremely resourceful in order to provide diagnostic SARS-CoV-2 testing to Americans, with the majority running at full staffing/testing capacity seven days a week," said Karen E. Weck, MD, AMP President and Professor of Pathology and Laboratory Medicine, Professor of Genetics and Director of Molecular Genetics and Pharmacogenomics at University of North Carolina Chapel Hill. "However, AMP members know more testing is needed as the country begins to reopen. We are continuing to deploy multiple testing methodologies to overcome supply shortages, increase capacity and improve turnaround times."

Based on the common themes found in the survey results, AMP is recommending that federal, state and local governments:

1. Reassess type and location of SARS-CoV-2 testing services needed: In order to provide acute care, safely reopen businesses and reinvigorate the economy, there should be a reassessment of what type of testing is needed and where.

2. Reprioritize supply allocations based on clinical testing needs, which could change over time: Depending upon the prevalence of SARS-CoV-2 in a community, there may be a shift in testing methodology and related supply needs over time. The need for testing supplies designed for acute care, surveillance, high-throughput, and other clinical needs should be monitored widely to provide real-time feedback to agencies to support data-driven supply allocations.

3. Increase transparency, communication, and real-time transmission of information between laboratories and suppliers (commercial manufacturers and government): There is a need for laboratories to understand, in real time, resource availability and reagent and supply quantities.

4. Coordinate in real time among laboratories to leverage moments of excess capacity: Based on data regarding testing capacity and demand, there may be an opportunity to coordinate regionally so that any excess test capacity is leveraged and samples get processed as quickly as possible.

5. Standardize agency reporting format and processes for reportable infectious diseases during a pandemic: Complying with multiple agency reporting requirements with variable formats has been burdensome to the clinical laboratories.

AMP will continue to review and analyze the results of the survey as part of its ongoing commitment to share expertise, assess laboratory needs, engage key stakeholders and provide recommendations for improving future pandemic responses and ensuring more patients have access to high-quality testing procedures.

Credit: 
Association for Molecular Pathology

New research points to treatment for COVID-19 cytokine storms

image: This is a microscopic photo of a blood smear from a transgenic mouse that mimics the human immune disorder, secondary HLH (hemophagocytic lymphohistiocytosis). The image shows macrophage immune cells (indicated by arrow) flooding healthy tissue cells during a cytokine storm caused by HLH in a very similar fashion to what occurs in patients with severe COVID-19 disease. Researchers reporting in the Journal of Allergy and Clinical Immunology say the HLH data were a factor in a decision to test the anti-inflammatory drug ruxolitinib (used to treat secondary HLH) in patients with COVID-19 in China.

Image: 
Cincinnati Children's

CINCINNATI - A transgenic mouse developed at Cincinnati Children's to model the deadly childhood immune disease HLH (hemophagocytic lymphohistiocytosis) may play a key role in saving lives during the COVID-19 virus pandemic.

One of the genetically engineered mouse strain's inventors--Cincinnati Children's cancer pathologist Gang Huang, PhD--is co-investigator on a small clinical trial that successfully tested a drug used to treat HLH (ruxolitinib), dramatically reversing respiratory and multi-system inflammation in severely ill COVID-19 patients. Data from the Phase II clinical study is published in the Journal of Allergy and Clinical Immunology.

The study involved 43 hospitalized patients diagnosed with severe COVID-19 between February 9 and February 28 in Wuhan, China, believed to be ground zero for the pandemic. The multi-center study was led by Jianfeng Zhou, MD, PhD, Department of Hematology at Tongji Hospital, Tongji Medical College and Huazhong University of Science and Technology in Wuhan.

Zhou is a longtime collaborator of Huang and colleagues at the Cincinnati Children's HLH Center of Excellence, part of the Cancer and Blood Diseases Institute.

Ruxolitinib Shows Signs of Benefit

Patients in the treatment group were randomly assigned to receive two daily 5 mg oral doses of the anti-inflammatory drug plus the standard of care treatment for COVID-19. A randomly selected control group of 21 patients received a placebo along with the standard of care treatment.

"Ruxolitinib recipients had a numerically faster clinical improvement," study authors write in their report. "Significant chest CT improvement, a faster recovery from lymphopenia and favorable side-effect profile in ruxolitinib group were encouraging and informative to future trials to test efficacy of ruxolitinib in a larger population."

Patients treated with ruxolitinib saw a shorter median time to clinical improvement compared to the control group. Researchers reported that 90 percent of ruxolitinib patients showed CT scan improvement within 14 days, compared with 61.9 percent of patients from the control group. Three patients in the control group eventually died of respiratory failure. All the severely ill patients who received ruxolitinib survived.

More clinical testing of the drug is needed. A larger Phase III clinical trial, RUXCOVID, by Incyte and Novartis is now testing the drug in up to 400 severely ill COVID-19 patients, according to Huang. Preliminary clinical data from the study is expected during the summer, he added.

"This is the first therapy we know of that appears to work effectively to quiet the cytokine storm and inflammation in severe COVID-19 disease, and there are no significant toxicities for patients, who take the drug as two pills a day," Huang said. "This is critical until we can develop and distribute enough effective vaccine to help prevent people from becoming infected."

Calming the 'Cytokine Storm'

The so-called cytokine storm that inundates the bodies of severely ill COVID-19 patients with inflammatory cells produced by the immune system is a common feature of children battling secondary HLH, which happens in patients where initial HLH treatment has not worked. Huang, who along with a large portion of the world's scientific community was busy trying to study and find solutions to COVID-19, noticed this common clinical feature of both illnesses.

He also noticed that severe COVID-19 disease clinical manifestations are very similar to those seen in transgenic laboratory mice created to faithfully mimic human secondary HLH in the lab. That preclinical laboratory research, some of it in collaboration with the researchers in Wuhan, China, helped identify the drug ruxolitinib for treating secondary HLH. The anti-inflammatory drug is also used to treat other blood diseases including leukemia.

"I approached our research colleagues in Wuhan and explained our observations and recommended this drug be tested to quiet the cytokine storm in the multi-system inflammation in patients with severe COVID-19 disease," Huang said. "The disease was spreading very rapidly and many people were dying. We believed the existing clinical drug would help save lives. So, we worked to push it forward before an effective vaccine is available for everyone."

Huang said the work with colleagues in China was completed on a compressed timeframe as scientists around the world went on high alert to battle the pandemic in January. During their work, Huang and researchers in China found other clinical studies involving other diseases where ruxolitinib also had worked well at quieting inflammation, and testing on COVID-19 patients proceeded.

Credit: 
Cincinnati Children's Hospital Medical Center

Public option would lower health premiums, but not greatly expand coverage

Offering a government-sponsored health plan with publicly determined payment rates to people who buy their own insurance could lower the cost of premiums, but on its own it is unlikely to substantially increase the overall number of people with coverage, according to a new RAND Corporation study.

Modeling four scenarios for adding a public option for individual coverage available nationwide, researchers found that premiums for public plans could be 10% to 27% lower than private insurance plans because of lower provider payment rates in the public option.

A public option had much less impact on boosting the number of people with insurance. Under three of the scenarios, the number of uninsured people fell 3% to 8%, while the number of uninsured declined marginally under a fourth scenario studied.

The analysis also found that lower-income people are less likely to benefit from the public option because of the tax credit structure of the federal Affordable Care Act.

"Since higher-income people pay the full cost of insurance on the individual market, they could receive substantial savings under a public option," said Jodi Liu, the study's lead author and a policy researcher at RAND, a nonprofit research organization. "But policymakers should consider how the design of a public option could decrease the tax credits lower-income enrollees receive under the ACA."

State and federal lawmakers have expressed interest in creating a public health insurance option, broadly defined as an insurance plan for individuals under age 65 that provides access to publicly determined payment rates.

Four different bills that would create a federal public option were introduced in Congress in 2019, and several Democratic presidential candidates (including presumptive nominee Joe Biden) included public options in their health reform platforms.

RAND researchers used a microsimulation approach to estimate how the addition of a federal public option for individual market insurance could affect overall insurance coverage, individual market enrollment and premiums for individual market enrollees. About 14 million people buy plans on the individual market each year.

The analysis considered four designs that vary based on what rates providers are paid, whether the public option is considered "on-Marketplace" or "off-Marketplace" coverage that affects premium tax credit amounts under the ACA, and whether premium tax credits are available to higher-income individuals. Researchers assumed that the public option for individual market insurance would offer bronze, silver, gold and platinum tiers of actuarial-based coverage.

Payment rates were set at 79 percent of the current commercial rates (between Medicaid and commercial rates) for two scenarios and at 93 percent of commercial rates (between Medicare and commercial rates) for the other two scenarios.

The analysis assumes that providers are willing to contract at lower payment rates and that adequate provider networks can be formed. (The work was conducted prior to the coronavirus pandemic and does not assess its impact on participating providers and payment rates.)

The findings suggest that most enrollees in the individual market would switch from private plans to public plans in these scenarios.

A relatively small pool of sicker and more-expensive people would remain enrolled in private plans, because the model assumes that higher spenders would have a lower preference for public plans due to real or perceived access barriers related to lower payments to providers. As a result, researchers found that some individual market premiums increased when they modeled the public plan.

Federal spending fell under all of the scenarios, with savings ranging from $7 billion to $24 billion annually. The savings occur as premium tax credit amounts decline, because the silver-level public option becomes the benchmark for setting subsidy levels in some scenarios and because of changes to the risk pool and competition effects.

To gauge the welfare effects on individuals, researchers estimated the number of people who would be "better off" (becoming newly insured or paying less for an equivalent or more generous plan) or "worse off" (becoming uninsured or paying more for an equivalent or less generous plan) in each scenario. Across the public option scenarios analyzed, 5.1 million to 12.1 million people were better off, and 2.2 million to 6.8 million people were worse off.

Those who were worse off had incomes below 400 percent of the federal poverty level. Because tax credits were tied to the public silver premium in most of the RAND scenarios, individuals' tax credits fell when the public plan was introduced. As a result, for many subsidized individuals, the introduction of the public option did not reduce out-of-pocket premiums.

Researchers say that one option to ease this burden would be to reinvest the cost savings from a public option into programs that would benefit lower-income people, such as providing a larger tax credit to lower-income people who buy individual health insurance policies.

Credit: 
RAND Corporation

Unique 'home built' device provides fast disease analysis in kidneys affected by diabetes

WASHINGTON -- The amount of scarring in kidneys damaged by diabetes or acute injury is a key factor in determining treatment. But it has not been possible, using traditional techniques, to quickly and accurately assess how far this kind of wounding extends within the organ. Now, however, a physicist and chemist at Georgetown University Medical Center has shown that a microscope he began developing with colleagues at the University of California-Irvine can provide an immediate answer.

His findings, published in the journal Kidney International, suggest that, given further successful testing, the device could be adopted in an operating suite using biopsies, usually taken with a needle, from a patient's kidney. These biopsies, which don't need to be stained, are used to score the degree of tubulointerstitial fibrosis -- progressive scarring due to a failed wound-healing process in kidney tissue after chronic, sustained injury. This score can then be combined with results from traditional pathology to help physicians assess long-term prognosis.

While this kind of approach was developed for cancer prognosis, this study represents "to our knowledge, the first expansion of this type of test to understand human kidney disease and specifically to characterize disease states," says the study's lead investigator, Suman Ranjit, PhD, assistant professor in Georgetown's Department of Biochemistry and Molecular & Cellular Biology.

The advanced microscope, called DIVER, uses the phasor approach to fluorescence lifetime imaging (FLIM). Simply put, the devices work together to examine the types of molecules present in an image of the tissue sample captured by the microscope. The method uses endogenous fluorescence emitted naturally by biomolecules, measuring how long different molecules stay in the excited state (the fluorescence lifetime). The results are pseudo-color mapped, with each color representing specific types and degrees of molecular content that reveal changes in the structure and biology of the organ linked to disease severity.
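The release names the phasor approach to FLIM without spelling out the mathematics. As a rough illustration (not the DIVER implementation), the first-harmonic phasor transform maps each pixel's fluorescence decay to a pair of Fourier coordinates (g, s); for a single-exponential decay the lifetime can then be read back as s/(g·ω). All numbers below are hypothetical:

```python
import numpy as np

def phasor(decay, dt, omega):
    """First-harmonic phasor coordinates (g, s) of a fluorescence decay curve."""
    t = np.arange(len(decay)) * dt
    total = decay.sum()
    g = (decay * np.cos(omega * t)).sum() / total
    s = (decay * np.sin(omega * t)).sum() / total
    return g, s

# Simulated mono-exponential decay with lifetime tau = 2.5 ns, sampled over 50 ns
dt = 0.01                      # ns per time bin
tau = 2.5                      # ns, the "true" lifetime
t = np.arange(0, 50, dt)
decay = np.exp(-t / tau)

omega = 2 * np.pi * 0.08       # angular frequency in rad/ns for an 80 MHz repetition rate
g, s = phasor(decay, dt, omega)

# For a single-exponential decay, g = 1/(1+(wt)^2) and s = wt/(1+(wt)^2),
# so the lifetime is recoverable as s / (g * omega)
tau_est = s / (g * omega)
```

Real tissue mixes many fluorophores, so each pixel's (g, s) point falls inside the "universal circle" rather than on it; mapping those positions to pseudo-colors is what links molecular composition to disease severity in the article's description.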

"Using this method, numerous biopsies from a kidney can be examined quickly. The process is automated, eliminating operator bias," says Ranjit. In contrast, traditional biopsy analysis is often an hours-long process of staining and pathological examination that happens outside the operating room.

This study examined frozen human kidney biopsy tissues from patients with diabetes, obtained from the University of Chicago. Researchers found that the new method closely replicated findings obtained by pathology analyses.

Ranjit started to work on this clinical project using what he calls his "home built" device while a postdoctoral scholar at UC Irvine's Laboratory for Fluorescence Dynamics, but completed it at Georgetown. He has now applied for federal grants that will help him "shrink" this idea into a small handheld device that could be used in operating rooms, and to further improve automation and imaging speed.

When the device is perfected, Ranjit says it may be possible to use it for assessing disease state in many organs, a process that now depends on cumbersome pathological analysis.

"When widely used, this imaging technique will enable physicians to detect early fibrosis, or scarring, in tissues as well as determine the health of kidney, liver and other tissues that are being considered for transplantation," says co-author Moshe Levi, MD, Interim Dean for Research at GUMC and professor of biochemistry and molecular & cellular biology.

Credit: 
Georgetown University Medical Center

Summer forage capabilities of tepary bean and guar in the southern great plains

image: A field view of tepary bean at 55 days after planting at the USDA-ARS Grazinglands Research Laboratory, El Reno, Oklahoma.

Image: 
Courtesy of Dr. Gurjinder Baath, Oklahoma State University

Perennial warm-season grasses do not provide high-quality forage during mid to late summer, which prevents yearling stocker cattle from maintaining high rates of growth in the Southern Great Plains. This shortage has led researchers to continually search for annual legumes that can provide sufficient amounts of nutritious forage from August through September.

In a recently published article in the Agronomy Journal, researchers from the USDA-ARS Grazinglands Research Laboratory and Oklahoma State University document the performance of tepary bean and guar as potential summer forages under the growing conditions of the Southern Great Plains. The two-year field experiment compared the productivity, leaf-to-stem ratios, and chemical composition of forage produced by three cultivars each of tepary bean and guar, with soybean used as a control.

Results showed that tepary bean consistently offered faster growth and better forage yields with a higher leaf-to-stem ratio. In contrast, guar maintained a low leaf-to-stem ratio, and soybean possessed the least digestible stems in forage biomass among the tested legumes.

The article suggests tepary bean as an alternate forage option to soybean for producers and encourages further research to define management strategies for growing tepary bean in extensive production settings.

Credit: 
American Society of Agronomy

In stressed ecosystems Jurassic dinosaurs turned to scavenging, maybe even cannibalism

image: Theropod cannibals in a stressed Late Jurassic ecosystem

Image: 
Brian Engh

Among dinosaurs of ancient Colorado, scavenging and possibly cannibalism were responses to a resource-scarce environment, according to a study published May 27, 2020 in the open-access journal PLOS ONE by Stephanie Drumheller of the University of Tennessee, Knoxville, and colleagues.

Tooth marks on fossil bones can be excellent evidence of ancient feeding habits, but such marks left by carnivorous dinosaurs (theropods) are typically very rare. The Mygatt-Moore Quarry of Colorado, dating back to the late Jurassic Period around 150 million years ago, is an exception. In this study, Drumheller and colleagues found that nearly 29% of 2,368 examined bones from the quarry bore the bites of theropod dinosaurs.

Examining the damage left by the serrated edges of the dinosaurs' teeth, the authors infer that the bulk of these bites were most likely made by the large predator Allosaurus, the most common theropod found in the quarry. While most of the bites were found on the bones of herbivorous dinosaurs, about 17% were bites that theropods had made on the bones of other theropods--and around half of these bites targeted less nutritious body parts, suggesting the action of scavengers who arrived after the best bits had decomposed or been eaten by earlier carnivores.

The authors suggest this unusual assemblage is the result of an ancient environment where carcasses were buried slowly, providing ample time for scavengers to find them. The high incidence of scavenging may be the result of a stressed ecosystem whose large predators suffered a scarcity of food. Additionally, since many of the presumed Allosaurus bite marks were found on the bones of other Allosaurus, these might represent rare evidence of dinosaur cannibalism, and the first such evidence for the behavior in this famous Jurassic predator.

Dr. Drumheller adds: "Big theropods like Allosaurus probably weren't particularly picky eaters, especially if their environments were already strapped for resources. Scavenging and even cannibalism were definitely on the table."

Credit: 
PLOS

An analysis of psychological meta-analyses reveals a reproducibility problem

Meta-analysis research studies in psychology aren't always reproducible, due to a lack of transparent reporting in the meta-analysis process, according to a new study published May 27, 2020 in the open-access journal PLOS ONE by Esther Maassen of Tilburg University, the Netherlands, and colleagues.

Meta-analysis is a widely used method to combine and compare quantitative data from multiple primary studies. The statistical approach used in meta-analyses can reveal whether study outcomes differ based on particular study characteristics, and help compute an overall effect size--for instance, the magnitude of a treatment effect--for the topic of interest. However, many steps of a meta-analysis involve decisions and judgements that can be arbitrary or differ by researcher.

In the new study, researchers analyzed 33 meta-analysis articles in the field of psychology. The meta-analytical studies were all published in 2011 and 2012, all had data tables with primary studies, and all included at least ten primary studies. For each meta-analysis, the team searched for the corresponding primary study articles, followed any methods detailed in the meta-analysis article, and recomputed a total of 500 effect sizes reported in the meta-analyses.

Out of 500 primary study effect sizes, the researchers were able to reproduce 276 (55%) without any problems. (In this case, reproducibility was defined as arriving at the same result after reanalyzing the same data following the reported procedures.) However, in some cases the meta-analyses did not contain enough information to reproduce the study effect size, while in others the recomputed effect size differed from the one reported. In total, 114 effect sizes (23%) showed discrepancies compared to what was reported in the meta-analytic article, and 30 of the 33 meta-analyses contained at least one effect size that could not be easily reproduced.
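The article does not specify which effect-size formulas were involved in each recomputation. As an illustration under that caveat, this sketch recomputes one common effect size, the standardized mean difference (Cohen's d), from a primary study's summary statistics and flags a discrepancy larger than rounding error; the function names, tolerance, and all numbers are hypothetical:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference between two groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def is_reproducible(reported_d, recomputed_d, tol=0.01):
    """Treat an effect size as reproduced if it matches within rounding (tol)."""
    return abs(reported_d - recomputed_d) <= tol

# Hypothetical primary study: treatment (mean 10.2, SD 2.1, n 40)
# vs. control (mean 9.0, SD 2.3, n 38), with a reported d of 0.55
d = cohens_d(10.2, 2.1, 40, 9.0, 2.3, 38)
ok = is_reproducible(reported_d=0.55, recomputed_d=d)
```

Running this check for every effect size in a meta-analysis table, as the study's authors did for 500 of them, is only possible when the article reports the summary statistics and formula used, which is exactly the transparency gap the study identifies.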

When the erroneous or unreproducible effect sizes were integrated into each meta-analysis itself, the team found that 13 of the 33 (39%) meta-analyses had discrepancies in their results, although many were negligible. The researchers recommend adding to existing guidelines for the publication of psychological meta-analyses to make them more reproducible.

The authors add: "Individual effect sizes from meta-analyses in psychology are difficult to reproduce due to inaccurate and incomplete reporting in the meta-analysis. To increase the trustworthiness of meta-analytic results, it is essential that researchers explicitly document their data handling practices and workflow, as well as publish their data and code online."

Credit: 
PLOS